Mutale Nkonde joined Slate's technology podcast What Next: TBD to discuss Alphabet and inherent bias.
“You can effectively use Google products in every single area of your life, and the underlying algorithms are going to have problems of bias not because Google is a terrible company or the computer scientists are racist, it’s just the fact that they are using societal data and our data has inherent biases.”
Jessica Fjeld, lead author of the recent BKC report Principled Artificial Intelligence, warns that giving too much credence to Big Tech is like “asking the fox for guidance on henhouse security procedures.”
“AI principles are a map that should be on the table as regulators around the world draw up their next steps. However, even a perfect map doesn’t make the journey for you,” writes Fjeld. “At some point—and soon—policymakers need to set out the real-world implementations that will ensure that the power of AI technology reinforces the best, and not the worst, in humanity.”
The difference between the protections YouTube offers its advertisers and those it provides consumers is stark.
Jonas Kaiser notes that YouTube faces questions of censorship and freedom of speech when it comes to what videos are permitted on the platform. “The relationship YouTube has with advertisers is more straightforward,” he says, adding that YouTube protects itself from suffering financially by working to remove ads from harmful content.
A state lawmaker in Utah wants police to stop using consumer genealogy databases to help them find criminals.
Jasmine McNealy, faculty associate, said that law enforcement accessing personal data held by third parties is not a new legal debate. “We’ve seen this problem with banking and cell phone data for a long time,” she said. “But with DNA we immediately see the implications. It needs a higher privacy standard.”
The “smart city,” presented as an ideal, efficient, and effective way to deliver services, has captured the imaginations of policymakers, scholars, and urban dwellers. But what are the possible drawbacks of living in an environment that is constantly collecting data?
Ben Green joins Jasmine McNealy to discuss his book The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future.
Ethan Zuckerman contributed to a series of essays from the Knight First Amendment Institute called “The Tech Giants, Monopoly Power, and Public Discourse.”
“At these moments of technological shift, it’s easy to assume that the business models adopted by technological innovators are inevitable and singular. They are not.”
In a world where our data allows us to be consistently identified over time, bans on facial recognition aren't enough, writes Bruce Schneier.
“Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point,” says Schneier. “We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations — and what sorts of influence we want them to have over our lives.”
Faculty associate Woodrow Hartzog spoke to The New York Times about the harrowing consequences of facial recognition.
“We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” Hartzog said. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”
The deadly December shooting of three U.S. sailors at a Navy installation could reignite a long-simmering fight between the federal government and tech companies over data privacy and encryption.
“They’re just public shaming and asking nicely,” said Bruce Schneier. “Hurting everybody’s security for some forensic evidence is a dumb tradeoff.”
One of the trends that came into sharp focus in 2019 was, ironically, a woeful lack of clarity around AI ethics. The AI field at large was paying attention to ethics, creating and applying frameworks for AI research, development, policy, and law, but there was no unified approach. A team of researchers from BKC recently released a white paper and visualization that mapped AI principles and guidelines to find consensus.
evelyn douek joined a virtual panel of experts to weigh in on the two approaches:
“I don’t think this issue is going to be solved by platitudes about free speech or categorical statements about the difficulty of defining truth. I’m much more interested in empirically informed ideas somewhere in between.”
evelyn douek spoke with law professors Bobby Chesney and Danielle Citron about deep fakes on the Lawfare Podcast. Prompted by recently circulated, doctored videos of Speaker of the House Nancy Pelosi and presidential candidate Joe Biden, they discussed why the issue hasn't gone away, as well as the distinction between deep fakes and other, less sophisticated forms of editing.
BKC fellow Mutale Nkonde responded to Slate with her views on Alphabet:
“Unless we have strong privacy protections in place, Google can use our personal data to build advanced technological systems, which, if they are built using datasets with in-built bias, will have a discriminatory impact on traditionally marginalized groups.”
Joan Donovan participated in a panel of scholars who study media manipulation, digital resources, and the spread of misinformation on how to spot “fake news” in an age of disinformation.
“The governance of online platforms has unfolded across three eras – the era of Rights (which stretched from the early 1990s to about 2010), the era of Public Health (from 2010 through the present), and the era of Process (of which we are now seeing the first stirrings),” write John Bowers and Jonathan Zittrain. “In the era of Process, platforms, regulators, and users must transcend this stalemate between competing values frameworks, not necessarily by uprooting Rights-era cornerstones like CDA 230, but rather by working towards platform governance processes capable of building broad consensus around how policy decisions are made and implemented.”
Joan Donovan talks with Mathew Ingram of Columbia Journalism Review’s Galley forum. Donovan discusses technological determinism and why getting data from YouTube wouldn’t be enough for researchers.
“Researchers have to be more like investigative journalists to get at independent and verifiable data, which might include developing different tools entirely than hoping for a tranche of platform data,” Donovan says.
“The network has plenty of other security weaknesses, including ones the United States doesn’t want to fix since they help its own surveillance efforts,” writes Bruce Schneier.
Faculty associate Leah Plunkett joins The Jefferson Exchange to discuss the problems with parents creating a digital dossier that could follow their kids for life.
Chinmayi Arun joined the podcast Interpreting India to discuss the applications of AI and the ethical debates surrounding its use.
Ethan Zuckerman weighs in on whether concerns about “time well spent” are overblown.
“Every time new tech comes out, there’s a moral panic that this is going to melt our brains and destroy society,” says Zuckerman. “In almost every case, we sort of look back at these things and laugh.”