We are all familiar with the spinning wheels and download indicators that signify when our electronic devices are “working”, but are they making us fall for the “labour illusion”?
“When I have talked to designers about this, what they are trying to do is create an experience rather than an accurate representation of time,” says BKC faculty associate Jason Farman.
In 2016, the Russian government orchestrated vast disinformation campaigns that leveraged U.S. race relations to influence the presidential election. In the years since, how much has changed? More importantly, are we any better equipped to fight back heading into 2020?
Mutale Nkonde discusses the role race plays in election interference efforts for the City on the Hill podcast.
Ethan Zuckerman imagines a different kind of Internet—one propelled by public concerns, not the interests of venture-backed startups and large corporations.
“A public service Web invites us to imagine services that don’t exist now, because they are not commercially viable, but perhaps should exist for our benefit, for the benefit of citizens in a democracy,” wrote Zuckerman. “We’ve seen a wave of innovation around tools that entertain us and capture our attention for resale to advertisers, but much less innovation around tools that educate us and challenge us to broaden our sphere of exposure, or that amplify marginalized voices. Digital public service media would fill a black hole of misinformation with educational material and legitimate news.”
Jessie Daniels spoke with Kim Crayton on the #CauseAScene podcast.
“I’m actually working on a book right now…I’m calling it ‘From Barbecue Beckys to Pink Pussy Hats’ – calling out white women and white feminists, because we white women have got some work to do,” Daniels says.
As Portland, Oregon, and other cities ban facial recognition technology, Mutale Nkonde warns of companies using biometric data.
“The capturing and encoding of our biometric data is going to probably be the new frontier in creating value for companies in terms of AI,” says Nkonde.
David O’Brien spoke with Gizmodo about password sharing on services like Disney+ or Netflix.
“I think it’s generally not a great idea because you’re giving your credentials to someone else who might not have your best interest in mind when it comes to your security, so it does raise the possibility that those credentials could be lost in some way. You never know how they could be used,” O’Brien said. “We’ve seen plenty of attacks in the past that indicate it’s often possible using multiple accounts from different services to triangulate in and get access to something you really care about.”
Julie Owono discussed the global trend of Internet shutdowns and how they have evolved in recent years, starting with the recent case of Iran, with RTS francophone. Owono also explained the link between Internet shutdowns and disinformation and hate speech, specifically how the latter are used to justify network disruptions in Africa and South Asia.
Read more (in French)
In Wired, Urs Gasser makes the case for including young people in our conversations about artificial intelligence.
“Young people have a right to participate as we make critical choices that will determine what kind of technological world we leave for them and future generations. They also have unique perspectives to contribute as the first generation to grow up surrounded by AI shaping their education, health, social lives, leisure, and career prospects.”
BKC researchers argue the dangers of deepfakes are overblown, but they will still require journalists to give thought to how they handle unconfirmed information.
“By making it more expensive for newsrooms to do good forensics work at the breakneck pace of the news cycle — and opening the door for those less principled — deepfakes might slip through a loophole in journalistic ethics,” write John Bowers, Tim Hwang, and Jonathan Zittrain. “Even if their persuasive power doesn’t far outstrip that of conventional formats for disinformation, the difficulty of quickly and conclusively debunking deepfakes (or verifying legitimate content) may bog down the traditional media institutions that many of us still appropriately rely on as counterweights against viralized disinformation.”
Law and documentary film may seem far apart, but they actually share many connections, Martha Minow says in The Boston Globe.
“Documentary filmmakers confront legal questions about privacy, secrecy, access to public and private spaces, and ownership of images and other materials,” Minow writes. “Lawyers increasingly use video interviews and computer-generated graphics in hearings and negotiations. Mass media culture informs views of legal decision-makers and everyone’s pictures of courts and law.”
In the wake of the tumultuous launch of Disney+, David O’Brien talks password security with Gizmodo.
“People very commonly reuse passwords between sites because it’s convenient,” O’Brien said. “The reason there is, of course, it’s hard to memorize long passwords to begin with, and it’s hard to memorize a long list of long passwords. So people often take the shortcut of just using the same password between sites and they might not know when it’s been compromised or not.”
Amid concerns about disinformation and the role of radio and cable TV in amplifying “fake news,” Yochai Benkler says there is scarce evidence that targeted political ads have much impact.
“There’s little evidence that targeted ads have the power to change minds or votes, says Harvard law professor Yochai Benkler, co-author of the book ‘Network Propaganda.’ Belief in targeted ads in general is more faith-based than evidence-based, he says. Advertisers assume the targeting causes people to buy things — though this is far from proven,” the article says.
Ariel Herbert-Voss breaks down what it means to attack machine learning systems in a talk at DEFCON.
Martha Minow spoke with Vox about the possibilities of restorative justice.
“Above all, we have to ensure that the benefits of a more forgiving system extend to everyone and not simply to the most powerful forces in the country. If we can do that, the country as a whole will be better,” Minow said.
Jonathan Zittrain and John Bowers spoke with The Wall Street Journal about Google’s search algorithm.
“Building a service like this means making tens of thousands of really, really complicated human decisions, and that’s not what people think,” Bowers said.
Mutale Nkonde spoke to The Wall Street Journal about how landlords are using artificial intelligence to vet prospective renters. Nkonde says she sees “red flags” in the algorithms, but “because the algorithms are protected by intellectual property laws, we have no way of scrutinizing them.”
Ryan Budish discusses a recent report from the National Security Commission on Artificial Intelligence with MuckRock. Budish compares the NSCAI’s guiding principles to other high-level principles for AI:
“The real question now is how do we move from these specific principles to actual actionable steps that organizations, regardless of whether they’re in the public or private sector, can follow,” Budish says. “There’s a big gap between a high-level principle about respecting human rights to actually making difficult tradeoffs when designing a system.”
Mako Hill uses high-performance computing to understand how online communities work — and just maybe how to keep them from sliding into oligarchy. His work was featured by the University of Washington, where he is an assistant professor in the Department of Communication.
“These online groups produce tons of data,” Hill said. “I knew I could make the data speak and answer some of these questions.”
BKC fellow Mutale Nkonde was one of four women featured in Essence for her work on biased algorithms.
“We’re no longer having to march,” says Nkonde. “We have to be unplugging things and making sure tech is optimized for justice.”
Social-media giants can’t decide how far is too far, but a panel of regular people can, argues Jonathan Zittrain in an op-ed for The Atlantic.
“But far more than its own version of the Supreme Court, Facebook needs a way to tap into the everyday common sense of regular people. Even Facebook does not trust Facebook to decide unilaterally which ads are false and misleading. So if the ads are to be weighed at all, someone else has to render judgment.”