Mutale Nkonde spoke to The Wall Street Journal about how landlords are using artificial intelligence to vet prospective renters. Nkonde says she sees “red flags” in the algorithms, but “because the algorithms are protected by intellectual property laws, we have no way of scrutinizing them.”
Ryan Budish discusses a recent report from the National Security Commission on Artificial Intelligence with MuckRock. Budish compares the NSCAI’s guiding principles to other high-level principles for AI:
“The real question now is how do we move from these specific principles to actual actionable steps that organizations, regardless of whether they’re in the public or private sector, can follow,” Budish says. “There’s a big gap between a high level principle about respecting human rights to actually making difficult tradeoffs when designing a system.”
Mako Hill uses high-performance computing to understand how online communities work — and just maybe how to keep them from sliding into oligarchy. His work was featured by the University of Washington, where he is an assistant professor in the Department of Communication.
“These online groups produce tons of data,” Hill said. “I knew I could make the data speak and answer some of these questions.”
BKC fellow Mutale Nkonde was one of four women featured in Essence for her work on biased algorithms.
“We’re no longer having to march,” says Nkonde. “We have to be unplugging things and making sure tech is optimized for justice.”
Social-media giants can’t decide how far is too far, but a panel of regular people can, argues Jonathan Zittrain in an op-ed for The Atlantic.
“But far more than its own version of the Supreme Court, Facebook needs a way to tap into the everyday common sense of regular people. Even Facebook does not trust Facebook to decide unilaterally which ads are false and misleading. So if the ads are to be weighed at all, someone else has to render judgment.”
Apryl Williams sheds light on online dating for an article in The Boston Globe.
“In previous times, you were able to say, go out to dinner, and you wouldn’t have to worry about seeing your boss and maybe your school teacher all in the same space,” said Williams. “Whereas Facebook and Twitter and all of our other social media create a space where our social lives are converging in one space. And I think because people are particularly sensitive about dating, that’s one area of context collapse that they don’t want to merge.”
BKC affiliate Joan Donovan and Assembly alum Elizabeth Dubois share their thoughts on Twitter’s forthcoming political ad policy and the potential impact it may have on other platforms.
“While these companies tend to move in a flock, we will see some very significant differences in their approaches to governing speech, based on the difference in scale of advertising across these companies,” Donovan said. “For example, Facebook serves many more ads than Twitter, and Facebook does a better sell of direct marketing than Twitter. Google and YouTube’s ad services don’t work in the same way and therefore will not be as publicly pressured to change their policies right now.”
Mutale Nkonde and Jessie Daniels suggest tech and AI companies embrace racial literacy to address the tech industry’s lack of diversity.
“Many of the barriers that came up in the interviews, and even anecdotally in our lives, is that people don’t want to acknowledge race. They want to pretend that it doesn’t matter and that everybody is the same, and what that actually does is reinforce racist patterns and behavior,” Nkonde said. “It would mean companies have to be clear about their values, instead of trying to be all things to all people by avoiding an articulation of their values.”
As implants grow more common, experts fear surveillance and exploitation of workers. Urs Gasser and Ifeoma Ajunwa share their concerns with The Guardian.
“Seeing employees get implanted at the workplace made people question what it means to be an employee,” Gasser said. “Are you a person being paid for your work, or are you the property of the company you work for?”
In her new book "When Should Law Forgive?", Martha Minow explores the complicated intersection of the law, justice, and forgiveness. Minow discussed the book at a recent event, where she described the goal of her book as exploring how, in especially polarized times, forgiveness can be an integral aspect of achieving justice.
“We are living in an age of resentment,” she declared. “We are living in an age of justified resentments. But if we all continue down our roads with our justified resentments, then we will have our vengeance and we will have the opposite of justice.”
Connecting the (Far-)Right Dots: A Topic Modeling and Hyperlink Analysis of (Far-)Right Media Coverage during the US Elections 2016
“The 2016 US election and the victory of Donald Trump are closely connected to a perceived rise of the far-right in the United States. We build upon public sphere and alternative media theory to discuss the relevance of alternative media for the US (far-)right and whether the election period and the candidate Trump allowed far-right alternative media to establish themselves in the (far-) right networked public sphere. We investigate whether it has come to a convergence of topics between the right and the extreme far-right. We analyze the topics nine right-wing outlets, ranging from Fox News to the Neo-Nazi Daily Stormer, covered in 2015/2016 during the US presidential election. We show through topic modeling of 21,919 articles how Breitbart established itself as a media outlet between the extreme far-right and mainstream right by both covering more extreme and more classic conservative topics. We show through time series clustering how Breitbart and Fox News converged in their coverage of Islam and immigration. Finally, we show through hyperlink analysis that the connection between the far-right and the mainstream right is mostly one-sided; while the alternative outlets link to more established ones, the established outlets mostly ignore the outlets from the far-right.”
Mutale Nkonde outlines some training data issues underlying AI bias and suggests that CSR departments would do well to respond.
“There’s an opportunity here for businesses that want a first-mover advantage in differentiating themselves in the marketplace by using fair and accurate AI,” Nkonde writes.
Martha Minow writes about the power – and nuance – of forgiveness for The Boston Globe.
“To ask how laws may forgive is not to deny the fact of wrongdoing,” Minow writes. “Rather, it is to widen the lens to understand larger patterns at work and visualize a more constructive path forward for all. When it comes to the justice system, saying — and hearing — ‘I forgive you’ may prove to be the most just thing we can do.”
The internet turned 50 this week, but what comes next could be even more revolutionary, according to an article from Popular Mechanics.
Judith Donath says that in the future, “Strangers will be identified, with increasingly detailed information about them presented. People will subscribe to different augmentations, much as we now subscribe to magazines.”
Danielle Citron reflects on her career, current events, and the future in a Q&A with New York Magazine.
“What has been so gratifying in the past 12 years is not only convincing companies to take [content moderation] seriously and working with the safety folks at Facebook and Twitter and Microsoft, but also working really closely with law enforcers and lawmakers,” Citron says.
Jonathan Zittrain and evelyn douek weigh in on the power of platforms like Facebook for Columbia Journalism Review.
Zittrain said the political ad fact-checking controversy is about more than just a difficult product feature. “Evaluating ads for truth is not a mere customer service issue that’s solvable by hiring more generic content staffers,” he said. “The real issue is that a single company controls far too much speech of a particular kind, and thus has too much power.”
Leah Plunkett discusses her new book on the dangers of “sharenting” with WNYC. Plunkett warns of sharing photos and information about kids online and with pregnancy tracking apps.
“We know that data brokers out there right now are scooping up vast amounts of data about all of us, including our kids. And the ability to build a digital data dossier for kids that goes back to that question of ‘when did you first become a gleam in your parents’ eye?’ is a really powerful tracking device that data brokers and other companies can use to try to make predictions about our children’s futures, in addition, of course, to targeting them with advertising and marketing.”
Ifeoma Ajunwa joins WBUR to discuss the use of AI in hiring processes.
“I think it’s also quite misguided to think that using an automated system means that human bias has been fully removed. We still have to remember that humans are creating these algorithms—these automated systems—and unless we actually know that the systems have been created very cautiously with the idea of removing human bias, then we can’t really blindly rely on the fact that it’s automated as a sign that it’s objective or free of bias.”
Meeri Haataja delineates motivations for companies to address AI, including finances, reputation, compliance, and contractual requirements.
“More influential AI, more visible problems and more data shared between organizations outline an interesting reality, where companies ask for standards and regulation to bring some much-needed clarity and predictability into the risky business of algorithms,” Haataja writes.