Judith Donath’s previous work on trolling in Usenet groups was highlighted in a recent Wired article on disinformation.
“A troll can disrupt the discussion on a newsgroup, disseminate bad advice, and damage the feeling of trust in the newsgroup community. Furthermore, in a group that has become sensitized to trolling—where the rate of deception is high—many honestly naïve questions may be quickly rejected as trollings …” Donath wrote in 1998. “Compared to the physical world, it is relatively easy to pass as someone else online since there are relatively few identity cues ... Even more surprising is how successful such crude imitations can be.”
Elizabeth Renieris writes about why a blockchain-based identity layer for the web could have unintended consequences.
“Adding identity to the web isn’t just adding it to the web anymore. With the digital subsuming our reality, it would become an identity layer for our lives.”
The most common concern among health professionals when it comes to misinformation online is compliance with health treatments or prevention efforts, says Natalie Gyenes.
“It can lead to vaccination levels below herd immunity, harmful impacts on minors whose parents are responsible for their health care and well-being, engaging in alternative or homeopathic treatments as a primary approach and only complying with necessary medical treatments at a time where effectiveness is decreased.”
Judith Donath joins Radio Boston to discuss the major moments in technology, social media, and artificial intelligence from the last 10 years and to look beyond 2020.
Jasmine McNealy on why “context is ever more important in this era of data-driven elections, social media, and emerging technology.”
“Proper context will require that news outlets understand the political, economic, historical, and social environments of the places, people, and events that they report on.”
Julia Reda on why US academics and activists should be paying close attention to the response to the EU's new copyright rules:
“While the benefits of Article 17 for artists are questionable because the automated filters tend to privilege large rightsholders and limit the ability for authors to creatively build upon the works of others, their main economic effect is likely to be to give a boost to the content filtering technology industry — hardly what the EU lawmakers had been hoping for.”
When Ali Bongo, the president of Gabon, appeared on video to give his traditional New Year’s address last year, he looked healthy — but something about him was off. His right arm was strangely immobile, and he mumbled through parts of his speech. Some of his facial expressions seemed odd.
“It’s a total information disorder,” said Julie Owono. “People are even considering that someone else might be impersonating the president, and may be exercising the highest office of the country — which is, quite frankly, very frightening.”
A Q&A with BKC faculty associate Sasha Costanza-Chock:
“This year marks the 20th anniversary of Indymedia, a global network for open publishing and free programming that was first launched in 1999 alongside the anti-globalization, environmentalist, hacker and anarchist movement. At that time, social media networks like Facebook, Twitter, YouTube or MySpace did not exist,” says Costanza-Chock. “Indymedia was one of the few open spaces where you could publish information from within the nucleus of social movements. Then what happened? The cultural industry realized that it could make money from all this information and it stole this innovation from social movements and free programming, turning it into the social media we know today.”
The Lumen Database helped a researcher uncover widespread fraud and aided in a criminal charge.
“If only the forged court order hadn’t been sent to Google … Steve Farzam, chief operating officer of the Shore Hotel, would never have been charged with counterfeiting a Los Angeles County Superior Court seal …”
Jonas Kaiser on what journalists can learn from the late 2000s when it comes to misinformation:
“Scrambling to understand what had happened, we were looking for answers, and misinformation was the prime suspect: as flashy as it was intuitive, as paternalistic as it was elitist, and it absolved us from responsibility, giving us a clear culprit.”
Elizabeth Renieris argues that context is necessary when it comes to digital IDs.
“...we have created a privacy disaster in our digital lives because we have treated the ‘online’ or ‘digital’ space as a single monolithic context,” says Renieris. “In the same way that our approach to online consent is too simplistic, digital identity solutions that treat the ‘digital’ as one single context are doomed to fail.”
Faculty associate James Wahutu on what he hopes media organizations in the U.S. and the U.K. will do in 2020.
“This is not to suggest that media organizations in these countries have cracked the code. Instead, it’s about recognizing that there is useful knowledge about how to work under hostile regimes in African media markets.”
Leah Plunkett on how you should navigate your child's digital footprint.
“To save their childhood, youth today need us, their parents, to fight against our ‘sharenting’ habits. Our kids need us to protect their privacy and, along with it, their protected space to play so that they can make mischief, make mistakes, and grow up better for having made them.”
Every day, companies like Google remove links to online content in response to court orders, influencing the Internet search results we see. But what happens if bad actors deliberately falsify and submit court documents requesting the removal of content? Research using the Berkman Klein Center for Internet & Society’s Lumen database shows the problem is larger than previously understood.
Christopher Bavitz joined General Counsels from some of America’s health care institutions – hospitals, insurers, biotechnology companies – at Harvard Law School as part of the General Counsels Roundtable to explore pressing health policy and legal issues facing companies today.
Bavitz led the group in a discussion of the opportunities and challenges that artificial intelligence, machine learning, and algorithms present to the health care industry. The spirited conversation that followed raised questions about how to conceptualize the role of AI in health care decisions. Should AI be used as a tool or a decision-maker? Should it be viewed as a product or service, or even, in some sense, a “hire” for the system?
Baobao Zhang explains how public opinion will likely shape the regulation of three applications of AI in the US: "facial recognition technology used by law enforcement, algorithms used by social media platforms, and lethal autonomous weapons."
“In the U.S., legislation to regulate social media platforms has stalled because of the divergent policy priorities of the two parties,” says Zhang. “The techlash from the left and the right are different: Democrats prioritize the prevention of digital manipulation and consumer privacy while Republicans focus on alleged bias against conservatives.”
Mary Gray on the need for a better social contract when it comes to the rights of informal workers:
“The challenge is that unless policy makers and the public see the people doing the work, we’re not likely to say that we need a portable benefit system for them. Companies are [benefiting from] this work, so they have to pay their fair share to support the availability of people on demand.”
BKC faculty associate Danielle Allen ponders how Americans can become citizens again.
“Things were getting bad even before the 2016 election, but somehow, within just a few years, they have gotten worse. In an environment of intense partisan warfare, each side believes it has a claim to lead the nation based on its own set of values,” writes Allen. “Each side understands that it has more to gain from aggrievement than achievement, and each side beholds the other with contempt. Meanwhile, the republic seems to be unraveling.”
We are all familiar with the spinning wheels and download indicators that signify when our electronic devices are “working,” but are they making us fall for the “labour illusion”?
“When I have talked to designers about this, what they are trying to do is create an experience rather than an accurate representation of time,” says BKC faculty associate Jason Farman.
In 2016, the Russian government orchestrated vast disinformation campaigns that leveraged U.S. race relations to influence the presidential election. In the years since, how much has changed? More importantly, are we any better equipped to fight back heading into 2020?
Mutale Nkonde discusses the role race plays in election interference efforts for the City on the Hill podcast.