Berkman Center for Internet & Society at Harvard Newsfeed


Coming in from the Cold: A Safe Harbor from the CFAA and DMCA §1201

Wed, 06/13/2018 - 15:40
Teaser

The Assembly program is pleased to announce a new publication proposing a statutory safe harbor from the Computer Fraud and Abuse Act and Section 1201 of the Digital Millennium Copyright Act for security research activities, built around a carefully constructed communication protocol based on a responsible disclosure model.

Publication Date
1 Jun 2018
External Links: Download from SSRN
Authored by Daniel Etcovitch and Thyla van der Merwe

The Assembly program is pleased to announce a new publication, titled Coming in from the Cold: A Safe Harbor from the CFAA and DMCA §1201, written by Harvard Law School student Daniel Etcovitch and 2017 Assembly cohort member Thyla van der Merwe.

The paper proposes a statutory safe harbor from the Computer Fraud and Abuse Act and Section 1201 of the Digital Millennium Copyright Act for security research activities, built around a carefully constructed communication protocol based on a responsible disclosure model. The authors explore how such a safe harbor could give security researchers greater control over the vulnerability research publication timeline and guarantee them safety from legal consequences if they comply with the proposed safe harbor process.

The collaboration between Daniel and Thyla was born out of the 2017 Assembly program and the Internet & Society class co-taught by Harvard Law School Professor Jonathan Zittrain and MIT Media Lab Director Joi Ito, where they first met. As the authors describe it, they “found a common interest in legal barriers to security” during the Internet & Society course and together “began to engage with the reality that some security researchers – particularly academics – were concerned about potential legal liability under computer crime laws.”

Abstract

In our paper, we propose a statutory safe harbor from the CFAA and DMCA §1201 for security research activities. Based on a responsible disclosure model in which a researcher and vendor engage in a carefully constructed communication process and vulnerability classification system, our solution would enable security researchers to have a greater degree of control over the vulnerability research publication timeline, allowing for publication regardless of whether the vendor in question has effectuated a patch. Any researcher would be guaranteed safety from legal consequences if they comply with the proposed safe harbor process.

About the Berkman Klein Assembly

Assembly, at the Berkman Klein Center & MIT Media Lab, gathers developers, managers, and tech industry professionals for a rigorous spring term course on internet policy and a twelve-week collaborative development period to explore hard problems with running code. Each Assembly cohort comes together around a defined challenge. In 2017, the Assembly cohort focused on digital security. In 2018, the program focused on the ethics and governance of artificial intelligence. For more information, visit the program website, http://bkmla.org.
Categories: Tech-n-law-ogy

20 years of the Laws of Cyberspace

Thu, 05/17/2018 - 11:16
Subtitle
Berkman Klein event celebrates how Lawrence Lessig's groundbreaking paper provided structure to the Center's field of study
Teaser

It’s been two decades since Harvard Law School Professor Lawrence Lessig published “The Laws Of Cyberspace,” which, in the words of Professor Jonathan Zittrain, “imposed some structure over the creative chaos of what maybe was a field that we’d call cyberlaw.”


What if an architecture emerges that permits constant monitoring; an architecture that facilitates the constant tracking of behavior and movement. What if an architecture emerged that would costlessly collect data about individuals, about their behavior, about who they wanted to become. And what if the architecture could do that invisibly, without interfering with an individual’s daily life at all? … This architecture is the world that the net is becoming. This is the picture of control it is growing into. As in real space, we will have passports in cyberspace. As in real space, these passports can be used to track our behavior. But in cyberspace, unlike real space, this monitoring, this tracking, this control of behavior, will all be much less expensive. This control will occur in the background, effectively and invisibly. -Lawrence Lessig, "The Laws of Cyberspace," 1998

It’s been two decades since Harvard Law School Professor Lawrence Lessig published “The Laws Of Cyberspace,” which, in the words of Professor Jonathan Zittrain, “imposed some structure over the creative chaos of what maybe was a field that we’d call cyberlaw.” Lessig’s groundbreaking paper describes four types of constraints that together regulate behavior – law, social norms, the market, and architecture – and argues that due to its special architecture, cyberspace is different from “real” space and thus subject to new possibilities for control by governments and other centers of power. “The world we are entering is not a world where freedom is assured,” Lessig wrote in 1998, but instead, “has the potential to be the most fully, and extensively, regulated space in our history.”

On April 16, the Berkman Klein Center for Internet & Society hosted a special event commemorating the 20th anniversary of the publication of “The Laws of Cyberspace,” with Lessig, Harvard Law School Professors Ruth Okediji and Jonathan Zittrain, and Dr. Laura DeNardis of American University. The panelists reflected on the paper and on where the field of cyberlaw has taken us over the last two decades, and they considered how some of the concerns raised in 1998 might apply today.

“I was sitting on that bench outside the Lewis building,” recollected Okediji of the day 20 years ago when she first read the paper, “and I will never forget both my sense of sheer terror that we were launching something that we had no idea where it would lead us, and then this sense of skepticism: ‘Well, how does he know he’s right?’” She explained that “The Laws of Cyberspace” led to her own work thinking about internet governance, social interaction on the net and the law. “It’s been 20 years, and Larry was right,” she said.

Lessig told the audience that the paper came in part out of a feeling of frustration. He feared that many internet enthusiasts were taking for granted that the freedom the internet allowed in 1998 was the freedom it would always allow, and he wanted to make the point that the regulability of place is a function of its architecture and thus not guaranteed. Without deliberate interventions, the lack of regulation that so many cherished in the early days of the internet could slip away.

“The architecture of the internet as it originally was made it really hard to regulate, but you might imagine the technology evolving to persistently watch everything you’re doing and enable simple traceability,” he said. “All of these evolutions in the architecture increase the regulability of the space, and then we’d need to decide, ‘Do we like that? Do we want that?’”

Lessig explained that even in 1998, governments and private markets seemed to be interested in increasing regulability and the ability to track what people were doing for the purposes of commerce and control.

“Arrangements of technical architecture are arrangements of power,” explained DeNardis. “This often has nothing to do with governments whatsoever.” For example, the World Wide Web Consortium designs accessibility for disabled people into its protocols, she said, which is an example of how technical architecture determines public interest issues. DeNardis said that it is often hard for people without a technical background to be involved in decisions like these, but that there is currently a surge of people from beyond the technical sphere showing interest in participating in the decisions that shape our experience online and affect issues like identity and privacy. However, she said, this increase in public participation coincides with the proliferation of proprietary standards coming out of closed environments such as the Internet of Things and social media platforms.

Lessig added that as the space of innovation moves into “islands of innovation,” such as the large tech platforms like Google and Facebook, the generativity of innovations becomes contingent on each platform’s permission, creating the potential scenario where someone would choose not to create something for fear that the company would “pull the rug out.” This is an example of “how technical change and legal ownership work together to change the basic opportunity to innovate,” he said.

DeNardis made the point that while certain platforms might be islands in terms of interoperability, they are tied together in the backend by the third parties that collect and aggregate data about us. It’s important to look below the surface, she said. “That’s where a lot of the power is. The power to do things like censor LGBT people, the power to restrict people based on architecture-embedded intellectual property rights, and the power to monetize us through big data that’s aggregated with companies we’ve never even signed terms of service with.”

Okediji noted that there’s been little innovation in contract law when it comes to technology. “It’s not just that we’re missing the mark in the area of cyberspace. The regimes that surround cyberspace also have not received the attention they should,” she said, suggesting that the rules and norms around what makes a contract and the practice of “signing away all these rights with a click” might not be ideal.

“What troubles me quite significantly is that we have this 911 mentality when it comes to policy,” said Okediji. “Avoiding something in the future requires us to be thinking about it today, not tomorrow when the problem occurs.” Rather than dealing with problems only as they come up, she said, we need to ask ourselves, ‘What’s the vision for what cyberspace should look like 20 years from now?’

This article originally appeared in Harvard Law Today.

 

Categories: Tech-n-law-ogy

Art that Imitates Art: Computational Creativity and Creative Contracting

Thu, 05/10/2018 - 12:29
Subtitle
Jessica Fjeld and Mason Kortz, Cyberlaw Clinicians at Harvard Law
Teaser

Join us for our last Tuesday Luncheon of the academic year! Cyberlaw Clinicians Jess Fjeld and Mason Kortz will lead a discussion about copyright in AI-generated works, the need for a shared understanding of what is and isn’t up for grabs in a license, and how forward-thinking contracts can prevent AI developers and artists from having their rights decided by the (often notoriously backwards-looking) legal system.

Parent Event
Berkman Klein Luncheon Series
Event Date
May 22, 2018, 12:00 pm

Tuesday, May 22, 2018 at 12:00 pm
Berkman Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Room 1015

RSVP required to attend in person
Event will be recorded and posted here

Complimentary Lunch Served

Computational creativity—a subdomain of artificial intelligence concerned with systems that replicate or assist human creative endeavors—has been the subject of academic inquiry for decades. Now, with recent improvements in machine learning techniques and the rising popularity of all things AI, computational creativity is a medium for critically and commercially successful works of art. From a 2016 Rembrandt to Jukedeck’s instant music (or muzak?), AI-assisted and AI-driven works are a reality. This raises mind-bending questions about the nature of creativity, the relationship between the artist and the viewer, and even the existence of free will. For many lawyers, it also raises a more immediate question: who owns all of this art?

Join Cyberlaw Clinicians Jess Fjeld and Mason Kortz for a discussion about copyright in AI-generated works, the need for a shared understanding of what is and isn’t up for grabs in a license, and how forward-thinking contracts can prevent AI developers and artists from having their rights decided by the (often notoriously backwards-looking) legal system.

About Jessica

Jessica Fjeld is a Clinical Instructor at Harvard Law School's Cyberlaw Clinic. She works in diverse areas including intellectual property, media and entertainment (particularly public media), freedom of expression, and law and policy relating to government and nonprofit entities. Before joining the Clinic, Jessica worked in Business & Legal Affairs for WGBH Educational Foundation, and as an associate at Skadden, Arps, Slate, Meagher & Flom LLP focused in corporate transactions. She received a JD from Columbia Law School, where she was a James Kent Scholar and Managing Editor of the Journal of Law and the Arts; an MFA in Poetry from the University of Massachusetts; and a BA from Columbia University.

About Mason

Mason Kortz is a clinical instructional fellow at the Harvard Law School Cyberlaw Clinic, part of the Berkman Klein Center for Internet & Society. His areas of interest include online speech and privacy and the use of data products (big or small) to advance social justice. Mason has worked as a data manager for the Scripps Institution of Oceanography, a legal fellow in the Technology for Liberty Project at the American Civil Liberties Union of Massachusetts, and a clerk in the District of Massachusetts. He has a JD from Harvard Law School and a BA in Computer Science and Philosophy from Dartmouth College. In his spare time, he enjoys cooking, reading, and game design.

 

Categories: Tech-n-law-ogy

Shaping Consumption: How Social Network Manipulation Tactics Are Impacting Amazon and Influencing Consumers

Wed, 05/09/2018 - 10:01
Subtitle
featuring Renee DiResta
Teaser

This talk examines the ways that these same manipulative tactics are being deployed on Amazon, which is now the dominant product search engine and a battlefield for economically and ideologically motivated actors.

Parent Event
Berkman Klein Luncheon Series
Event Date
May 15, 2018, 12:00 pm

Tuesday, May 15, 2018 at 12:00 pm
Berkman Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Room 1015

RSVP required to attend in person
Event will be live webcast at 12:00 pm

Narrative manipulation issues - such as manufactured consensus, brigading, harassment, information laundering, fake accounts, news voids, and more - are increasingly well-documented problems affecting the entire social ecosystem. This has had negative consequences for information integrity, and for trust. This talk examines the ways that these same manipulative tactics are being deployed on Amazon, which is now the dominant product search engine and a battlefield for economically and ideologically motivated actors.

About Renee

Renee DiResta is the Director of Research at New Knowledge, and Head of Policy at the nonprofit Data for Democracy. Renee investigates the spread of disinformation and manipulated narratives across social networks, and assists policymakers in understanding and responding to the problem. She has advised Congress, the State Department, and other academic, civic, and business organizations about understanding and responding to computational propaganda and information operations. In 2017, Renee was named a Presidential Leadership Scholar, and had the opportunity to continue her work with the support of the Bush and Clinton presidential centers and the LBJ Foundation. In 2018, she received a Mozilla Foundation fellowship and an affiliation with the Berkman Klein Center for Internet & Society at Harvard University to work on their Media, Misinformation, and Trust project. She is a Founding Advisor to the Center for Humane Technology, and a Staff Associate at the Columbia University Data Science Institute.

Previously, Renee was part of the founding team of venture-backed supply chain logistics technology platform Haven, where she ran business development and marketing, and a co-founder of Vaccinate California, a parent-led grassroots legislative advocacy group. Renee has also been an investor at O’Reilly AlphaTech Ventures (OATV), focused on hardware and logistics startups, and an emerging markets derivatives trader at Jane Street Capital. Her work and writing have been featured in the New York Times, Politico, Slate, Wired, Fast Company, Inc., and the Economist. She is the author of the O’Reilly book “The Hardware Startup: Building Your Product, Business, and Brand”, and lives on the web at http://reneediresta.com and @noUpside.

 

Categories: Tech-n-law-ogy

GAiA releases its annual report highlighting its efforts to increase access to medicines for the world’s neediest

Tue, 05/08/2018 - 20:38
Teaser

Global Access in Action (GAiA) launched its annual report today, highlighting the major progress made in 2017 to expand access to medicines for the world’s neediest.

Cambridge, May 8, 2018 - Global Access in Action (GAiA) launched its annual report today, highlighting the major progress made in 2017 to expand access to medicines for the world’s neediest.

2017 marked a year of significant progress in GAiA’s effort to improve access to medicines for vulnerable populations. The annual report showcases the major projects undertaken by GAiA in 2017, its active engagement with various local and global stakeholders, and its organizational expansion in terms of staffing.

One of the major projects undertaken by GAiA in 2017 was the expansion of a pilot project that aims to develop a public-health-sensitive legal framework allowing for a sustainable supply of low-cost medicines while providing the legal protections necessary to incentivize innovation by pharmaceutical companies. The project started in 2016 in Namibia and expanded to two other countries in sub-Saharan Africa, Malawi and Mozambique. The initiative also involved collaboration with Global Good to fight substandard and falsified (S&F) medicines in sub-Saharan Africa using a field detection technology: a miniature spectrometer.

While access to medicines is itself at stake, the problem of S&F medicines can exacerbate the existing access challenge. In the introductory letter of the annual report, GAiA’s Co-Directors, William Fisher and Quentin Palfrey, stressed that, “Even those who have access are at risk of consuming counterfeit medicines in many countries that often lead to lethal consequences.” GAiA is working to establish a quality assurance network among the countries involved in the pilot project to allow for data sharing on S&F medical products.

Along with the expansion of the pilot project, GAiA also published a green paper, “Expanding Access to Medicines and Promoting Innovation: A Practical Approach,” in the April edition of the Georgetown Journal on Poverty Law and Policy, exploring practical strategies initiated by pharmaceutical companies to solve access barriers in low- and middle-income countries.

Click here to read more about the annual report.

About Global Access in Action
Global Access in Action, a project of the Berkman Klein Center for Internet & Society at Harvard University, seeks to expand access to lifesaving medicines and combat the communicable disease burden that disproportionately harms the world’s most vulnerable populations. We accomplish this by conducting action-oriented research, supporting breakthrough initiatives, facilitating stakeholder dialogue, and providing policy advice to both public and private sector stakeholders. GAiA seeks to foster dialogue across traditional boundaries between government, industry, civil society, and academia, and to promote new, innovative solutions amongst these parties to create better outcomes.

Categories: Tech-n-law-ogy

Your Guide to BKC@RightsCon 2018

Tue, 05/08/2018 - 11:11
Teaser

Going to RightsCon in Toronto? Connect with members of the Berkman Klein community, and learn about their research


 

Wednesday, May 16th, 2018

Is This a New Face of Info War? "Patriotic" Trolling and Disinformation -- the Iran Edition
Simin Kargar
Details: Wednesday, May 16th, 2018; 10:30-11:45am – 205A
Online harassment and smear campaigns are increasingly applied as a form of information control to curb free speech and exert power in cyberspace. Targeted harassment of dissidents on social media appears as the most recent form of strategic communication, where particular messages are crafted by state-affiliated actors to manipulate public opinion. This session addresses the circumstances under which these coordinated efforts are likely to emerge, the latest practices of Iran to extend its ideological arms across social media, and the ultimate goals that they pursue.

Young, Safe, and Free: Respecting Children's Online Privacy and Freedom of Expression
Patrick Geary, Sarah Jacobstain, Jasmina Byrne, Fred Carter, Sandra Cortesi, Ariel Fox, Patrik Hiselius, Natasha Jackson
Details: Wednesday, May 16th, 2018; 12:00-1:15pm – 206C
This is a chance to talk about practical steps that companies and public authorities can take to protect and empower children online. Companies and Data Protection Authorities will share how they consider risks to children's privacy online while still providing children with full, open and enriching online experiences. Civil society organizations will highlight the work that remains to be done, and academic researchers will ground this in evidence about how children exercise their rights to privacy and freedom of expression online.

Online Criticism, Falsified Court Orders & the Role of Intermediaries: Coping With Takedown Requests of Questionable Legitimacy
Adam Holland, Daphne Keller, Eugene Volokh
Details:
Wednesday, May 16th, 2018; 2:30-3:45pm – 204B
Lumen is a research project devoted to collecting and analyzing requests to remove online materials. Recently, researchers and advocates, including Professor Eugene Volokh, have uncovered an alarming pattern of falsified court orders used to seek and often achieve the removal of online material. The Lumen team will open the workshop with a brief introduction to Lumen and to the site’s API. Once the attendees are familiar with Lumen, they will facilitate a discussion about the implications of falsified court orders within the takedown request landscape.

New Tools for Visualizing Communities, Projects, and Resources: Inspiring Engagement and Exploration
Sandra Cortesi
Details: 
Wednesday, May 16th, 2018; 2:30-3:45pm – 200A
In this tech demo, we will present interactive tools developed at the Berkman Klein Center for Internet & Society at Harvard University to visualize communities, projects, and resources.

Language Access and Humanitarian Response: A Matter of Human Rights
An Xiao Mina, Olly Farshi, Natasha Jimenez
Details: 
Wednesday, May 16th, 2018; 5:15-6:15pm – 205C
The world is seeing an unprecedented scale of migration due to conflict and climate-related natural disasters. People from different linguistic backgrounds are coming together in a number of humanitarian contexts, such as rapid response work and support in refugee sites. Without the ability to communicate effectively, both aid workers and beneficiaries stand to lose significantly. In this panel, members of Meedan and Outside will share their experiences in the field in dialogue with others who are looking at issues of language barriers in humanitarian work.

Teaching AI to Explain Itself
Suchana Seth
Details: Wednesday, May 16th, 2018; 5:15-6:15pm – 205A
A growing body of artificial intelligence algorithms are NOT black-box - they can explain their decision mechanisms. What do "good" explanations look like in the world of accountable algorithms - from the perspective of users, consumers, and regulators of AI? How do we set realistic expectations about explainable or interpretable machine learning algorithms?

Scrutinizing the Little Brothers: Corporate Surveillance and the Roles of the Citizen, Consumer, and Company
Katie McInnis, David O’Brien, Christopher Parsons
Details: 
Wednesday, May 16th, 2018; 5:15-6:15pm – 203B
In this session, we will bring together panelists from the University of Toronto’s Citizen Lab, the Berkman Klein Center at Harvard University, and Consumer Reports, each of whom is addressing issues of corporate surveillance and accountability. Panelists will share overviews of their organizations’ goals, the challenges their programs face, and the changes they hope their projects will effectuate. We will present three different perspectives: the consumer, the citizen, and the company. All three projects are responses to pervasive corporate surveillance and aim to lessen the imbalance between corporations and individuals.

 

Thursday, May 17th, 2018

Data Driven Decency: New, Collaborative Experiments to Diminish Hate and Harassment Online
Rob Faris, Susan Benesch
Details: 
Thursday, May 17th, 2018; 9:00-10:15am – 205C
In this session we will report on - and brainstorm new possibilities for - experimental methods for diminishing harassment and hate speech online. The speakers will describe the first academic research experiment with an Internet platform that committed in advance to sharing data and allowing publication in a peer-reviewed journal. Participants will be asked to share best practices from their own experiences with collaborative online research. In closing, the moderator will ask for ideas to continue research experiments that aim to diminish hate speech online. Afterward, we will circulate the newly generated ideas, and invite continued collaboration for their implementation.

Secure UX Principles: Let's Build a Checklist of User Security and Good Design
a panel moderated by An Xiao Mina
Details: 
Thursday, May 17th, 2018; 10:30-11:45 – 201C
We present a research and design checklist for people who are developing technologies to help communities at risk. This checklist is designed to promote human rights-centered design by streamlining the process of user research. We believe this resource will aid builders of tools, platforms, and services with limited resources and time. 

Mind the Shark: Informational Flow in Natural Disasters, from Fake News to Rumors
An Xiao Mina, Olly Farshi, Natasha Jimenez, Antonio Martinez
Details: 
Thursday, May 17th, 2018; 12:00-1:15pm – 200B
While misinformation has risen to the top of the agenda in journalism, its impact on humanitarian workers has yet to be fully discussed. Misinformation during natural and human disasters is a consistent theme, causing confusion and leading people to miss access to critical resources - whether that’s the frequent false threat of sharks during hurricanes or confusion about where ICE is detaining people fleeing a disaster site. What are the challenges and opportunities in this space? How can we design solutions that address them? This conversation will look at specific cases of addressing misinformation after disasters, when rapid responders may not even have access to the most current accurate information.

Cross-Harm Collaboration: Building Strategic Responses to Risks and Harms Online
Nikki Bourassa, Chloe Colliver, Henry Tuck
Details: 
Thursday, May 17th, 2018; 1:20-2:20pm – 206D
Recent revelations linking the use of disinformation, fake accounts, and hate speech to sway elections, coupled with the rise of harm from cyber-bullying, coordinated online harassment, misogyny and child sexual exploitation, demonstrate the range of threats facing internet users. Tech companies are asked to tackle these issues, but often by a huge range of uncoordinated voices. In this workshop, ISD and the Berkman Klein Center will discuss the inefficiency of current silos in online harm prevention work, foster cross-sector collaboration on research and projects, and create actionable suggestions for ways to make collaboration successful and useful for CSOs and technology companies.

Translation Project: A Translation Suite for Humanitarian Organizations
An Xiao Mina, Olly Farshi, and Natasha Jimenez,
Details: 
Thursday, May 17th, 2018; 2:30-3:45pm – 200A
As the global population of forcibly displaced people reaches record levels, the language barrier between refugees and those seeking to help them remains among the first challenges in serving their immediate relief needs. The Translation Project seeks to prototype and develop open-source technology and a community of translators to address this pressing need in a way that is scalable and sustainable.

Reframed! Media Analysis for Digital Inclusion
Belen Febres-Cordero, Nikki Bourassa, Natalie Gyenes
Details: 
Thursday, May 17th, 2018; 4:00-5:00pm – 200B
Access to media analysis tools has generally been limited to academic researchers and industry communications or media professionals. In the absence of tools accessible to community groups or advocacy organizations, there are limited opportunities for more marginalized or vulnerable communities to gather evidence-driven knowledge regarding how their own issues are covered in the media. Global Voices, in partnership with Media Cloud, is piloting an initiative that democratizes access to media analysis tools, bringing them to vulnerable populations so that they can understand, and possibly direct, their own representation in the media.

 

Friday, May 18th, 2018

Countering Media Manipulation: Linking Research and Action
Robert Faris, Joan Donovan, An Xiao Mina, Claire Wardle
Details: 
Friday, May 18th, 2018; 9:00-10:15am – 206D
Although widespread propaganda and disinformation is not a new phenomenon, its occurrence within today’s online networked environments has wrought new challenges for democracy. A mix of legitimate political entities and malicious actors have exploited and leveraged vulnerabilities in platform architectures to surreptitiously insert false news narratives into unwitting media environments. Worse, these campaigns are often coordinated to take advantage of platform algorithms and muddy the difference between genuine and false. Plentiful opportunities remain to foster greater collaboration within the research community and between researchers, journalists, and media watchdogs. In this workshop, we will identify and put into place better mechanisms to coordinate research efforts and to link researchers with practitioners.

Internet Monitor: Real-time Internet censorship research and visualization tools demo
Casey Tilton
Details: Friday, May 18th, 2018, 12:00-1:15pm
Interested in learning more about the technology behind real-time Internet censorship research and contributing to the Internet Monitor project? In this session, researchers from the Berkman Klein Center for Internet & Society at Harvard University will demo two tools developed by the Internet Monitor project. First up is the Internet Monitor Dashboard, a tool that compiles and visualizes data about Internet activity and content controls in over 100 countries. Next up is AccessCheck, a tool that lets users test in real time the availability of websites in countries around the world. Test results include a thumbs up/down notification indicating whether the website is available, as well as a screenshot and more detailed data on status codes, timings, and any errors encountered. In addition to testing single URLs, AccessCheck allows users to test the availability of lists of country-specific websites that have been created by experts in the censorship practices of governments around the world.
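The kind of availability check described above can be sketched in miniature. The snippet below is a hypothetical stand-alone illustration, not AccessCheck's actual code or API (the function name and result fields are invented for this sketch): it fetches a URL and reports the same categories of data the demo mentions, such as a status code, timing, and any error encountered.

```python
# Hypothetical sketch of a single-URL availability check,
# reporting status code, elapsed time, and any error.
import time
import urllib.request
import urllib.error

def check_availability(url, timeout=10):
    """Fetch a URL and report availability, status code, timing, and error."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"available": True, "status": resp.status,
                    "seconds": round(time.monotonic() - start, 3), "error": None}
    except urllib.error.HTTPError as e:
        # Server responded, but with an error status (e.g. 403, 404).
        return {"available": False, "status": e.code,
                "seconds": round(time.monotonic() - start, 3), "error": str(e)}
    except (urllib.error.URLError, OSError) as e:
        # No response at all: DNS failure, refused connection, timeout, etc.
        return {"available": False, "status": None,
                "seconds": round(time.monotonic() - start, 3), "error": str(e)}
```

A real censorship-measurement tool would additionally run such checks from vantage points inside the countries being studied and capture screenshots; this sketch only shows the core request/response bookkeeping.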

Have We Entered a Brave New World of Global Content Takedown Orders?
Vidushi Marda, Jennifer Daskal, Daphne Keller, Vivek Krishnamurthy, Stefania Milan, Jonathon Penney
Details: 
Friday, May 18th, 2018; 4:00-5:00pm – 206C
From the Supreme Court of Canada's Equustek decision to Germany's "NetzDG" law, concerns of a "race to the bottom" are mounting as every country seeks to enforce its national preferences on the global internet. Now that the brave new world of global content regulation is here, what do we do about it? When is it legitimate for a government to enforce its preferences on a global rather than a national basis? And where do private forms of governance, like algorithmic curation on and by social media platforms, fit into this picture? Join our panel of experts from North America, Europe, and South Asia for an update on some of the biggest recent developments in this area and a wide-ranging discussion of how all those who care about the open, global internet should best respond to these trends.

Artificial Intelligence: Governance and Inclusion
Eduardo Magrani, Chinmayi Arun, Amar Ashar, Christian Djefal, Malavika Jayaram
Details: 
Friday, May 18th, 2018; 5:15-6:15pm – 201B
Even though the developing world will be directly affected by the deployment of AI technologies and services, policy debates about AI have been dominated by organizations and actors in the Global North. As a follow-up to the international event “Artificial Intelligence and Inclusion” held in Rio de Janeiro earlier this year, this discussion will focus on the development of AI and its impact on inclusion in areas such as health and wellbeing, education, low-resource communities, public safety, and employment. The goal of this roundtable is to bring these debates to the RightsCon community, broadening the conversation and deepening the understanding of the challenges, governance, and opportunities of AI inclusion, and to identify and discuss areas for research, education, and action.

Categories: Tech-n-law-ogy

Governance and Regulation in the land of Crypto-Securities (as told by CryptoKitties)

Mon, 04/30/2018 - 11:41
Subtitle: featuring founding members Dieter Shirley and Alex Shih Teaser

Join founding members of the CryptoKitties team, Dieter Shirley and Alex Shih, as they discuss the unique governance, legal, and regulatory challenges of putting cats on the Ethereum blockchain.

Parent Event Berkman Klein Luncheon Series Event Date May 8 2018 12:00pm to May 8 2018 12:00pm Thumbnail Image: 

Tuesday, May 8, 2018 at 12:00 pm
Harvard Law School campus
Wasserstein Hall, Milstein East C
Room 2036, Second Floor
RSVP required to attend in person
Event will be live webcast at 12:00 pm

Join founding members of the CryptoKitties team, Dieter Shirley and Alex Shih, as they discuss the unique governance, legal, and regulatory challenges of putting cats on the Ethereum blockchain. CryptoKitties is an early pioneer in the space and, having navigated securities law early in its release, will share unique insights on securities classifications. They will also discuss some of the ethical challenges they've been facing and best practices for approaching them.

 

About Dieter

Dieter is a partner and chief technical architect at Axiom Zen, an award-winning venture studio specialized in applying emerging technologies to unsolved business problems. Products developed by Axiom Zen have touched 200+ million consumers and are used by the world’s leading companies, including Facebook, Microsoft, and NASA, as well as by eminent academic institutions and government organizations.

Dieter is an original participant in the world of cryptocurrency, mining his first Bitcoin on his home computer in 2010. Since then he has served as a technical architect on a series of advanced blockchain projects, including as co-founder of CryptoKitties, the most successful collectibles game on the blockchain. Dieter is also the founding CTO of Cornerstone, a real estate transaction platform being developed in partnership with Ross McCredie, founder and former CEO of Sotheby’s Canada, and Dave Carson, former COO at Sotheby’s Global.

Axiom Zen was named first among Canada’s Most Innovative Companies by Canadian Business. They pride themselves on diversity of talent: a team of ~80 creatives includes published authors, over a dozen former founders, diversity from 20+ national origins, and decades of collective experience at startups and Fortune 500s alike.

Axiom Zen is the team behind ZenHub, the world’s leading collaboration solution for technical teams using GitHub; and the developer of Timeline, named Apple’s Best App of the month, Editor’s Choice in 10 countries, and Best New App in 88 countries. Axiom Zen is the creator of Toby, recognized as Top Chrome Extension of the year by both Google and Product Hunt, and the parent company of Hammer & Tusk, a leader in the world of immersive experiences (AR/VR). Axiom Zen's work has been featured in TIME Magazine, The New York Times, and Fast Company.

About Alex

Alex Shih is General Partner and Chief Financial Officer (CFO) at Axiom Zen, an award-winning venture studio specialized in applying emerging technologies to unsolved business problems, including the team behind CryptoKitties, the world’s most successful blockchain application.

Prior to joining Axiom, Alex executed investment strategies across the capital structure in both public and private markets in roles with KKR and Highfields Capital. Alex holds a B.S. / M.S. in Management Science & Engineering from Stanford University.



Categories: Tech-n-law-ogy