The Law and Ethics of Digital Piracy: Evidence from Harvard Law School Graduates

Featuring Dariusz Jemielniak and Jérôme Hergueux

When do Harvard law students perceive digital file sharing (and piracy) as acceptable?

Part of the Berkman Klein Luncheon Series · May 1, 2018, 12:00 pm

Tuesday, May 1, 2018 at 12:00 pm
Harvard Law School campus
Wasserstein Hall, Milstein West B
Room 2019, Second Floor
RSVP required to attend in person
Event will be live webcast at 12:00 pm

Harvard Law School is one of the top law schools in the world and educates the intellectual and financial elites. Lawyers are held to the highest professional and ethical standards. And yet, when it comes to digital file sharing, these students overwhelmingly perceive it as an acceptable social practice, as long as individuals do not derive monetary benefit from it. We want to discuss this phenomenon, as well as the social contexts in which file sharing is more or less acceptable. We would also like to foster a discussion on possible changes in regulation that would catch up with the established social norm.

About Dariusz

Dariusz Jemielniak is a Wikipedian, Full Professor of Management at Kozminski University, and an entrepreneur (he founded the largest online dictionary in Poland, among other ventures).

Dariusz currently serves on the Wikimedia Foundation Board of Trustees. In his academic life, he studies the open collaboration movement (in 2014 he published "Common Knowledge? An Ethnography of Wikipedia" with Stanford University Press), media file-sharing practices (among lawyers and free-knowledge activists), and political meme communities.

He has held visiting appointments at Cornell University (2004-2005), Harvard (2007, 2011-2012), and the University of California, Berkeley (2008), where he studied software engineers' workplace culture.

About Jérôme

Jerome is an Assistant Research Professor at the National Center for Scientific Research (CNRS), a Fellow at the Center for Law and Economics at ETH Zurich, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. From 2011 to 2014, Jerome spent three years as a Research Fellow at the Berkman Klein Center, where he did most of his Ph.D. work.

Jerome is a behavioral economist operating at the boundaries between psychology, economics and computer science. In his research, he typically couples experimental methods with the analysis of big data to uncover how psychological and cognitive traits shape our behavior over the Internet, with a particular focus on online cooperation, peer production and decision making. He is strongly involved with Professor Yochai Benkler in the Cooperation project. He is also involved with the Mindsport Research Network, which he helped launch together with Professor Charles Nesson.

Jerome completed a Ph.D. in Economics at Sciences Po and the University of Strasbourg. He holds Master’s degrees in both International Economics and International Affairs from Sciences Po, and a B.A. in Economics & Finance from the University of Strasbourg.

Jerome originates from the French region of Alsace. He has lived in France, Egypt, the U.S., Jordan and Switzerland. Jerome speaks French, English and Arabic and is heavily interested in public policy and international affairs.



Categories: Tech-n-law-ogy

Blockchain and the Law: The Rule of Code

A book talk featuring author Primavera De Filippi

Blockchain technology is ultimately a dual-edged technology that can be used to either support or supplant the law. This talk looks at the impact of blockchain technology on a variety of fields (finance, contracts, organizations, etc.), and at the benefits and drawbacks of blockchain-based systems.

April 23, 2018, 4:00 pm

Monday, April 23, 2018 at 4:00 pm
Harvard Law School campus
Wasserstein Hall, Milstein West B
Room 2019, Second Floor
Reception immediately following at HLS Pub
RSVP required to attend in person
Event will be webcast live

This talk will look at how blockchain technology is a dual-edged technology that could be used to either support or supplant the law. After describing the impact of this new technology on a variety of fields (including payments, contracts, communication systems, organizations, and the internet of things), it will examine how blockchain technology can be framed as a new form of regulatory technology, while at the same time enabling the creation of new autonomous systems that are harder to regulate. The talk will conclude with an overview of the various ways in which blockchain-based systems can be regulated, and the dangers of doing so.

About Primavera De Filippi

Primavera obtained a master's degree in Business Administration from Bocconi University in Milan and a master's degree in Intellectual Property Law from Queen Mary University of London. She holds a PhD from the European University Institute in Florence, where she explored the legal challenges of copyright law in the digital environment, with special attention to mechanisms of private ordering (Digital Rights Management systems, Creative Commons licenses, etc.). During these years, she spent two months at the University of Buffalo in New York and one year as a visiting scholar at the University of California, Berkeley. Primavera is now a permanent researcher at the National Center of Scientific Research (CNRS), where she founded the Institute of Interdisciplinary Research on Internet & Society. She is a former fellow and current faculty associate at the Berkman Klein Center for Internet & Society at Harvard University, where additional bio information for Primavera, including her online activities, research interests, recent publications, and online videos, is available.





Force of Nature

Celebrating 20 Years of the Laws of Cyberspace

Join us as we celebrate 20 years of the Laws of Cyberspace and the ways in which it laid the groundwork for our Center's field of study.

April 16, 2018, 4:00 pm

Monday, April 16, 2018 at 4:00 pm 
Harvard Law School campus
Austin Hall West, Room 111
Reception immediately following event
RSVP required to attend in person
Event will be webcast live

Celebrating 20 years of the Laws of Cyberspace and how it laid the groundwork for the Berkman Klein Center's field of study.

Please join us as we recognize the 20th anniversary of Professor Lawrence Lessig's paper The Laws of Cyberspace (Taipei, March 1998). Professor Lessig, the Roy L. Furman Professor of Law and Leadership at Harvard Law School, will be joined by Professor Ruth L. Okediji, the Jeremiah Smith, Jr. Professor of Law at Harvard Law School and Co-Director of the Berkman Klein Center, and Dr. Laura DeNardis, Professor in the School of Communication at American University. The conversation will be moderated by Professor Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School and the Harvard Kennedy School of Government, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, Director of the Harvard Law School Library, and Faculty Director of the Berkman Klein Center for Internet & Society.

About Professor Lessig

Lawrence Lessig is the Roy L. Furman Professor of Law and Leadership at Harvard Law School. Prior to rejoining the Harvard faculty, Lessig was a professor at Stanford Law School, where he founded the school's Center for Internet and Society, and at the University of Chicago. He clerked for Judge Richard Posner on the 7th Circuit Court of Appeals and Justice Antonin Scalia on the United States Supreme Court. Lessig serves on the Board of the AXA Research Fund, and on the advisory boards of Creative Commons and the Sunlight Foundation. He is a Member of the American Academy of Arts and Sciences and the American Philosophical Association, and has received numerous awards, including the Free Software Foundation's Freedom Award and the Fastcase 50 Award, and has been named one of Scientific American's Top 50 Visionaries. Lessig holds a BA in economics and a BS in management from the University of Pennsylvania, an MA in philosophy from Cambridge, and a JD from Yale.

About Professor Okediji

Ruth L. Okediji is the Jeremiah Smith, Jr. Professor of Law at Harvard Law School and Co-Director of the Berkman Klein Center. A renowned scholar in international intellectual property (IP) law and a foremost authority on the role of intellectual property in social and economic development, Professor Okediji has advised inter-governmental organizations, regional economic communities, and national governments on a range of matters related to technology, innovation policy, and development. Her widely cited scholarship on IP and development has influenced government policies in sub-Saharan Africa, the Caribbean, Latin America, and South America. Her ideas have helped shape national strategies for the implementation of the WTO's Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement). She works closely with several United Nations agencies, research centers, and international organizations on the human development effects of international IP policy, including access to knowledge, access to essential medicines, and issues related to indigenous innovation systems.

About Dr. DeNardis

Dr. Laura DeNardis is a globally recognized Internet governance scholar and a Professor in the School of Communication at American University in Washington, DC. She also serves as Faculty Director of the Internet Governance Lab at American University. Her books include The Global War for Internet Governance (Yale University Press 2014); Opening Standards: The Global Politics of Interoperability (MIT Press 2011); Protocol Politics: The Globalization of Internet Governance (MIT Press 2009); Information Technology in Theory (Thompson 2007, with Pelin Aksoy); and the new co-edited book The Turn to Infrastructure in Internet Governance (Palgrave 2016). She has a background in information engineering and a doctorate in Science and Technology Studies (STS); her research examines the social and political implications of Internet technical architecture and governance.

She is an affiliated fellow of the Yale Law School Information Society Project and served as its Executive Director from 2008 to 2011. She is an adjunct Senior Research Scholar in the faculty of international and public affairs at Columbia University and a frequent keynote speaker at the world's most prestigious universities and institutions. She has previously taught at New York University and Yale Law School.

About Professor Zittrain

Jonathan Zittrain is the George Bemis Professor of International Law at Harvard Law School and the Harvard Kennedy School of Government, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, Vice Dean for Library and Information Resources at the Harvard Law School Library, and co-founder of the Berkman Klein Center for Internet & Society.  His research interests include battles for control of digital property and content, cryptography, electronic privacy, the roles of intermediaries within Internet architecture, human computing, and the useful and unobtrusive deployment of technology in education.

He performed the first large-scale tests of Internet filtering in China and Saudi Arabia, and as part of the OpenNet Initiative co-edited a series of studies of Internet filtering by national governments: Access Denied: The Practice and Policy of Global Internet Filtering; Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace; and Access Contested: Security, Identity, and Resistance in Asian Cyberspace.

He is a member of the Board of Directors of the Electronic Frontier Foundation and the Board of Advisors for Scientific American. He has served as a Trustee of the Internet Society and as a Forum Fellow of the World Economic Forum, which named him a Young Global Leader. He was a Distinguished Scholar-in-Residence at the Federal Communications Commission, and previously chaired the FCC's Open Internet Advisory Committee. His book The Future of the Internet -- And How to Stop It predicted the end of general-purpose client computing and the corresponding rise of new gatekeepers. That and other works may be found online.




Honoring All Expertise: Social Responsibility and Ethics in Tech

Featuring Kathy Pham & Friends from the Berkman Klein Community

Learn more about social responsibility and ethics in tech from cross-functional perspectives, featuring social scientists, computer scientists, historians, lawyers, political scientists, architects, and philosophers.

Part of the Berkman Klein Luncheon Series · April 17, 2018, 12:00 pm

Tuesday, April 17, 2018 at 12:00 pm
Harvard Law School campus
[UPDATED] Wasserstein Hall, Milstein West B
Room 2019, Second Floor
RSVP required to attend in person
Event will be live webcast at 12:00 pm

The Ethical Tech working group at the Berkman Klein Center will host a series of lightning talks about social responsibility and ethics in tech from cross-functional perspectives, featuring social scientists, computer scientists, historians, lawyers, political scientists, architects, and philosophers. The working group meets weekly to discuss and debate current tech events, drawing on the range of expertise in the room to examine each issue from different angles.

Doaa Abu-Elyounes

Doaa Abu-Elyounes is a second-year S.J.D. candidate at Harvard Law School, where she researches the effect of artificial intelligence algorithms on the criminal justice system. Before starting her S.J.D., Doaa completed an LL.M. at Harvard Law School. Doaa is originally from Israel, where she completed an LL.B. and LL.M. at the University of Haifa with a special focus on law and technology. After law school, Doaa worked at the Supreme Court of Israel as a law clerk, and at the Israeli Ministry of Justice as an advisor to the Director General of the Ministry. During her time at the Berkman Klein Center, Doaa will focus on algorithmic accountability and the governance of AI in criminal justice. In particular, she will analyze the impact of AI-based risk assessment tools on the criminal justice system.

Joanne Cheung

Joanne K. Cheung is an artist and designer. Her work focuses on how people, buildings, and media contribute to democratic governance. She enjoys thinking across scales and collaborating across differences. 

She received her B.A. from Dartmouth College, M.F.A. from Bard College Milton Avery Graduate School of the Arts, and is currently pursuing her M.Arch at Harvard Graduate School of Design. 

Mary Gray

Mary L. Gray is a Fellow at Harvard University's Berkman Klein Center for Internet and Society and Senior Researcher at Microsoft Research. She chairs the Microsoft Research Lab Ethics Advisory Board. Mary maintains a faculty position in the School of Informatics, Computing, and Engineering, with affiliations in Anthropology, Gender Studies, and the Media School, at Indiana University. Mary's research looks at how technology access, social conditions, and everyday uses of media transform people's lives. Her most recent book, Out in the Country: Youth, Media, and Queer Visibility in Rural America, looked at how youth in the rural United States use media to negotiate their identities, local belonging, and connections to broader political communities. Mary's current project combines ethnography, interviews, and survey data with large-scale platform transaction data to understand the impact of automation on the future of work and workers' lives. Mary's research has been covered in the popular press, including The New York Times, the Los Angeles Times, and the Guardian. She served on the American Anthropological Association's Executive Board and chaired its 113th Annual Meeting. Mary currently sits on the Executive Board of Public Responsibility in Medicine and Research (PRIM&R). In 2017, Mary joined Stanford University's "One-Hundred-Year Study on Artificial Intelligence" (AI100), looking at the future of AI and its policy implications.

Jenn Halen

Jenn Halen is a fellow at the Berkman Klein Center. She works on research and community activities for the Ethics and Governance of Artificial Intelligence Initiative. Jenn is a doctoral candidate in Political Science at the University of Minnesota and a former National Science Foundation Graduate Research Fellow. Her research broadly focuses on the ways that new and emerging technologies influence, and are influenced by, politics. She will study the complex social and political implications of advanced machine learning and artificial intelligence, especially as they relate to issues of governance. She also works on issues of cybersecurity, human rights, and social justice. Jenn enjoys ballet, almost everything geek-related, and good vegan food. She makes excellent vegan mac and cheese, and she will probably tell you about it.

Jenny Korn

Jenny Korn is an activist of color for social justice and scholar of race, gender, and media with academic training in communication, sociology, theater, public policy, and gender studies from Princeton, Harvard, Northwestern, and the University of Illinois at Chicago. She will examine identity and representation through online and in-person discourses, focusing on how popular concepts of race and gender are influenced by digital interactions, political protest, and institutional kyriarchy.

Kathy Pham

Kathy Pham is a computer scientist, cancer patient sidekick, product manager, and leader with a love for developing products, operations, hacking bureaucracy, building and leading teams, all things data, healthcare, and weaving public service and advocacy into all aspects of life. As a 2017-2018 fellow at the Berkman Klein Center, Kathy will explore artificial intelligence and the ethical and social responsibilities of engineers when writing code and shipping products. Most recently, Kathy was a founding product and engineering member of the United States Digital Service, a tech startup in government at the White House, where she led and contributed to public services across Veterans Affairs, the Department of Defense, Talent, and Precision Medicine. She sits on the advisory boards of the local Anita Borg Institute and the "Make the Breast Pump Not Suck" initiative. Previously, Kathy held a variety of roles in product, engineering, and data science at Google, IBM, and Harris Healthcare Solutions. Outside of work, Kathy founded the Cancer Sidekick Foundation to spread leukemia knowledge and build a cancer community, started Google's first internal Business Intelligence Summit, founded Atlanta United For Sight, placed first at the Imagine Cup competition (basically the World Cup, but for tech geeks) representing the United States with a news sentiment-analysis engine, spoke at the White House State of STEM 2015, and was First Lady Michelle Obama's guest at the 2015 State of the Union address. She has also been spotted at the gaming finals of the After Hours Gaming League for StarCraft II, speaking at tech conferences, and hosting food-themed Formula 1 racing hangouts. Kathy holds bachelor's and master's degrees in computer science from the Georgia Institute of Technology in Atlanta, Georgia, and from Supelec in Metz, France.

Luke Stark

Luke Stark is a Postdoctoral Fellow in the Department of Sociology at Dartmouth College, and studies the intersections of digital media and behavioral science. Luke's work at the Berkman Klein Center will explore the ways in which psychological techniques are incorporated into social media platforms, mobile apps, and artificial intelligence (AI) systems, and how these behavioral technologies affect human privacy, emotional expression, and digital labor. His scholarship highlights the asymmetries of power, access, and justice that are emerging as these systems are deployed in the world, and the social and political challenges that technologists, policymakers, and the wider public will face as a result. Luke holds a PhD from the Department of Media, Culture, and Communication at New York University, and an Honours BA and MA from the University of Toronto; he has been a Fellow of the NYU School of Law's Information Law Institute (ILI), and an inaugural Fellow with the University of California, Berkeley's Center for Technology, Society, and Policy (CTSP). He tweets @luke_stark.

Salome Viljoen

Salome is a Fellow in the Privacy Initiatives Project at the Berkman Klein Center for Internet and Society. Salome's professional interest is the intersection of privacy, technology, and inequality. Before coming to the Berkman Klein Center, Salome was an associate at Fenwick & West, LLP, where she worked with technology company clients on a broad variety of matters. She has a JD from Harvard Law School, an MSc from the London School of Economics, and a BA in Political Economy from Georgetown University. In her spare time, she enjoys reading, gardening, and hanging out with her cat.




THEFT! A History of Music

Professors James Boyle and Jennifer Jenkins (Duke Law School) discuss Theft! A History of Music, their graphic novel about musical borrowing.

Theft! A History of Music is a graphic novel laying out a 2000-year-long history of musical borrowing, from Plato to rap.

Part of the Berkman Klein Luncheon Series · April 10, 2018, 12:00 pm

Tuesday, April 10, 2018 at 12:00 pm
Harvard Law School campus
Wasserstein Hall, Milstein East A
Room 2036, Second Floor
RSVP required to attend in person
Event will be live webcast at 12:00 pm

You can download the book here. Complimentary copies available at event!

This comic book lays out 2000 years of musical history. A neglected part of musical history. Again and again there have been attempts to police music; to restrict borrowing and cultural cross-fertilization. But music builds on itself. To those who think that mash-ups and sampling started with YouTube or the DJ’s turntables, it might be shocking to find that musicians have been borrowing—extensively borrowing—from each other since music began. Then why try to stop that process? The reasons varied. Philosophy, religion, politics, race—again and again, race—and law. And because music affects us so deeply, those struggles were passionate ones. They still are.

The history in this book runs from Plato to Blurred Lines and beyond. You will read about the Holy Roman Empire's attempts to standardize religious music with the first great musical technology (notation) and the inevitable backfire of that attempt. You will read about troubadours and church composers, swapping tunes (and remarkably profane lyrics), changing both religion and music in the process. You will see diatribes against jazz for corrupting musical culture, against rock and roll for breaching the color-line. You will learn about the lawsuits that, surprisingly, shaped rap. You will read the story of some of music's iconoclasts, from Handel and Beethoven to Robert Johnson, Chuck Berry, Little Richard, Ray Charles, the British Invasion, and Public Enemy.

To understand this history fully, one has to roam wider still: into musical technologies from notation to the sample deck, aesthetics, the incentive systems that got musicians paid, and law's 250-year struggle to assimilate music without destroying it in the process. Would jazz, soul, or rock and roll be legal if they were reinvented today? We are not sure, and that seems... worrying. We look forward to playing you some of the music, showing the pictures, and hearing your views.

About James

James Boyle is William Neal Reynolds Professor of Law at Duke Law School and the former Chairman of the Board of Creative Commons. He has written for The New York Times, The Financial Times, Newsweek, and many other newspapers and magazines. His other books include The Public Domain: Enclosing the Commons of the Mind; Shamans, Software and Spleens: Law and the Construction of the Information Society; and Bound By Law, a comic book about fair use, copyright, and creativity (with Jennifer Jenkins).

About Jennifer

Jennifer Jenkins is a Clinical Professor of Law at Duke Law School and the Director of the Center for the Study of the Public Domain. Apart from her legal qualifications, she also plays the piano and holds an MA in English from Duke University, where she studied creative writing with the late Reynolds Price and Milton with Stanley Fish. Her most recent book is Intellectual Property: Cases and Materials (3rd ed, 2016) (with James Boyle). Her recent articles include In Ambiguous Battle: The Promise (and Pathos) of Public Domain Day, and Last Sale? Libraries’ Rights in the Digital Age.




Remedies for Cyber Defamation: Criminal Libel, Anti-Speech Injunctions, Forgeries, Frauds, and More

Featuring Professor Eugene Volokh, UCLA School of Law

“Cheap speech” has massively increased ordinary people’s access to mass communications -- both for good and for ill. How has the system of remedies for defamatory, privacy-invading, and harassing speech reacted? Some ways are predictable; some are surprising; some are shocking. Prof. Eugene Volokh (UCLA) will lay it out at a special Berkman Klein Luncheon on Monday, April 9th. Please join us!

Part of the Berkman Klein Luncheon Series · April 9, 2018, 12:00 pm

Monday, April 9, 2018 at 12:00 pm
Harvard Law School campus
Wasserstein Hall, Milstein West A
Room 2019, Second Floor
RSVP required to attend in person

Watch Live Starting at 12pm
(video and audio will be archived on this page following the event)

If you experience a video disruption reload to refresh the webcast.

This event is being sponsored by Lumen, a project of the Berkman Klein Center for Internet & Society at Harvard University.


About Professor Volokh

Eugene Volokh teaches free speech law, tort law, religious freedom law, church-state relations law, and a First Amendment amicus brief clinic at UCLA School of Law, where he has also often taught copyright law, criminal law, and a seminar on firearms regulation policy. Before coming to UCLA, he clerked for Justice Sandra Day O'Connor on the U.S. Supreme Court and for Judge Alex Kozinski on the U.S. Court of Appeals for the Ninth Circuit.

Volokh is the author of the textbooks The First Amendment and Related Statutes (5th ed. 2013), The Religion Clauses and Related Statutes (2005), and Academic Legal Writing (4th ed. 2010), as well as over 75 law review articles and over 80 op-eds. He is a member of The American Law Institute, a member of the American Heritage Dictionary Usage Panel, and the founder and coauthor of The Volokh Conspiracy, a weblog that gets about 35,000-40,000 pageviews per weekday. He is among the five most cited then-under-45 faculty members listed in the Top 25 Law Faculties in Scholarly Impact, 2005-2009 study, and among the forty most cited faculty members on that list without regard to age. These citation counts refer to citations in law review articles, but his works have also been cited by courts. Six of his law review articles have been cited by opinions of Supreme Court Justices; twenty-nine of his works (mostly articles, but also a textbook, an op-ed, and a blog post) have been cited by federal circuit courts; and several others have been cited by district courts or state courts.

Volokh is also an Academic Affiliate for the Mayer Brown LLP law firm; he generally consults on other lawyers' cases, but he has argued before the Seventh Circuit, the Ninth Circuit, the Indiana Supreme Court, and the Nebraska Supreme Court, and has also filed briefs in the U.S. Supreme Court, in the Fifth, Sixth, Eighth, Eleventh, and D.C. Circuits, and state appellate courts in California, Michigan, New Mexico, and Texas.

Volokh worked for 12 years as a computer programmer. He graduated from UCLA with a B.S. in math-computer science at age 15, and has written many articles on computer software. Volokh was born in the USSR; his family emigrated to the U.S. when he was seven years old.

About Lumen

Lumen is an independent third-party research project studying cease-and-desist letters concerning online content. We collect and analyze requests to remove material from the web. Our goals are to educate the public; to facilitate research about the different kinds of complaints and requests for removal, both legitimate and questionable, that are being sent to Internet publishers and service providers; and to provide as much transparency as possible about the "ecology" of such notices, in terms of who is sending them, why, and to what effect.

Our database contains millions of notices, some of them with valid legal basis, some of them without, and some on the murky border. Our posting of a notice does not indicate a judgment among these possibilities, nor are we authenticating the provenance of notices or making any judgment on the validity of the claims they raise.

Lumen is a unique collaboration among law school clinics and the Electronic Frontier Foundation. Conceived and developed at the Berkman Center for Internet & Society (now the Berkman Klein Center) by then-Berkman Fellow Wendy Seltzer, Lumen was nurtured with help from law clinics at Harvard, Berkeley, Stanford, University of San Francisco, University of Maine, George Washington School of Law, and Santa Clara University School of Law.

Lumen is supported by gifts from Google. All individual and corporate donors to the Berkman Klein Center agree to contribute their funds as gifts rather than grants, for which there are no promised products, results, or deliverables.



Big Data, Health Law, and Bioethics


This timely, groundbreaking volume explores key questions from a variety of perspectives, examining how law promotes or discourages the use of big data in the health care sphere, and also what we can learn from other sectors.

Publication date: April 1, 2018 · External links: Download the Introduction from SSRN; Order the book


Edited by I. Glenn Cohen, Holly Fernandez Lynch, Effy Vayena, and Urs Gasser
Cambridge University Press, March 2018

About the Book:

When data from all aspects of our lives can be relevant to our health - from our habits at the grocery store and our Google searches to our FitBit data and our medical records - can we really differentiate between big data and health big data? Will health big data be used for good, such as to improve drug safety, or ill, as in insurance discrimination? Will it disrupt health care (and the health care system) as we know it? Will it be possible to protect our health privacy? What barriers will there be to collecting and utilizing health big data? What role should law play, and what ethical concerns may arise? This timely, groundbreaking volume explores these questions and more from a variety of perspectives, examining how law promotes or discourages the use of big data in the health care sphere, and also what we can learn from other sectors.

This edited volume stems from the Petrie-Flom Center's 2016 annual conference, organized in collaboration with the Berkman Klein Center and the Health Ethics and Policy Lab at the University of Zurich, which brought together leading experts to identify the various ways in which law and ethics intersect with the use of big data in health care and health research, particularly in the United States; understand the way U.S. law (and potentially other legal systems) currently promotes or stands as an obstacle to these potential uses; determine what might be learned from the legal and ethical treatment of uses of big data in other sectors and countries; and examine potential solutions (industry best practices, common law, legislative, executive, domestic, and international) for better use of big data in health care and health research in the U.S.


Categories: Tech-n-law-ogy

Practical Approaches to Big Data Privacy Over Time


This article analyzes how privacy risks multiply as large quantities of personal data are collected over longer periods of time, draws attention to the relative weakness of data protections in the corporate and public sectors, and provides practical recommendations for protecting privacy when collecting and managing commercial and government data over extended periods of time.

Publication Date 12 Mar 2018 External Links: Download article from Oxford University Press | Download from DASH

Authored by Micah Altman, Alexandra Wood, David O’Brien, and Urs Gasser

The Berkman Klein Center is pleased to announce a new publication from the Privacy Tools project, authored by a multidisciplinary group of project collaborators from the Berkman Klein Center and the Program on Information Science at MIT Libraries. This article, titled "Practical Approaches to Big Data Privacy Over Time," analyzes how privacy risks multiply as large quantities of personal data are collected over longer periods of time, draws attention to the relative weakness of data protections in the corporate and public sectors, and provides practical recommendations for protecting privacy when collecting and managing commercial and government data over extended periods of time.

Increasingly, corporations and governments are collecting, analyzing, and sharing detailed information about individuals over long periods of time. Vast quantities of data from new sources and novel methods for large-scale data analysis are yielding deeper understandings of individuals’ characteristics, behavior, and relationships. It is now possible to measure human activity at more frequent intervals, collect and store data relating to longer periods of activity, and analyze data long after they were collected. These developments promise to advance the state of science, public policy, and innovation. At the same time, they are creating heightened privacy risks, by increasing the potential to link data to individuals and apply data to new uses that were unanticipated at the time of collection. Moreover, these risks multiply rapidly, through the combination of long-term data collection and accumulations of increasingly “broad” data measuring dozens or even thousands of attributes relating to an individual.

Existing regulatory requirements and privacy practices in common use are not sufficient to address the risks associated with long-term, large-scale data activities. In practice, organizations often rely on a limited subset of controls, such as notice and consent or de-identification, rather than drawing from the wide range of privacy interventions available. There is a growing recognition that privacy policies often do not adequately inform individuals about how their data will be used, especially over the long term. The expanding scale of personal data collection and storage is eroding the feasibility and effectiveness of techniques that aim to protect privacy simply by removing identifiable information.

Recent concerns about commercial and government big data programs parallel earlier conversations regarding the risks associated with long-term human subjects research studies. For decades, researchers and institutional review boards have intensively studied long-term data privacy risks and developed practices that address many of the challenges associated with assessing risk, obtaining informed consent, and handling data responsibly. Longitudinal research data carry risks similar to those associated with personal data held by corporations and governments. However, in general, personal information is protected more strongly when used in research than when it is used in commercial and public sectors—even in cases where the risks and uses are nearly identical.

Combining traditional privacy approaches with additional safeguards identified from exemplar practices in long-term longitudinal research and new methods emerging from the privacy literature can offer more robust privacy protection. Corporations and governments may consider adopting review processes like those implemented by research ethics boards to systematically analyze the risks and benefits associated with data collection, retention, use, and disclosure over time. Rather than relying on a single intervention such as de-identification or consent, corporate and government actors may explore new procedural, legal, and technical tools for evaluating and mitigating risk, balancing privacy and utility, and providing enhanced transparency, review, and accountability as potential components of data management programs. Adopting new technological solutions to privacy can help ensure stronger privacy protection for individuals and adaptability to respond to emerging sophisticated attacks on data privacy. Risks associated with long-term big data management can be mitigated by combining sets of privacy and security controls, such as notice and consent, de-identification, ethical review processes, differential privacy, and secure data enclaves, when tailored to the risk factors present in a specific case and informed by the state of the art and practice.

This article was published by Oxford University Press in International Data Privacy Law. The research underlying this article was presented at the 2016 Brussels Privacy Symposium on Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization, hosted by the Brussels Privacy Hub of the Vrije Universiteit Brussel and the Future of Privacy Forum, on November 8, 2016. This material is based upon work supported by the National Science Foundation under Grant No. CNS-1237235, the Alfred P. Sloan Foundation, and the John D. and Catherine T. MacArthur Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the Alfred P. Sloan Foundation, or the John D. and Catherine T. MacArthur Foundation.

About the Privacy Tools for Sharing Research Data Project
Funded by the National Science Foundation and the Alfred P. Sloan Foundation, the Privacy Tools for Sharing Research Data project is a collaboration between the Berkman Klein Center for Internet & Society, the Center for Research on Computation and Society (CRCS), the Institute for Quantitative Social Science, and the Data Privacy Lab at Harvard University, as well as the Program on Information Science at MIT Libraries, that seeks to develop methods, tools, and policies to facilitate the sharing of data while preserving individual privacy and data utility.

Executive Director and Harvard Law School Professor of Practice Urs Gasser leads the Berkman Klein Center's role in this exciting initiative, which brings the Center's institutional knowledge and practical experience to help tackle the legal and policy-based issues in the larger project.

More information about the project is available on the official project website.

Categories: Tech-n-law-ogy

A Conversation on Data and Privacy with former Facebook GC Chris Kelly


Chris Kelly worked extensively in developing Facebook’s early approaches to public policy challenges, including privacy. This event will provide a free-form discussion about Kelly’s career path, the goals of Facebook’s privacy policies, their interplay with Facebook’s business model, and strategies for implementation.

Event Date Apr 4 2018 12:00pm to Apr 4 2018 12:00pm



Tuesday, April 4, 2018 at 12:00 pm
Harvard Law School campus
Pound Hall, Rm 201

This event is co-sponsored by Harvard Law School's Center on the Legal Profession.

Chris Kelly worked extensively in developing Facebook’s early approaches to public policy challenges, including privacy. This event will provide a free-form discussion about Kelly’s career path, the goals of Facebook’s privacy policies, their interplay with Facebook’s business model, and strategies for implementation. We will also discuss more generally the current political environment in which user-data-driven technology companies find themselves, potential re-implementation, and the possible role of domestic and international privacy regulation. Finally, we’ll find out what Kelly has been involved with professionally, politically, and personally since leaving Facebook. Kelly will be in discussion with Prof. Ron Dolin, who is currently teaching “Law 2.0: Technology’s Impact on the Practice of Law” at HLS.

About Chris Kelly:
Chris Kelly, HLS ’97, is an entrepreneur, attorney, and activist. From September 2005 to August 2009, he served as the first General Counsel, Chief Privacy Officer and Head of Global Public Policy at Facebook. As an early leader at Facebook, he helped it grow from its college roots to the ubiquitous communications medium it is today. In 2010, Kelly was a candidate for the Democratic nomination for California Attorney General. Since his departure from Facebook and campaign for Attorney General, he has become a prominent investor in award-winning independent films, restaurants, and technology start-ups including MoviePass, Fandor, Organizer, and rentLEVER. Kelly became a co-owner of the NBA’s Sacramento Kings in May 2013.

Categories: Tech-n-law-ogy

Scheduling Jekyll Posts with Netlify and AWS

Not too long ago I moved this site from a custom setup on Amazon Web Services (AWS) to Netlify[1]. My AWS setup was a bit cumbersome, consisting of a Jenkins machine that pulled from a private GitHub repository, built the site using Jekyll[2], and published the result to S3. The benefit of this setup over using GitHub pages was that I could schedule posts to be published later. Jenkins was run every morning and new posts were automatically published without manual intervention. (Jenkins was also triggered whenever I pushed to the GitHub repository for instant builds.)

My custom AWS setup worked well, but it cost around $14 every month and I wasn't happy about that, especially given how infrequently I've been writing new posts in the past couple of years. I decided in the short-term to just move this site to Netlify and not worry about scheduling posts because I didn't think I would be writing that much for the foreseeable future. If I ever wanted to post something, I could do so manually, and in the meantime I'd be saving $14 a month. As it turned out, scheduling posts on Netlify was a lot simpler than I thought it would be. All I needed was an AWS Lambda function and an AWS Cloudwatch event.

Note: This post assumes you already have a site set up on Netlify using a GitHub repository. While I assume the solution works the same for other source code repository types, like Bitbucket, I'm not entirely sure. This post also assumes that you have an AWS account.

Configuring Jekyll

By default, Jekyll generates all blog posts in the _posts directory regardless of the publish date associated with each. That obviously doesn't work well when you want to schedule posts to be published in the future, so the first step is to configure Jekyll to ignore future posts. To do so, add this key to Jekyll's _config.yml:

future: false

Setting future to false tells Jekyll to skip any posts with a publish date in the future. You can then set the date field in the front matter of a post to a future date and know that the post will not be generated until then, like this:

---
layout: post
title: "My future post"
date: 2075-01-01 00:00:00
---

This post will be published on January 1, 2075, so it will not be built by Jekyll until that point in time. I find it easier to schedule all posts for midnight, so a post is always generated by the first build that runs on or after its publish date.

Generating a Netlify build hook

One of the things I like about Netlify is that you can trigger a new site build whenever you want, either manually or programmatically. Netlify has a useful feature called a build hook[3], which is a URL that triggers a new build. To generate a new build hook, go to the Netlify dashboard for your domain, go to Site Settings, and then open the Build & Deploy page. Scroll down to the Build Hooks section. Click "Add build hook", give your new hook a name (something like "Daily Cron Job" would be appropriate here), and choose the branch to build from.

You'll be presented with a new URL that looks something like this: {some long unique identifier}

Whenever you send a POST request to the build hook, Netlify will pull the latest files from the GitHub repository, build the site, and deploy it. This is quite useful because you don't need to worry about authenticating against the Netlify API; you can use this URL without credentials. Just make sure to keep this URL a secret. You can see the URL in your list of build hooks on the same page.
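For example, you can trigger a build from the command line; this is a sketch assuming your build hook URL is stored in an environment variable (the variable name NETLIFY_BUILD_HOOK is made up for illustration):

```shell
# Trigger a Netlify build by POSTing an empty payload to the build hook.
# Keep this URL secret -- anyone who has it can trigger builds.
$ curl -X POST -d '{}' "$NETLIFY_BUILD_HOOK"
```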


Creating the AWS Lambda function

AWS Lambda functions are standalone functions that don't require you to set up and manage a server. As such, they are especially useful when you have very simple processes to run infrequently. All you need to do is create a Lambda function that sends a POST request to the build hook URL.

The first step is to create a local Node.js application that will become the executable code for the Lambda function. Create a new directory (build-netlify-lambda, for example) and install the request module, which will make it easy to send an HTTP request:

$ cd build-netlify-lambda
$ npm i request

You can create a package.json file if you want, but it's not necessary.

Next, create a file called index.js inside of build-netlify-lambda and paste the following code into it:

"use strict";

const request = require("request");

exports.handler = (event, context, callback) => {, callback);

All Lambda functions export a handler function that receives three parameters: an event object with information about the event that triggered the function call, a context object with information about the runtime environment, and a callback function to call when the function is finished. In this case, you only need the callback function. The Netlify build hook will be stored in an environment variable called URL in the Lambda function, which you access using process.env.URL. That value is passed directly to along with the callback, making this Lambda function as small as possible.

Now, you just need to zip up the entire build-netlify-lambda directory so it can be deployed to AWS Lambda:

$ zip -r index.js node_modules/

Make sure the top level of the zip file has both index.js and node_modules/. If you mistakenly zip up the entire directory so that build-netlify-lambda is at the top level, AWS will not find the executable files.
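If you want to double-check the layout before uploading, listing the archive's contents makes that mistake easy to spot:

```shell
# index.js and node_modules/ should appear at the top level,
# not nested under build-netlify-lambda/.
$ unzip -l
```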

The last step is to upload this zip file to AWS. To do so, go to the AWS Console[4] and click "Create Function".

You'll be presented with a form to fill out. Enter a name for the function, such as "publishNetlifySiteExample" and select one of the Node.js options as your runtime. The last field is for the Lambda role. If you already have other roles defined, you can use one that already exists; otherwise, select "Create role from template(s)". This Lambda function doesn't need a lot of permissions, so you can just add "Basic Edge Lambda Permissions" to allow access to logs. Click "Create Function".

When the Lambda function has been created, a new screen will load. This screen is a bit difficult to parse due to the amount of information on it. If this is your first Lambda function, don't worry, you'll get used to it quickly. Scroll down to the section called "Function Code" and select "Upload a .ZIP file" from the "Code entry type" dropdown. You can then select your zip file to upload to the Lambda function.

Beneath the "Function Code" section is the "Environment Variables" section. Create a new environment variable named URL with its value set to your Netlify build hook. Once that's complete, click "Save" at the top of the screen to upload the zip file and save your environment variables.

You can test the Lambda function by creating a new test event. At the top of the screen, click the "Select a test event" dropdown and select "Configure test events".

A new dialog will open to create a test event. Since this Lambda function doesn't use any incoming data, you can keep the default settings and give the event a meaningful name like "TestNetlifyBuild". Click the "Create" button to save the test event.

In order to run the test, make sure "TestNetlifyBuild" is selected in the dropdown at the top of the screen and click the "Test" button. This will execute the function. If you look at your Netlify Deploys dashboard, you should see a new build begin.

Setting up the Cloudwatch event

At this point, the Lambda function is operational and will trigger a new Netlify deploy when executed. That's somewhat useful but isn't much more powerful than logging into the Netlify dashboard and manually triggering a build. The goal is to have Netlify build automatically on a certain schedule and Cloudwatch is the perfect solution.

Cloudwatch is a service that generates events based on any number of criteria. You can use it to monitor your services on a variety of criteria and then respond with certain actions. For the purposes of this post, Cloudwatch will be set to run periodically and then trigger the Lambda function that builds the Netlify website.

On the Cloudwatch console[5], click "Events" on the left menu and then the "Create Rule" button.

Under "Event Source" select "Schedule". You're now able to select the frequency with which you want the event to be triggered. You can select an interval of minutes, hours, or days, or you can create a custom schedule using a Cron expression. (If you want to control the exact time that an event is triggered, it's best to use a Cron expression.) Under "Targets", select "Lambda function" and your function name. There's no need to configure the version/alias or input because the Lambda function isn't using any of those. Click the "Configure Details" button. You'll be brought to a second dialog.
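For reference, AWS schedule expressions use a six-field cron syntax (with a year field, and a ? wildcard required in one of the two day fields). The same rule can also be created from the AWS CLI instead of the console; the rule name and the 14:00 UTC time below are hypothetical:

```shell
# Fields: cron(minute hour day-of-month month day-of-week year)
# This fires once a day at 14:00 UTC.
$ aws events put-rule \
    --name daily-netlify-build \
    --schedule-expression "cron(0 14 * * ? *)"
```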

In this dialog, fill in a meaningful name for your event (and optional description) and then click "Create Rule". Rules are on by default so your new event should be triggered at the next interval. The Lambda function will then be called and regenerate the website.


This website has been running on the setup described in this post for over a month. In fact, this post was written ahead of time and published using my AWS Cloudwatch event and Lambda function. The functionality is the same as my previous setup with Jenkins and S3; however, this setup costs $0 instead of $14 per month. I only run my Cloudwatch event twice a week (I'm not posting much these days) and each run of the Lambda function takes under two seconds to complete, which means I fall into the free tier and I'm not charged anything.

The Lambda free tier is one million requests and 400,000 GB-seconds per month. A GB-second is one second of execution time with 1 GB of allocated memory. The Lambda function created in this post uses the default memory allocation of 128 MB. If you do the math, you'll see that you'd still be in the free tier even if you ran the Lambda function every hour of every day each month. Since the Lambda function only sends off an HTTPS request (Netlify does the actual build), the real work isn't done inside of Lambda.
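The arithmetic, sketched out with the figures from the paragraph above, confirms there is plenty of headroom:

```javascript
// Free tier: 1,000,000 requests and 400,000 GB-seconds per month.
const memoryGb = 128 / 1024;   // default 128 MB allocation
const secondsPerRun = 2;       // generous; each run finishes in under 2s
const runsPerMonth = 24 * 31;  // one run every hour, all month long

const gbSeconds = memoryGb * secondsPerRun * runsPerMonth;
console.log(gbSeconds);        // 186 GB-seconds -- nowhere near 400,000
```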

I've found this setup to be very simple and cost-efficient, not to mention a lot less complicated. I no longer have to log into a Jenkins server to figure out why a build of the website failed. There's just one small function to manage and all of the important information is displayed in the Netlify dashboard.

The most important thing to remember when using this setup is to set the date field of each post to some time in the future. When the Cloudwatch event triggers the Lambda function to execute, only those posts with a date in the past will be generated. You can play around with the timing of the Cloudwatch event to best suit your frequency of posts, and keep in mind that Netlify automatically builds the site whenever a change is pushed, so you still have just-in-time updates as needed.

  1. Netlify (
  2. Jekyll (
  3. Netlify Webhooks - Incoming Hooks (
  4. AWS Console - Lambda (
  5. AWS Console - Cloudwatch (
Categories: Tech-n-law-ogy

Dividing Lines: Why Is Internet Access Still Considered a Luxury in America?

Subtitle featuring Maria Smith of the Berkman Klein Center Teaser

Internet access is a major social and economic justice issue of our time. Dividing Lines, a four-part documentary video series, sheds light on who is being left behind as big telecom flourishes.

Parent Event Berkman Klein Luncheon Series Event Date Mar 27 2018 12:00pm to Mar 27 2018 12:00pm

Tuesday, March 27, 2018 at 12:00 pm
Harvard Law School campus
Pound Hall Room 101
Ballantine Classroom
RSVP required to attend in person
Event will be live webcast at 12:00 pm

The online world is no longer a distinct world. It is an extension of our social, economic, and political lives. Internet access, however, is still often considered a luxury good in the United States. Millions of Americans have been priced out of, or entirely excluded from, the reach of modern internet networks. Maria Smith, an affiliate of Berkman Klein and the Cyberlaw Clinic, created a four-part documentary series to highlight these stark divides in connectivity, from Appalachia to San Francisco, and to uncover the complex web of political and economic forces behind them.   

About Maria

Maria Smith is a Project Coordinator working with Professor Susan Crawford in Harvard Law School's Cyberlaw Clinic and leading the efforts of the Responsive Communities project within Berkman Klein. She is focused on the intersection of technology deployment and social and economic justice. Maria is also a documentary filmmaker whose productions expose the impacts of and forces behind America's stark digital divides. She made her directorial debut in college with the film One Nation, Disconnected, produced in cooperation with the Harvard Law Documentary Studio, which details the hardship of a teenager growing up in New York City without internet access at home. Dividing Lines, a four-part series, is in production this year.

Maria first joined the Berkman Klein and Harvard Law communities as an undergraduate conducting teaching, research, and project support for Professor Susan Crawford. Maria graduated from Harvard College with a B.A. in Economics. In college she was invested in work with the Global Health and AIDS Coalition and co-chaired the annual Women’s Leadership Conference. She worked as an intern for the Public Defender Service for the District of Columbia, Connecting for Good, and Morgan Stanley.




Categories: Tech-n-law-ogy

A talk with Marilù Capparelli, PhD

Subtitle Legal Director at Google Teaser

Please join the Harvard Italian Law Association and the Berkman Klein Center for Internet & Society for a discussion on several legal and regulatory issues concerning digital platforms: controversial content, brand safety, privacy and GDPR compliance, scope of removal and CJEU pending cases, tax, copyright, and antitrust enforcement.

Event Date Apr 5 2018 12:00pm to Apr 5 2018 12:00pm

Thursday, April 5, 2018 at 12:00 pm
Harvard Law School campus
[NEW LOCATION] Hauser Hall 104
Complimentary lunch provided

Please join the Harvard Italian Law Association and the Berkman Klein Center for Internet & Society for a discussion on several legal and regulatory issues concerning digital platforms: controversial content, brand safety, privacy and GDPR compliance, scope of removal and CJEU pending cases, tax, copyright, and antitrust enforcement.

Ms. Marilù Capparelli is managing director of Google's Legal Department in the EMEA region. Before joining Google, she was Head of Legal and Government Affairs at eBay Inc. She is the author of several legal articles and regularly lectures in master's programs on law and technology. She was recently listed among the most influential Italian women lawyers.

This event is being co-sponsored by the Harvard Italian Law Association at Harvard Law School and the Berkman Klein Center for Internet & Society at Harvard University.

Categories: Tech-n-law-ogy

The Right of Publicity: Privacy Reimagined for a Public World

Subtitle featuring author, Jennifer E. Rothman, Professor of Law and Joseph Scott Fellow, Loyola Law School Teaser

Jennifer E. Rothman will be talking about her book, The Right of Publicity: Privacy Reimagined for a Public World (Harvard University Press 2018). She challenges the conventional story of the right of publicity's development and questions the transformation of people into intellectual property.

Parent Event Berkman Klein Luncheon Series Event Date Apr 3 2018 12:00pm to Apr 3 2018 12:00pm

Tuesday, April 3, 2018 at 12:00 pm
Berkman Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Milstein East A (Room 2036, second floor)
RSVP required to attend in person
Event will be live webcast at 12:00 pm

Who controls how one's identity is used by others? This legal question, centuries old, demands greater scrutiny in the Internet Age. Jennifer Rothman uses the right of publicity - a little-known law, often wielded by celebrities - to answer that question not just for the famous, but for everyone. Rothman challenges the conventional story of the right of publicity's development, and questions its transformation of people into intellectual property. This shift and the right's subsequent expansion undermine individual liberty, restrict free speech, and suppress artistic works.

About Jennifer

Jennifer E. Rothman is Professor of Law and the Joseph Scott Fellow at Loyola Law School, Los Angeles.  She joined the Loyola faculty from Washington University in St. Louis, where she was an Associate Professor of Law.  Professor Rothman currently teaches Trademarks and Unfair Competition, Torts, Intellectual Property Theory and the Right of Publicity. She is an elected member of the American Law Institute and an affiliated fellow at the Yale Information Society Project at Yale Law School. 

Professor Rothman is nationally recognized for her scholarship in the intellectual property field, and has become the leading expert on the right of publicity. She researches and writes primarily in the areas of intellectual property and constitutional law. In addition to focusing on conflicts between IP rights and other constitutionally protected rights, such as the freedom of speech, her work also explores the intersections of tort and property law, particularly in the context of the right of publicity and trademark and unfair competition law. Her book, The Right of Publicity: Privacy Reimagined for a Public World, was published by Harvard University Press in 2018. Professor Rothman created Rothman’s Roadmap to the Right of Publicity, the go-to website for right-of-publicity questions and news.

Rothman’s essays and articles regularly appear in top law reviews and journals, including the Cornell Law Review, Georgetown Law Journal, Virginia Law Review, Harvard Journal of Law & Public Policy, and Stanford Law & Policy Review. She is regularly invited to speak at a variety of esteemed institutions, including Columbia, Michigan, Stanford, University of Chicago, University of Pennsylvania, U.C. Berkeley, UCLA, and Yale.

Rothman received her A.B. from Princeton University where she received the Asher Hinds Book Prize and the Grace May Tilton Prize.  Rothman received an M.F.A. in film production from the University of Southern California’s School of Cinematic Arts, where she directed an award-winning documentary.  Rothman then worked in the film industry for a number of years, including positions at Paramount Pictures and Castle Rock Entertainment.

Rothman received her J.D. from UCLA, where she graduated first in her class and won the Jerry Pacht Memorial Constitutional Law Award for her scholarship in that field.  Rothman served as law clerk to the Honorable Marsha S. Berzon of the United States Court of Appeals for the Ninth Circuit in San Francisco and then practiced as an entertainment and intellectual property litigator in Los Angeles at Irell & Manella.




Categories: Tech-n-law-ogy

The Accuracy, Fairness, and Limits of Predicting Recidivism

Subtitle featuring Julia Dressel Teaser

COMPAS is software used across the country to predict who will commit future crimes. It doesn't perform any better than untrained people who responded to an online survey.

Parent Event Berkman Klein Luncheon Series Event Date Mar 6 2018 12:00pm to Mar 6 2018 12:00pm

Tuesday, March 6, 2018 at 12:00 pm
Harvard Law School campus
Pound Hall, Ballantine Classroom
Room 101
RSVP required to attend in person
Event will be live webcast at 12:00 pm

Algorithms for predicting recidivism are commonly used to assess a criminal defendant’s likelihood of committing a crime. Proponents of these systems argue that big data and advanced machine learning make these analyses more accurate and less biased than humans. However, our study shows that the widely used commercial risk assessment software COMPAS is no more accurate or fair than predictions made by people with little or no criminal justice expertise.

This event is supported by the Ethics and Governance of Artificial Intelligence Initiative at the Berkman Klein Center for Internet & Society. In conjunction with the MIT Media Lab, the Initiative is developing activities, research, and tools to ensure that fast-advancing AI serves the public good.

About Julia

Julia Dressel recently graduated from Dartmouth College, where she majored in both Computer Science and Women’s, Gender, & Sexuality Studies. She is currently a software engineer in Silicon Valley. Her interests lie at the intersection of technology and bias.



Categories: Tech-n-law-ogy

The Yemen War Online: Propagation of Censored Content on Twitter


This study documents and analyzes the sharing of information on Twitter among different political groups related to the ongoing conflict in Yemen. 

Publication Date 28 Feb 2018 External Links: Online report | Download from DASH

This study, conducted by the Internet Monitor project at the Berkman Klein Center for Internet & Society, analyzes the sharing of information on Twitter among different political groups related to the ongoing conflict in Yemen. The study finds that the networks on Twitter are organized around and segregated along political lines. The networks cite web content, including censored websites, that reflects and informs their collective framing of politically sensitive issues. Each faction relies almost entirely on its own sources of information.

The study also tests for the availability of this open web content shared on Twitter in the countries most engaged in the public debate over the conflict and finds that national filtering policies also seek to shape the narrative by blocking views and perspectives that diverge from government positions on the conflict. While selective exposure to web content is often associated with polarization, the paper shows that social media—in this case Twitter—is used to propagate censored content from the open web, making it more visible to users behind open-web filtering regimes. The evidence shows that government attempts to corral social media users into government-friendly media bubbles do not work, although government filters make it more difficult to access some content. Instead, social media users coalesce into self-defined media spheres aligned around social and political affinities.

Categories: Tech-n-law-ogy

Your Guide to BKC@SXSW 2018


Headed to SXSW this year? If so, be sure to check out some of these panels and discussions led by members of the Berkman Klein community.


The Future of Secrets

Sarah Newman, Jessica Yurkofsky, and Rachel Kalmar

Details: March 9-17 - Fairmont Verbena Room
Are secrets uniquely human? Our private lives are mediated and recorded by digital devices. Where are our secrets now? Where will they be in the future, and who—or what—might read them? How will intelligent systems of the future process the data we leave behind? Will they know things about us that we don't (and never could) know about ourselves?

The Future of Secrets is an interactive installation created by Sarah Newman, Jessica Yurkofsky, and Rachel Kalmar from metaLAB at Harvard. It is an immersive experience that includes sound, projection, and interaction; the installation asks participants to anonymously share their secrets as a way to question the trust we place in machines, and ultimately reflect back our own humanness. What does it mean for us to share so much of ourselves through complex systems and digitally distributed networks? The installation inspires delight, surprise, and reflection while evoking questions about uncertain technological futures.

Keep the Internet International, Not Internal!
Fabro Steibel, Barbora Bukovská, Malavika Jayaram, Jan Gerlach
Details: Friday, March 9th, 2018; 11am-12pm – Hilton Austin Downtown Salon F
The internet enables access to knowledge for everyone and across national borders. However, legislators and courts around the world are now seeking to enforce national laws globally. Such extraterritorial jurisdiction to remove content from the web is a worrying trend both for fundamental rights online and the cohesion of the internet itself. Our panel explores the threat of creating many disconnected national networks and what should be done to avoid it.

What Does it Take to Change People’s Minds?
Laura Dawn, Elizabeth Spiers, James Slezak

Details: Saturday, March 10th, 2018; 11am-12pm – Fairmont Congressional B

In the era of Trump, the notion of truth is under attack as never before. With digital media rapidly displacing models that served us for two generations, we face crucial choices. Will the new landscape further divide and misinform us, or can new forms of digital communities, campaigns, and services reverse the slide? Four leading figures from the worlds of digital media, advocacy, and data join for an interactive session to debate emerging solutions and threats, and explore what we can do.


Ending the Dangerous Disconnect Between DC and AI
Tim Hwang, John Delaney, Terah Lyons, Clark Jennings

Details: Saturday, March 10th, 2018; 5-6pm – Hilton Austin Downtown Salon F

The AI and DC communities are just beginning a crucial conversation about how to ensure the benefits of the AI revolution are shared and its risks are minimized. In 2016, the Obama Administration published a roadmap to help policymakers prepare for AI. In this session, experts from DC and Silicon Valley advance the debate, addressing how best to engage policymakers, what issues require the most urgent attention, and how to work constructively with the stakeholders shaping our intelligent future.


Smashing the Firewall: Reporting in Iran

Simin Kargar, Fred Petrossians, Anastasia Kolobrodova, Amin Sabeti

Details: Monday, March 12th, 2018; 2-3pm – JW Marriot Salon FG

How can the power of the internet be harnessed for change in countries with strict censorship? The right talent and tools can facilitate political debate, connect persecuted minorities, embolden women, and amplify voices otherwise unheard in Iran. Three organizations fighting for internet and media freedom will discuss the innovative digital tools that are breaking through government censorship to connect with – and empower – Iranians.


A Game-Changing Shift in Control of Personal Data

Nicky Hickman, Karen McCabe, Doc Searls

Details: Monday, March 12th, 2018; 3.30-4.30pm – Fairmont Manchester EFG

An extinction-level event is occurring in the digital economy. Power will soon shift from organizations to people as legal, social and market forces give citizens new rights. New AI, machine learning and blockchain solutions will empower individuals to sovereignly govern their own data and relationships, and new business models will replace the non-compliant and/or illegal tracking-based practices of the past. Explore GDPR and more with Doc Searls and our IEEE Tech for Humanity Series experts!


Starting the Internet All Over Again

Sara Watson, Muneeb Ali, Dries Buytaert, Andrei Sambra

Details: Wednesday, March 14th, 2018; 5-6pm – JW Marriott Salon E

On the Internet, all of the power seems consolidated with a few companies like Facebook, Google and Amazon. Consumers blindly exchange personal data for services, with little regard for the long-term consequences. But there's a movement afoot to build a secondary portal to the web that relies on Blockchain technology to give users freedom over their data. Join pioneers of the first web and the second, decentralized web to discuss how we'll experience it all in the next 5 to 10 years.


AI Creativity in Art, Neuroscience, and the Law
Sarah Schwettmann, Jessica Fjeld, Sarah Newman, Alexander Reben

Details: Thursday, March 15th, 2018; 12.30-1.30pm – Fairmont Manchester A

Artificial intelligence now produces compelling works of art, raising questions both metaphysical (does AI creativity place it on par with the human?) and practical (how will we license its inputs and outputs?). Will the creative outputs of AIs upend our conception of autonomy and personhood? Will they change our basic understandings of human intelligence and subjectivity? Two artists, an attorney, and a neuroscientist will grapple with these questions in a provocative conversation and demo.


BKC Alums
Why Black Women are 2018’s Best Investment

Cheryl Contee,* Kathryn Finney, Sarah Koch

Details: Tuesday, March 13th, 2018; 3.30-4.30pm – Hilton Austin Downtown Salon B

Fewer than twenty African American women have raised more than a million dollars in venture capital. What’s going on here? Meet some of those women and the investors who back them. Learn why they are building the next breakthrough businesses that will change America.

Hacking the Brain: The Power of Neuroenhancement
William ‘Jamie’ Tyler, Miriam Meckel,* Léa Steinacker, Henry Greely

Details: Sunday, March 11th, 2018; 3.30-4.30pm – Fairmont Manchester EFG

Advances in neuroscience and consumer electronics have elevated the brain as a resource for self-optimization. Using electrodes and implants, a new industry now offers to effectively alter numerous neurological functions, including cognitive skills, motor ability, and mood. While such technological developments can help those with disabilities unlock their potential, they also commercialize artificial enhancement of humans and raise ethical questions about the brain as a productivity factor.

Categories: Tech-n-law-ogy

Seeking Research Assistant for the Harmful Speech Online Project

The Harmful Speech Online Project is seeking a Research Assistant! The goals of this project are to map the complex sphere within which harmful speech online occurs, convene and connect people working on these issues, and translate academic findings into useful information for policy makers. The RA will review and synthesize relevant literature and news, and assist with ongoing research projects. Time commitment is approximately 6-10 hours/week. Please send cover letter, resume, and short (2-4 page) writing sample to Nikki Bourassa at Start date is immediate; for summer inquiries, please apply to our summer internship program.

Research Assistant Information and Eligibility:

* The wage is $11.50 per hour.

* Time commitment is 5-10 hours per week.

* RAs do not have to be students.

* RAs do not have to be affiliated with Harvard University.

* We are unable to hire RAs who will conduct their work outside of the state of Massachusetts.

* We do not have the ability to provide authorization to work in the U.S.

Categories: Tech-n-law-ogy

The inception of ESLint

If you're like me, you probably use a lot of open source tools every day without thinking about how they got started. Few projects share the "why" of their creation: the actual problem they were trying to solve and when they first came across that problem. You can, of course, benefit from open source projects without understanding their origin story, but I always find it interesting to hear about how it all started.

I recently realized that I'd never shared the origin story of ESLint. I've shared some of the decisions I made along the way in previous posts, but never the initial domino that fell and led to ESLint's creation. As you will see, ESLint wasn't created through some divine intervention or stroke of insight, but rather through a series of events that eventually built up to it.

The bug

I was still fairly new at Box when a teammate was working on a strange bug. A client had reported problems using the web application in Internet Explorer 7 (we were probably one of the last companies supporting IE7 at that point). A developer had apparently used the native XMLHttpRequest object in some JavaScript code instead of our in-house wrapper. This wasn't a problem for any other browser, and there wasn't a problem testing with IE7 internally. The problem occurred because the client had an internal security policy that disabled ActiveX in Internet Explorer, and since the native XMLHttpRequest object in IE7 was really just a wrapper around the ActiveX object, it was blocked as well.

The solution was easy enough: just make sure everyone knew to use the in-house Ajax wrapper instead of the native XMLHttpRequest object. But how could we enforce this? It turned out that Box had a JavaScript "linter" as part of the build system. I put the word linter in quotes because it was really just a series of regular expressions that were run against JavaScript code. For this case, my teammate added a regular expression for "XMLHttpRequest", and going forward the build would break if someone tried to commit a JavaScript file matching that pattern.
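The setup described above can be sketched in a few lines. The rule table, message, and function names here are hypothetical stand-ins for Box's internal build script, but they show the shape of the approach:

```javascript
// Sketch (details hypothetical) of a regex-based "linter":
// each rule is just a pattern matched against the raw source text.
const rules = [
  {
    pattern: /XMLHttpRequest/,
    message: "Use the in-house Ajax wrapper instead of XMLHttpRequest.",
  },
];

function lintSource(source) {
  const problems = [];
  source.split("\n").forEach((line, index) => {
    for (const rule of rules) {
      if (rule.pattern.test(line)) {
        problems.push({ line: index + 1, message: rule.message });
      }
    }
  });
  return problems;
}

// Note the weakness: even a comment that mentions XMLHttpRequest
// breaks the build, because the pattern knows nothing about syntax.
const problems = lintSource(
  '// TODO: stop using XMLHttpRequest\nbox.ajax.get("/files");'
);
// problems: [{ line: 1, message: "Use the in-house Ajax wrapper ..." }]
```

The false positive in the example is exactly why this approach doesn't scale: the regex has no idea whether a match is live code, a comment, or a string.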

In my experience, using regular expressions on source code was never a good idea. I wished there were a better way to do checks like this one during the build. I figured that someone must have already solved this problem, so I started looking for solutions.

Could it be JSHint?

The first thing I did was email the maintainer of JSHint at that time, Anton Kovalyov[1]. I had remembered reading a blog post[2] that said JSHint was planning to support plugins but couldn't find any information about that feature being implemented. From past experience contributing to JSHint and from modifying JSLint for a project at Yahoo, I knew JSHint hadn't initially been set up to support plugins, and without formal support there wouldn't be an easy way to modify JSHint to do what I wanted.

Anton informed me that the plugin proposal had stalled and didn't look like it would be implemented. I was disappointed, as this seemed like the most direct path to solving the problem. I thanked him and asked him to please not be offended if I created a linter that did what I needed. I wanted to support JSHint, but I felt like this was a problem that needed to be solved with JSHint or without it.

The inspiration

After digging around in the build system at Box, I found there was actually a PHP linter running in addition to the makeshift JavaScript linter. The PHP linter, however, was a lot more involved than the JavaScript one. Instead of using regular expressions, the PHP linter parsed the code into an abstract syntax tree (AST) and then inspected the AST for the patterns to report.

I was probably nodding my head "yes" as I read through that code. Immediately I realized that this was exactly what I needed to do for JavaScript. If only there were some way to parse JavaScript into an AST and then inspect the AST for problems.
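The AST approach can be illustrated without a real parser. Assume `var xhr = new XMLHttpRequest();` has already been parsed; the node shapes below are simplified, Esprima-style stand-ins, and the walker is a toy, not a real library:

```javascript
// Simplified, Esprima-style AST for: var xhr = new XMLHttpRequest();
const ast = {
  type: "VariableDeclaration",
  declarations: [
    {
      type: "VariableDeclarator",
      id: { type: "Identifier", name: "xhr" },
      init: {
        type: "NewExpression",
        callee: { type: "Identifier", name: "XMLHttpRequest" },
        arguments: [],
      },
    },
  ],
};

// Generic depth-first walk: recurse into any child object that has a `type`.
function walk(node, visit) {
  visit(node);
  for (const value of Object.values(node)) {
    for (const child of Array.isArray(value) ? value : [value]) {
      if (child && typeof child === "object" && child.type) {
        walk(child, visit);
      }
    }
  }
}

// The check itself: flag `new XMLHttpRequest()` anywhere in the tree.
const problems = [];
walk(ast, (node) => {
  if (node.type === "NewExpression" && node.callee.name === "XMLHttpRequest") {
    problems.push("Use the in-house Ajax wrapper instead of XMLHttpRequest.");
  }
});
```

Unlike the regex version, a comment that merely mentions XMLHttpRequest can never trigger this rule, because comments don't produce AST nodes.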

The groundwork

With all of this floating around in my brain, I invited Ariya Hidayat[3] to give a talk at Box about whatever topic he pleased. It just so happened that he gave a talk on Esprima[4], the JavaScript parser he wrote in JavaScript. During that talk, Ariya discussed why having an AST representation of JavaScript was useful and referenced several already-existing tools built on top of Esprima. Among those tools were estraverse for traversing the AST and escope for scope analysis, both written by Yusuke Suzuki.

As Ariya continued talking and giving examples of the types of problems an AST can solve, the idea for a new tool formed in my head. It made sense to me that there should be one tool that could perform all of the evaluations Ariya mentioned. After all, they are all just using the AST for different purposes. Why not have one AST they all can use?
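That "one AST, many consumers" idea can be sketched as a single traversal that dispatches nodes to pluggable rules; the rule names and node shapes below are simplified illustrations, not the real ESLint API:

```javascript
// Each rule subscribes to the node types it cares about.
const rules = {
  "no-xhr": {
    NewExpression(node, report) {
      if (node.callee.name === "XMLHttpRequest") {
        report("no-xhr: use the in-house Ajax wrapper");
      }
    },
  },
  "no-with": {
    WithStatement(node, report) {
      report("no-with: `with` statements are not allowed");
    },
  },
};

// One depth-first traversal hands every node to every subscribed rule.
function run(ast, rules) {
  const messages = [];
  const report = (msg) => messages.push(msg);
  (function walk(node) {
    for (const rule of Object.values(rules)) {
      if (rule[node.type]) rule[node.type](node, report);
    }
    for (const value of Object.values(node)) {
      for (const child of Array.isArray(value) ? value : [value]) {
        if (child && typeof child === "object" && child.type) walk(child);
      }
    }
  })(ast);
  return messages;
}

// Simplified AST for: with (obj) { new XMLHttpRequest(); }
const ast = {
  type: "WithStatement",
  object: { type: "Identifier", name: "obj" },
  body: {
    type: "ExpressionStatement",
    expression: {
      type: "NewExpression",
      callee: { type: "Identifier", name: "XMLHttpRequest" },
      arguments: [],
    },
  },
};

const messages = run(ast, rules);
// messages: one "no-with" report, then one "no-xhr" report
```

Every rule gets its findings from the same tree in a single pass; adding a check means registering another visitor, not writing another traversal.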

The beginning

Thanks largely to the availability of Esprima, estraverse, and escope, I was able to put together the first prototype of ESLint over a couple of weekends. To me, these three utilities represented everything that I needed to create a new tool that could easily find problem patterns in JavaScript code. If I had to create those from scratch, I have no doubt that ESLint would not exist today. Building on top of those tools, I was able to iterate quickly, and eventually, the tool you know today as ESLint was born.

(I feel it's important to point out that I was not the only one looking to create an AST-based linter at the time. JSCS[5] was also under development at around the same time, and current ESLint maintainer Ilya Volodin was working on his own project before discovering ESLint. If I had not come up with something like ESLint then someone else undoubtedly would have. All of the pieces were already out there thanks to Ariya and Yusuke, someone just had to put them together in a useful way.)

  1. Anton Kovalyov
  2. JSHint 3 Plans
  3. Ariya Hidayat
  4. JavaScript Code Analysis
  5. JSCS
Categories: Tech-n-law-ogy

New Website Draws on International Perspectives to Highlight Issues related to Inclusion and Artificial Intelligence


This new suite of resources aims to establish key themes, questions, and opportunities for ensuring that voices and perspectives from diverse populations help shape the future of AI.


The Berkman Klein Center for Internet & Society is pleased to share a newly published interactive webpage, which highlights salient topics and offers a broad range of resources related to issues of AI and inclusion. The materials contribute to the Diversity and Inclusion track of the broader Ethics and Governance of Artificial Intelligence Initiative. Launched in Spring 2017, the initiative is anchored by the Berkman Klein Center and the MIT Media Lab, which have been working together over the past year to conduct evidence-based research, bolster AI for the social good, and construct a collective knowledge base on the ethics and governance of AI.

The site reflects lessons learned from a wide-ranging international effort, and includes a number of resources produced from the Global Symposium on AI and Inclusion, which convened 170 participants from over 40 countries in Rio de Janeiro last November on behalf of the Global Network of Centers to discuss the impact of AI and related technologies on marginalized populations and the risks of amplifying digital inequalities across the world.

Some of the primary resources available on the webpage include foundational materials that address overarching themes, key research questions, the initial framing of a research roadmap, and an overview of some of the most relevant opportunities and challenges identified pertaining to AI, inclusion, and governance. The research, findings, and ideas presented throughout the page both illuminate lessons learned from the past year, and lay the groundwork for the initiative’s continued work on issues of inclusion, acknowledging that the resources found here are only a starting point for this important conversation.

We welcome your feedback and suggestions. If you have any questions about the webpage or about the Ethics and Governance of Artificial Intelligence initiative, please contact  

Learn more about this effort in the Medium post "Why Inclusion Matters for the Future of Artificial Intelligence" by Amar Ashar and Sandra Cortesi.

Categories: Tech-n-law-ogy

Iran's National Information Network: Faster Speeds, but at What Cost?


In this Internet Monitor research bulletin, Berkman Klein Center Affiliate Simin Kargar analyzes the effectiveness of the Iranian government’s campaign to encourage domestic content consumption and hosting through its National Information Network.


With over $6 billion invested, the NIN is the most costly national telecommunications project in the history of the Islamic Republic. Other affiliated costs align with the NIN's overarching goals: $1.5 billion on a domestic search engine project and $135,000 in additional subsidies toward the development of domestic messaging applications. This strategy aims to substantially cut reliance on international applications such as Telegram.

The recent events in Iran put the investment to the test and underscored the challenges of fundamentally changing user behavior. While an increase in speed allows for services that potentially improve access and more sophisticated information sharing, these benefits only apply to domestically hosted platforms, which have not been popular. As the recent protests affirmed, when popular international tools became inaccessible, users showed little interest in limiting their traffic to domestic websites and tools, even at a discounted price. Despite Iran's concerted efforts to popularize the NIN's applications, appealing to users and earning their trust may be much harder than the government had envisioned.

Read the complete bulletin on the Internet Monitor site.

Categories: Tech-n-law-ogy

