Tech-n-law-ogy

Coming in from the Cold: A Safe Harbor from the CFAA and DMCA §1201


The Assembly program is pleased to announce a new publication proposing a statutory safe harbor from the Computer Fraud and Abuse Act and section 1201 of the Digital Millennium Copyright Act for security research activities, built on a carefully constructed communication protocol based on a responsible disclosure model.

Publication Date: 1 Jun 2018
External Links: Download from SSRN
Authored by Daniel Etcovitch and Thyla van der Merwe

The Assembly program is pleased to announce a new publication, titled Coming in from the Cold: A Safe Harbor from the CFAA and DMCA §1201, written by Harvard Law School student Daniel Etcovitch and 2017 Assembly cohort member Thyla van der Merwe.

The paper proposes a statutory safe harbor from the Computer Fraud and Abuse Act and section 1201 of the Digital Millennium Copyright Act for security research activities, built on a carefully constructed communication protocol based on a responsible disclosure model. The authors explore how such a safe harbor could give security researchers a greater degree of control over the vulnerability research publication timeline and guarantee researchers safety from legal consequences if they comply with the proposed safe harbor process.

The collaboration between Daniel and Thyla was born out of the 2017 Assembly program and the Internet & Society class co-taught by Harvard Law School Professor Jonathan Zittrain and MIT Media Lab Director Joi Ito, where they first met. As the authors describe it, they “found a common interest in legal barriers to security” during the Internet & Society course and together “began to engage with the reality that some security researchers – particularly academics – were concerned about potential legal liability under computer crime laws.”

Abstract

In our paper, we propose a statutory safe harbor from the CFAA and DMCA §1201 for security research activities. Based on a responsible disclosure model in which a researcher and vendor engage in a carefully constructed communication process and vulnerability classification system, our solution would enable security researchers to have a greater degree of control over the vulnerability research publication timeline, allowing for publication regardless of whether or not the vendor in question has effectuated a patch. Any researcher would be guaranteed safety from legal consequences if they comply with the proposed safe harbor process.

About the Berkman Klein Assembly

Assembly, at the Berkman Klein Center & MIT Media Lab, gathers developers, managers, and tech industry professionals for a rigorous spring term course on internet policy and a twelve-week collaborative development period to explore hard problems with running code. Each Assembly cohort comes together around a defined challenge. In 2017, the Assembly cohort focused on digital security. In 2018, the program focused on the ethics and governance of artificial intelligence. For more information, visit the program website, http://bkmla.org.
Categories: Tech-n-law-ogy

20 years of the Laws of Cyberspace

Subtitle: Berkman Klein event celebrates how Lawrence Lessig's groundbreaking paper provided structure to the Center's field of study

It’s been two decades since Harvard Law School Professor Lawrence Lessig published “The Laws Of Cyberspace,” which, in the words of Professor Jonathan Zittrain, “imposed some structure over the creative chaos of what maybe was a field that we’d call cyberlaw.”


What if an architecture emerges that permits constant monitoring; an architecture that facilitates the constant tracking of behavior and movement. What if an architecture emerged that would costlessly collect data about individuals, about their behavior, about who they wanted to become. And what if the architecture could do that invisibly, without interfering with an individual’s daily life at all? … This architecture is the world that the net is becoming. This is the picture of control it is growing into. As in real space, we will have passports in cyberspace. As in real space, these passports can be used to track our behavior. But in cyberspace, unlike real space, this monitoring, this tracking, this control of behavior, will all be much less expensive. This control will occur in the background, effectively and invisibly. -Lawrence Lessig, "The Laws of Cyberspace," 1998

It’s been two decades since Harvard Law School Professor Lawrence Lessig published “The Laws Of Cyberspace,” which, in the words of Professor Jonathan Zittrain, “imposed some structure over the creative chaos of what maybe was a field that we’d call cyberlaw.” Lessig’s groundbreaking paper describes four types of constraints that together regulate behavior – law, social norms, the market, and architecture – and argues that due to its special architecture, cyberspace is different from “real” space and thus subject to new possibilities for control by governments and other centers of power. “The world we are entering is not a world where freedom is assured,” Lessig wrote in 1998, but instead, “has the potential to be the most fully, and extensively, regulated space in our history.”

On April 16, the Berkman Klein Center for Internet & Society hosted a special event commemorating the 20th anniversary of the publication of “The Laws of Cyberspace,” with Lessig, Harvard Law School Professors Ruth Okediji and Jonathan Zittrain, and Dr. Laura DeNardis of American University. The panelists reflected on the paper and where the field of cyberlaw has taken us over the last two decades, and considered how some of the concerns raised in 1998 might apply today.

“I was sitting on that bench outside the Lewis building,” recollected Okediji of the day 20 years ago when she first read the paper, “and I will never forget both my sense of sheer terror that we were launching something that we had no idea where it would lead us, and then this sense of skepticism: ‘Well, how does he know he’s right?’” She explained that “The Laws of Cyberspace” led to her own work thinking about internet governance, social interaction on the net and the law. “It’s been 20 years, and Larry was right,” she said.

Lessig told the audience that the paper came in part out of a feeling of frustration. He feared that many internet enthusiasts were taking for granted that the freedom the internet allowed in 1998 was the freedom it would always allow, and he wanted to make the point that the regulability of place is a function of its architecture and thus not guaranteed. Without deliberate interventions, the lack of regulation that so many cherished in the early days of the internet could slip away.

“The architecture of the internet as it originally was made it really hard to regulate, but you might imagine the technology evolving to persistently watch everything you’re doing and enable simple traceability,” he said. “All of these evolutions in the architecture increase the regulability of the space, and then we’d need to decide, ‘Do we like that? Do we want that?’”

Lessig explained that even in 1998, governments and private markets seemed to be interested in increasing regulability and the ability to track what people were doing for the purposes of commerce and control.

“Arrangements of technical architecture are arrangements of power,” explained DeNardis. “This often has nothing to do with governments whatsoever.” For example, the World Wide Web Consortium designs accessibility for disabled people into their protocols, she said, which is an example of how technical architecture is determining public interest issues. DeNardis said that it’s often hard for people without a technical background to be involved in decisions like these, but that there’s currently a surge of people from beyond the technical sphere showing interest in participating in the decisions that shape our experience online and affect issues like identity and privacy. However, she said, this increase in public participation coincides with the proliferation of proprietary standards coming out of closed environments such as the Internet of Things and social media platforms.

Lessig added that as the space of innovation moves into “islands of innovation,” such as the large tech platforms like Google and Facebook, the generativity of innovation becomes contingent on each platform’s permission, creating the potential scenario where someone would choose not to create something for fear that the company would “pull the rug out.” This is an example of “how technical change and legal ownership work together to change the basic opportunity to innovate,” he said.

DeNardis made the point that while certain platforms might be islands in terms of interoperability, they are tied together in the backend by the third parties that collect and aggregate data about us. It’s important to look below the surface, she said. “That’s where a lot of the power is. The power to do things like censor LGBT people, the power to restrict people based on architecture-embedded intellectual property rights, and the power to monetize us through big data that’s aggregated with companies we’ve never even signed terms of service with.”

Okediji noted that there’s been little innovation in contract law when it comes to technology. “It’s not just that we’re missing the mark in the area of cyberspace. The regimes that surround cyberspace also have not received the attention they should,” she said, suggesting that the rules and norms around what makes a contract and the practice of “signing away all these rights with a click” might not be ideal.

“What troubles me quite significantly is that we have this 911 mentality when it comes to policy,” said Okediji. “Avoiding something in the future requires us to be thinking about it today, not tomorrow when the problem occurs.” Rather than dealing with problems only as they come up, she said, we need to ask ourselves: “What’s the vision for what cyberspace should look like 20 years from now?”

This article originally appeared in Harvard Law Today.

 

Categories: Tech-n-law-ogy

Art that Imitates Art: Computational Creativity and Creative Contracting

Subtitle: Jessica Fjeld and Mason Kortz, Cyberlaw Clinicians at Harvard Law

Join us for our last Tuesday Luncheon of the academic year! Cyberlaw Clinicians Jess Fjeld and Mason Kortz will lead a discussion about copyright in AI-generated works, the need for a shared understanding of what is and isn’t up for grabs in a license, and how forward-thinking contracts can prevent AI developers and artists from having their rights decided by the (often notoriously backwards-looking) legal system.

Parent Event: Berkman Klein Luncheon Series
Event Date: May 22, 2018, 12:00 pm

Tuesday, May 22, 2018 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Room 1015

RSVP required to attend in person
Event will be recorded and posted here

Complimentary Lunch Served

Computational creativity—a subdomain of artificial intelligence concerned with systems that replicate or assist human creative endeavors—has been the subject of academic inquiry for decades. Now, with recent improvements in machine learning techniques and the rising popularity of all things AI, computational creativity is a medium for critically and commercially successful works of art. From a 2016 Rembrandt to Jukedeck’s instant music (or muzak?), AI-assisted and AI-driven works are a reality. This raises mind-bending questions about the nature of creativity, the relationship between the artist and the viewer, and even the existence of free will. For many lawyers, it also raises a more immediate question: who owns all of this art?

Join Cyberlaw Clinicians Jess Fjeld and Mason Kortz for a discussion about copyright in AI-generated works, the need for a shared understanding of what is and isn’t up for grabs in a license, and how forward-thinking contracts can prevent AI developers and artists from having their rights decided by the (often notoriously backwards-looking) legal system.

About Jessica

Jessica Fjeld is a Clinical Instructor at Harvard Law School's Cyberlaw Clinic. She works in diverse areas including intellectual property, media and entertainment (particularly public media), freedom of expression, and law and policy relating to government and nonprofit entities. Before joining the Clinic, Jessica worked in Business & Legal Affairs for WGBH Educational Foundation, and as an associate at Skadden, Arps, Slate, Meagher & Flom LLP focused in corporate transactions. She received a JD from Columbia Law School, where she was a James Kent Scholar and Managing Editor of the Journal of Law and the Arts; an MFA in Poetry from the University of Massachusetts; and a BA from Columbia University.

About Mason

Mason Kortz is a clinical instructional fellow at the Harvard Law School Cyberlaw Clinic, part of the Berkman Klein Center for Internet & Society. His areas of interest include online speech and privacy and the use of data products (big or small) to advance social justice. Mason has worked as a data manager for the Scripps Institution of Oceanography, a legal fellow in the Technology for Liberty Project at the American Civil Liberties Union of Massachusetts, and a clerk in the District of Massachusetts. He has a JD from Harvard Law School and a BA in Computer Science and Philosophy from Dartmouth College. In his spare time, he enjoys cooking, reading, and game design.

 

Categories: Tech-n-law-ogy

Shaping Consumption: How Social Network Manipulation Tactics Are Impacting Amazon and Influencing Consumers

Subtitle: featuring Renee DiResta

This talk examines the ways that these same manipulative tactics are being deployed on Amazon, which is now the dominant product search engine and a battlefield for economically and ideologically motivated actors.

Parent Event: Berkman Klein Luncheon Series
Event Date: May 15, 2018, 12:00 pm

Tuesday, May 15, 2018 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Room 1015

RSVP required to attend in person
Event will be live webcast at 12:00 pm

Narrative manipulation issues - such as manufactured consensus, brigading, harassment, information laundering, fake accounts, news voids, and more - are increasingly well-documented problems affecting the entire social ecosystem. This has had negative consequences for information integrity, and for trust. This talk examines the ways that these same manipulative tactics are being deployed on Amazon, which is now the dominant product search engine and a battlefield for economically and ideologically motivated actors.

About Renee

Renee DiResta is the Director of Research at New Knowledge and Head of Policy at the nonprofit Data for Democracy. Renee investigates the spread of disinformation and manipulated narratives across social networks, and assists policymakers in understanding and responding to the problem. She has advised Congress, the State Department, and other academic, civic, and business organizations about understanding and responding to computational propaganda and information operations. In 2017, Renee was named a Presidential Leadership Scholar, continuing her work with the support of the Bush, Clinton, and LBJ foundations. In 2018, she received a Mozilla Foundation fellowship and an affiliation with the Berkman Klein Center for Internet & Society at Harvard University to work on its Media, Misinformation, and Trust project. She is a Founding Advisor to the Center for Humane Technology, and a Staff Associate at the Columbia University Data Science Institute.

Previously, Renee was part of the founding team of venture-backed supply chain logistics technology platform Haven, where she ran business development and marketing, and a co-founder of Vaccinate California, a parent-led grassroots legislative advocacy group. Renee has also been an investor at O’Reilly AlphaTech Ventures (OATV), focused on hardware and logistics startups, and an emerging markets derivatives trader at Jane Street Capital. Her work and writing have been featured in the New York Times, Politico, Slate, Wired, Fast Company, Inc., and the Economist. She is the author of the O’Reilly book “The Hardware Startup: Building Your Product, Business, and Brand”, and lives on the web at http://reneediresta.com and @noUpside.

 

Categories: Tech-n-law-ogy

GAiA releases its annual report highlighting its efforts to expand access to medicines for the world’s neediest


Global Access in Action (GAiA) released its annual report today, highlighting the major progress made in 2017 to expand access to medicines for the world’s neediest.

Cambridge, May 8, 2018 - Global Access in Action (GAiA) released its annual report today, highlighting the major progress made in 2017 to expand access to medicines for the world’s neediest.

2017 marked a year of significant progress in GAiA’s effort to improve access to medicines for vulnerable populations. The annual report showcases the major projects GAiA undertook in 2017, its active engagement with local and global stakeholders, and its organizational growth in staffing.

One of the major projects GAiA undertook in 2017 was the expansion of a pilot project that aims to develop a public-health-sensitive legal framework, one that allows for a sustainable supply of low-cost medicines while providing the legal protections necessary to incentivize innovation by pharmaceutical companies. The project started in Namibia in 2016 and has since expanded to two other countries in sub-Saharan Africa, Malawi and Mozambique. The initiative also involved a collaboration with Global Good to fight substandard and falsified (S&F) medicines in sub-Saharan Africa using a field detection technology: a miniature spectrometer.

Access to medicines is not the only issue at stake; the problem of S&F medicines can exacerbate the existing access challenge. In the introductory letter of the annual report, GAiA’s Co-Directors, William Fisher and Quentin Palfrey, stressed that, “Even those who have access are at risk of consuming counterfeit medicines in many countries that often lead to lethal consequences.” GAiA envisions and is working to establish a quality assurance network among the countries involved in the pilot project to allow for data sharing on S&F medical products.

Along with the expansion of the pilot project, GAiA also published a green paper, “Expanding Access to Medicines and Promoting Innovation: A Practical Approach,” in the April edition of the Georgetown Journal on Poverty Law and Policy, exploring practical strategies initiated by pharmaceutical companies to overcome access barriers in low- and middle-income countries.

Click here to read the full annual report.

About Global Access in Action
Global Access in Action, a project of the Berkman Klein Center for Internet & Society at Harvard University, seeks to expand access to lifesaving medicines and combat the communicable disease burden that disproportionately harms the world’s most vulnerable populations. We accomplish this by conducting action-oriented research, supporting breakthrough initiatives, facilitating stakeholder dialogue, and providing policy advice to both public and private sector stakeholders. GAiA seeks to foster dialogue across traditional boundaries between government, industry, civil society, and academia, and to promote new, innovative solutions amongst these parties to create better outcomes.

Categories: Tech-n-law-ogy

Your Guide to BKC@RightsCon 2018


Going to RightsCon in Toronto? Connect with members of the Berkman Klein community, and learn about their research


 

Wednesday, May 16th, 2018

Is This a New Face of Info War? "Patriotic" Trolling and Disinformation -- the Iran Edition
Simin Kargar
Details: Wednesday, May 16th, 2018; 10:30-11:45am – 205A
Online harassment and smear campaigns are increasingly applied as a form of information control to curb free speech and exert power in cyberspace. Targeted harassment of dissidents on social media appears as the most recent form of strategic communication, where particular messages are crafted by state-affiliated actors to manipulate public opinion. This session addresses the circumstances under which these coordinated efforts are likely to emerge, the latest practices of Iran to extend its ideological arms across social media, and the ultimate goals that they pursue.

Young, Safe, and Free: Respecting Children's Online Privacy and Freedom of Expression
Patrick Geary,  Sarah Jacobstain, Jasmina Byrne, Fred Carter, Sandra Cortesi, Ariel Fox, Patrik Hiselius, Natasha Jackson
Details: Wednesday, May 16th, 2018; 12:00-1:15pm – 206C
This is a chance to talk about practical steps that companies and public authorities can take to protect and empower children online. Companies and Data Protection Authorities will share how they consider risks to children's privacy online while still providing children with full, open and enriching online experiences. Civil society organizations will highlight the work that remains to be done, and academic researchers will ground this in evidence about how children exercise their rights to privacy and freedom of expression online.

Online Criticism, Falsified Court Orders & the Role of Intermediaries: Coping With Takedown Requests of Questionable Legitimacy
Adam Holland, Daphne Keller, Eugene Volokh
Details:
Wednesday, May 16th, 2018; 2:30-3:45pm – 204B
Lumen is a research project devoted to collecting and analyzing requests to remove online materials. Recently, researchers and advocates, including Professor Eugene Volokh, have uncovered an alarming pattern of falsified court orders used to seek and often achieve the removal of online material. The Lumen team will open the workshop with a brief introduction to Lumen and to the site’s API. Once the attendees are familiar with Lumen, they will facilitate a discussion about the implications of falsified court orders within the takedown request landscape.

New Tools for Visualizing Communities, Projects, and Resources: Inspiring Engagement and Exploration
Sandra Cortesi
Details: 
Wednesday, May 16th, 2018; 2:30-3:45pm – 200A
In this tech demo, we will present interactive tools developed at the Berkman Klein Center for Internet & Society at Harvard University to visualize communities, projects, and resources.

Language Access and Humanitarian Response: A Matter of Human Rights
An Xiao Mina, Olly Farshi, Natasha Jimenez
Details: 
Wednesday, May 16th, 2018; 5:15-6:15pm – 205C
The world is seeing an unprecedented scale of migration due to conflict and climate-related natural disasters. People from different linguistic backgrounds are coming together in a number of humanitarian contexts, such as rapid response work and support in refugee sites. Without the ability to communicate effectively, both aid workers and beneficiaries stand to lose significantly. In this panel, members of Meedan and Outside will share their experiences in the field in dialogue with others who are looking at issues of language barriers in humanitarian work.

Teaching AI to Explain Itself
Suchana Seth
Details: Wednesday, May 16th, 2018; 5:15-6:15pm – 205A
A growing number of artificial intelligence algorithms are not black boxes - they can explain their decision mechanisms. What do "good" explanations look like in the world of accountable algorithms - from the perspective of users, consumers, and regulators of AI? How do we set realistic expectations about explainable or interpretable machine learning algorithms?

Scrutinizing the Little Brothers: Corporate Surveillance and the Roles of the Citizen, Consumer, and Company
Katie McInnis, David O’Brien, Christopher Parsons
Details: 
Wednesday, May 16th, 2018; 5:15-6:15pm – 203B
In this session, we will bring together panelists from the University of Toronto’s Citizen Lab, the Berkman Klein Center at Harvard University, and Consumer Reports, each of whom is addressing issues of corporate surveillance and accountability. Panelists will share overviews of their organizations’ goals, challenges their programs face, and changes they hope their projects will effectuate. We will present three different perspectives: the consumer, the citizen, and the company. All three projects are responses to pervasive corporate surveillance and aim to lessen the imbalance between corporations and individuals.

 

Thursday, May 17th, 2018

Data Driven Decency: New, Collaborative Experiments to Diminish Hate and Harassment Online
Rob Faris, Susan Benesch
Details: 
Thursday, May 17th, 2018; 9:00-10:15am – 205C
In this session we will report on - and brainstorm new possibilities for - experimental methods for diminishing harassment and hate speech online. The speakers will describe the first academic research experiment with an Internet platform that committed in advance to sharing data and allowing publication in a peer-reviewed journal. Participants will be asked to share best practices from their own experiences with collaborative online research. In closing, the moderator will ask for ideas to continue research experiments that aim to diminish hate speech online. Afterward, we will circulate the newly generated ideas, and invite continued collaboration for their implementation.

Secure UX Principles: Let's Build a Checklist of User Security and Good Design
a panel moderated by An Xiao Mina
Details: 
Thursday, May 17th, 2018; 10:30-11:45 – 201C
We present a research and design checklist for people who are developing technologies to help communities at risk. This checklist is designed to promote human rights-centered design by streamlining the process of user research. We believe this resource will aid builders of tools, platforms, and services with limited resources and time. 

Mind the Shark: Informational Flow in Natural Disasters, from Fake News to Rumors
An Xiao Mina, Olly Farshi, Natasha Jimenez, Antonio Martinez
Details: 
Thursday, May 17th, 2018; 12:00-1:15pm – 200B
While misinformation has risen to the top of the agenda in journalism, its impact on humanitarian workers has yet to be fully discussed. Misinformation during natural and human disasters is a consistent theme, causing confusion and leading people to miss access to critical resources - whether that’s the frequent false threat of sharks during hurricanes or confusion about where ICE is detaining people fleeing a disaster site. What are the challenges and opportunities in this space? How can we design solutions that address this? This conversation will look at specific cases of addressing misinformation after disasters, when rapid responders may not even have access to the most current, accurate information.

Cross-Harm Collaboration: Building Strategic Responses to Risks and Harms Online
Nikki Bourassa, Chloe Colliver, Henry Tuck
Details: 
Thursday, May 17th, 2018; 1:20-2:20pm – 206D
Recent revelations linking the use of disinformation, fake accounts, and hate speech to sway elections, coupled with the rise of harm from cyber-bullying, coordinated online harassment, misogyny and child sexual exploitation, demonstrate the range of threats facing internet users. Tech companies are asked to tackle these issues, but often by a huge range of uncoordinated voices. In this workshop, ISD and the Berkman Klein Center will discuss the inefficiency of current silos in online harm prevention work, foster cross-sector collaboration on research and projects, and create actionable suggestions for ways to make collaboration successful and useful for CSOs and technology companies.

Translation Project: A Translation Suite for Humanitarian Organizations
An Xiao Mina, Olly Farshi, and Natasha Jimenez,
Details: 
Thursday, May 17th, 2018; 2:30-3:45pm – 200A
As the global population of forcibly displaced people reaches record levels, the language barrier between refugees and those seeking to help them remains among the first challenges in serving their immediate relief needs. The Translation Project seeks to prototype and develop open-source technology and a community of translators to address this pressing need in a way that is scalable and sustainable.

Reframed! Media Analysis for Digital Inclusion
Belen Febres-Cordero, Nikki Bourassa, Natalie Gyenes
Details: 
Thursday, May 17th, 2018; 4:00-5:00pm – 200B
Access to media analysis tools has generally been limited to academic researchers and industry communications or media professionals. In the absence of tools accessible to community groups or advocacy organizations, there are limited opportunities for more marginalized or vulnerable communities to gather evidence-driven knowledge regarding how their own issues are covered in the media. Global Voices, in partnership with Media Cloud, is piloting an initiative that democratizes access to media analysis tools, bringing them to vulnerable populations so that they can understand, and possibly direct, their own representation in the media.

 

Friday, May 18th, 2018

Countering Media Manipulation: Linking Research and Action
Robert Faris, Joan Donovan, An Xiao Mina, Claire Wardle
Details: 
Friday, May 18th, 2018; 9:00-10:15am – 206D
Although widespread propaganda and disinformation are not a new phenomenon, their occurrence within today’s online networked environments has wrought new challenges for democracy. A mix of legitimate political entities and malicious actors have exploited and leveraged vulnerabilities in platform architectures to surreptitiously insert false news narratives into unwitting media environments. Worse, these campaigns are often coordinated to take advantage of platform algorithms and blur the line between the genuine and the false. Plentiful opportunities remain to foster greater collaboration within the research community and between researchers, journalists, and media watchdogs. In this workshop, we will identify and put into place better mechanisms to coordinate research efforts and to link researchers with practitioners.

Internet Monitor: Real-time Internet censorship research and visualization tools demo
Casey Tilton
Details: Friday, May 18th, 2018, 12:00-1:15pm
Interested in learning more about the technology behind real-time Internet censorship research and contributing to the Internet Monitor project? In this session, researchers from the Berkman Klein Center for Internet & Society at Harvard University will demo two tools developed by the Internet Monitor project. First up is the Internet Monitor Dashboard, a tool that compiles and visualizes data about Internet activity and content controls in over 100 countries. Next up is AccessCheck, a tool that lets users test in real time the availability of websites in countries around the world. Test results include a thumbs up/down notification indicating whether the website is available, as well as a screenshot and more detailed data on status codes, timings, and any errors encountered. In addition to testing single URLs, AccessCheck allows users to test the availability of lists of country-specific websites that have been created by experts in the censorship practices of governments around the world.

Have We Entered a Brave New World of Global Content Takedown Orders?
Vidushi Marada, Jennifer Daskal, Daphne Keller, Vivek Krishnamurthy, Stefania Milan, Jonathon Penney
Details: 
Friday, May 18th, 2018; 4:00-5:00pm – 206C
From the Supreme Court of Canada's Equustek decision to Germany's "NetzDG" law, concerns of a "race to the bottom" are mounting as every country seeks to enforce its national preferences on the global internet. Now that the brave new world of global content regulation is here, what do we do about it? When is it legitimate for a government to enforce its preferences on a global rather than a national basis? And where do private forms of governance, like algorithmic curation on and by social media platforms, fit into this picture? Join our panel of experts from North America, Europe, and South Asia for an update on some of the biggest recent developments in this area and a wide-ranging discussion of how all those who care about the open, global internet should best respond to these trends.

Artificial Intelligence: Governance and Inclusion
Eduardo Magrani, Chinmayi Arun, Amar Ashar, Christian Djefal, Malavika Jayaram
Details: 
Friday, May 18th, 2018; 5:15-6:15pm – 201B
Even though the developing world will be directly affected by the deployment of AI technologies and services, policy debates about AI have been dominated by organizations and actors in the Global North. As a follow-up to the international event “Artificial Intelligence and Inclusion” held in Rio de Janeiro earlier this year, this discussion will focus on the development of AI and its impact on inclusion in areas such as health and wellbeing, education, low-resource communities, public safety, and employment. The goal of this roundtable is to bring these debates to the RightsCon community, enlarging the conversation and deepening the understanding of AI inclusion challenges, governance, and opportunities, and to identify and discuss areas for research, education, and action.

Categories: Tech-n-law-ogy

Governance and Regulation in the land of Crypto-Securities (as told by CryptoKitties)

Subtitle featuring founding members, Dieter Shirley and Alex Shih Teaser

Join founding members of the CryptoKitties team, Dieter Shirley and Alex Shih, as they discuss the unique governance, legal, and regulatory challenges of putting cats on the Ethereum blockchain.

Parent Event Berkman Klein Luncheon Series Event Date May 8 2018 12:00pm to May 8 2018 12:00pm Thumbnail Image: 

Tuesday, May 8, 2018 at 12:00 pm
Harvard Law School campus
Wasserstein Hall, Milstein East C
Room 2036, Second Floor
RSVP required to attend in person
Event will be live webcast at 12:00 pm

Join founding members of the CryptoKitties team, Dieter Shirley and Alex Shih, as they discuss the unique governance, legal, and regulatory challenges of putting cats on the Ethereum blockchain. CryptoKitties is an early pioneer in the space and, having navigated securities law early in its release, will share unique insights on classification. They will also discuss some of the ethical challenges they've been facing, and best practices for approaching them.

 

About Dieter

Dieter is a partner and chief technical architect at Axiom Zen, an award-winning venture studio specialized in applying emerging technologies to unsolved business problems. Products developed by Axiom Zen have touched 200+ million consumers and are used by the world’s leading companies, including Facebook, Microsoft, and NASA, as well as by eminent academic institutions and government organizations.

Dieter is an original participant in the world of cryptocurrency, mining his first Bitcoin on his home computer in 2010. Since then he has served as a technical architect on a series of advanced blockchain projects including as co-founder of CryptoKitties, the most successful collectibles game on the blockchain. Dieter is also the founding CTO of Cornerstone, a real estate transaction platform being developed in partnership with Ross McCredie, former founder and CEO of Sotheby’s Canada, and Dave Carson, former COO at Sotheby’s Global.

Axiom Zen was named first among Canada’s Most Innovative Companies by Canadian Business. They pride themselves in diversity of talent: a team of ~80 creatives includes published authors, over a dozen former founders, diversity from 20+ national origins, and decades of collective experience at startups and Fortune 500s alike.

Axiom Zen is the team behind ZenHub, the world’s leading collaboration solution for technical teams using GitHub; and the developer of Timeline, named Apple’s Best App of the month, Editor’s Choice in 10 countries, and Best New App in 88 countries. Axiom Zen is the creator of Toby, recognized as Top Chrome Extension of the year by both Google and Product Hunt, and the parent company of Hammer & Tusk, a leader in the world of immersive experiences (AR/VR). Axiom Zen's work has been featured in TIME Magazine, The New York Times, and Fast Company.

About Alex

Alex Shih is General Partner and Chief Financial Officer (CFO) at Axiom Zen, an award-winning venture studio specialized in applying emerging technologies to unsolved business problems, including the team behind CryptoKitties, the world’s most successful blockchain application.

Prior to joining Axiom, Alex executed investment strategies across the capital structure in both public and private markets in roles with KKR and Highfields Capital. Alex holds a B.S. / M.S. in Management Science & Engineering from Stanford University.



Categories: Tech-n-law-ogy

Encryption Policy And Its International Impacts: A Framework For Understanding Extraterritorial Ripple Effects

Teaser

This paper explores the potential international ripple effects that can occur following changes to domestic encryption policies.

Publication Date 2 May 2018 Author(s) Thumbnail Image: External Links: Download from Hoover.org | Download from DASH

This paper explores the potential international ripple effects that can occur following changes to domestic encryption policies. Whether these changes take the form of a single coherent national policy or a collection of independent (or even conflicting) policies, the impacts can be unexpected and wide-ranging. The paper offers a conceptual model for how the ripple effects from national encryption policies might propagate beyond national borders, and it provides a set of factors that can help policy-makers anticipate some of the most likely ripple effects of proposed encryption policies.

Read Ryan Budish's post from May 2, 2018, about the paper on Lawfare.

Categories: Tech-n-law-ogy

The Law and Ethics of Digital Piracy: Evidence from Harvard Law School Graduates

Subtitle Featuring Dariusz Jemielniak and Jérôme Herguex Teaser

When do Harvard law students perceive digital file sharing (and piracy) as fine?

Parent Event Berkman Klein Luncheon Series Event Date May 1 2018 12:00pm to May 1 2018 12:00pm Thumbnail Image: 

Tuesday, May 1, 2018 at 12:00 pm
Harvard Law School campus
Wasserstein Hall, Milstein West B
Room 2019, Second Floor
RSVP required to attend in person
Event will be live webcast at 12:00 pm

Harvard Law School is one of the top law schools in the world and educates the intellectual and financial elites. Lawyers are held to the highest professional and ethical standards. And yet, when it comes to digital file sharing, they overwhelmingly perceive it as an acceptable social practice – as long as individuals do not derive monetary benefits from it. We want to discuss this phenomenon, as well as the social contexts in which file sharing is more or less acceptable. We would also like to foster a discussion on possible changes in regulation that would catch up with the established social norm.

About Dariusz

Dariusz Jemielniak is a Wikipedian, Full Professor of Management at Kozminski University, and an entrepreneur (having established the largest online dictionary in Poland, ling.pl, among others). 

Dariusz currently serves on the Wikimedia Foundation Board of Trustees. In his academic life, he studies the open collaboration movement (in 2014 he published "Common Knowledge? An Ethnography of Wikipedia" with Stanford University Press), media file-sharing practices (among lawyers and free knowledge activists), and political meme communities.

He has held visiting appointments at Cornell University (2004-2005), Harvard (2007, 2011-2012), and the University of California, Berkeley (2008), where he studied software engineers' workplace culture.

About Jérôme

Jerome is an Assistant Research Professor at the National Center for Scientific Research (CNRS), a Fellow at the Center for Law and Economics at ETH Zurich, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. From 2011 to 2014, Jerome spent three years as a Research Fellow at the Berkman Klein Center, where he did most of his Ph.D. work.

Jerome is a behavioral economist operating at the boundaries between psychology, economics and computer science. In his research, he typically couples experimental methods with the analysis of big data to uncover how psychological and cognitive traits shape our behavior over the Internet, with a particular focus on online cooperation, peer production and decision making. He is strongly involved with Professor Yochai Benkler in the Cooperation project. He is also involved with the Mindsport Research Network, which he helped launch together with Professor Charles Nesson.

Jerome completed a Ph.D. in Economics at Sciences Po and the University of Strasbourg. He holds Master’s degrees in both International Economics and International Affairs from Sciences Po, and a B.A. in Economics & Finance from the University of Strasbourg.

Jerome originates from the French region of Alsace. He has lived in France, Egypt, the U.S., Jordan and Switzerland. Jerome speaks French, English and Arabic and is heavily interested in public policy and international affairs.

 


Categories: Tech-n-law-ogy

Blockchain and the Law: The Rule of Code

Subtitle A book talk featuring author, Primavera De Filippi Teaser

Blockchain technology is ultimately a double-edged technology that can be used to either support or supplant the law. This talk looks at the impact of blockchain technology on a variety of fields (finance, contracts, organizations, etc.), and the benefits and drawbacks of blockchain-based systems.

Event Date Apr 23 2018 4:00pm to Apr 23 2018 4:00pm Thumbnail Image: 

Monday, April 23, 2018 at 4:00 pm
Harvard Law School campus
Wasserstein Hall, Milstein West B
Room 2019, Second Floor
Reception immediately following at HLS Pub
RSVP required to attend in person
Event will be webcast live

This talk will look at how blockchain technology is a double-edged technology that could be used to either support or supplant the law. After describing the impact of this new technology on a variety of fields (including payments, contracts, communication systems, organizations and the internet of things), it will examine how blockchain technology can be framed as a new form of regulatory technology, while at the same time enabling the creation of new autonomous systems which are harder to regulate. The talk will conclude with an overview of the various ways in which blockchain-based systems can be regulated, and the dangers of doing so.

About Primavera De Filippi

Primavera obtained a Master's degree in Business & Administration from the Bocconi University of Milan, and a Master's degree in Intellectual Property Law from the Queen Mary University of London. She holds a PhD from the European University Institute in Florence, where she explored the legal challenges of copyright law in the digital environment, with special attention to the mechanisms of private ordering (Digital Rights Management systems, Creative Commons licenses, etc.). During these years, she spent two months at the University of Buffalo in New York and one year as a visiting scholar at the University of California at Berkeley. Primavera is now a permanent researcher at the National Center of Scientific Research (CNRS), where she founded the Institute of Interdisciplinary Research on Internet & Society (www.iriis.fr). Primavera is a former fellow and current faculty associate at the Berkman Klein Center for Internet & Society at Harvard University. Visit here for additional bio information for Primavera including her online activities, research interests, recent publications, and online videos.


Categories: Tech-n-law-ogy

Force of Nature

Subtitle Celebrating 20 Years of the Laws of Cyberspace Teaser

Join us as we celebrate 20 years of the Laws of Cyberspace and the ways in which it laid the groundwork for our Center's field of study.

Event Date Apr 16 2018 4:00pm to Apr 16 2018 4:00pm Thumbnail Image: 

Monday, April 16, 2018 at 4:00 pm 
Harvard Law School campus
Austin Hall West, Room 111
Reception immediately following event
RSVP required to attend in person
Event will be webcast live

Celebrating 20 years of the Laws of Cyberspace and how it laid the groundwork for the Berkman Klein Center's field of study.

Please join us as we recognize the 20th anniversary of the paper The Laws of Cyberspace (Taipei, March '98) by Professor Lawrence Lessig. Join Professor Lessig, the Roy L. Furman Professor of Law and Leadership at Harvard Law School, along with Professor Ruth L. Okediji, the Jeremiah Smith, Jr. Professor of Law at Harvard Law School and Co-Director of the Berkman Klein Center, and Dr. Laura DeNardis, Professor in the School of Communication at American University, in a conversation moderated by Professor Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School and the Harvard Kennedy School of Government, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, Director of the Harvard Law School Library, and Faculty Director of the Berkman Klein Center for Internet & Society.

About Professor Lessig

Lawrence Lessig is the Roy L. Furman Professor of Law and Leadership at Harvard Law School. Prior to rejoining the Harvard faculty, Lessig was a professor at Stanford Law School, where he founded the school’s Center for Internet and Society, and at the University of Chicago. He clerked for Judge Richard Posner on the 7th Circuit Court of Appeals and Justice Antonin Scalia on the United States Supreme Court. Lessig serves on the Board of the AXA Research Fund, and on the advisory boards of Creative Commons and the Sunlight Foundation. He is a Member of the American Academy of Arts and Sciences, and the American Philosophical Association, and has received numerous awards, including the Free Software Foundation’s Freedom Award, Fastcase 50 Award and being named one of Scientific American’s Top 50 Visionaries. Lessig holds a BA in economics and a BS in management from the University of Pennsylvania, an MA in philosophy from Cambridge, and a JD from Yale.

About Professor Okediji

Ruth L. Okediji is the Jeremiah Smith, Jr. Professor of Law at Harvard Law School and Co-Director of the Berkman Klein Center. A renowned scholar in international intellectual property (IP) law and a foremost authority on the role of intellectual property in social and economic development, Professor Okediji has advised inter-governmental organizations, regional economic communities, and national governments on a range of matters related to technology, innovation policy, and development. Her widely cited scholarship on IP and development has influenced government policies in sub-Saharan Africa, the Caribbean, Latin America, and South America. Her ideas have helped shape national strategies for the implementation of the WTO's Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement). She works closely with several United Nations agencies, research centers, and international organizations on the human development effects of international IP policy, including access to knowledge, access to essential medicines and issues related to indigenous innovation systems.

About Dr. DeNardis

Dr. Laura DeNardis is a globally recognized Internet governance scholar and a Professor in the School of Communication at American University in Washington, DC. She also serves as Faculty Director of the Internet Governance Lab at American University. Her books include The Global War for Internet Governance (Yale University Press 2014); Opening Standards: The Global Politics of Interoperability (MIT Press 2011); Protocol Politics: The Globalization of Internet Governance (MIT Press 2009); Information Technology in Theory (Thompson 2007 with Pelin Aksoy), and a new co-edited book The Turn to Infrastructure in Internet Governance (Palgrave 2016). With a background in information engineering and a doctorate in Science and Technology Studies (STS), her research studies the social and political implications of Internet technical architecture and governance. 

She is an affiliated fellow of the Yale Law School Information Society Project and served as its Executive Director from 2008-2011. She is an adjunct Senior Research Scholar in the faculty of international and public affairs at Columbia University and a frequent keynote speaker at the world’s most prestigious universities and institutions. She has previously taught at New York University and Yale Law School. 

About Professor Zittrain

Jonathan Zittrain is the George Bemis Professor of International Law at Harvard Law School and the Harvard Kennedy School of Government, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, Vice Dean for Library and Information Resources at the Harvard Law School Library, and co-founder of the Berkman Klein Center for Internet & Society.  His research interests include battles for control of digital property and content, cryptography, electronic privacy, the roles of intermediaries within Internet architecture, human computing, and the useful and unobtrusive deployment of technology in education.

He performed the first large-scale tests of Internet filtering in China and Saudi Arabia, and as part of the OpenNet Initiative co-edited a series of studies of Internet filtering by national governments: Access Denied: The Practice and Policy of Global Internet Filtering; Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace; and Access Contested: Security, Identity, and Resistance in Asian Cyberspace.

He is a member of the Board of Directors of the Electronic Frontier Foundation and the Board of Advisors for Scientific American. He has served as a Trustee of the Internet Society and as a Forum Fellow of the World Economic Forum, which named him a Young Global Leader. He was a Distinguished Scholar-in-Residence at the Federal Communications Commission, and previously chaired the FCC's Open Internet Advisory Committee. His book The Future of the Internet -- And How to Stop It predicted the end of general purpose client computing and the corresponding rise of new gatekeepers. That and other works may be found at <http://www.jz.org>.


Categories: Tech-n-law-ogy

Honoring All Expertise: Social Responsibility and Ethics in Tech

Subtitle featuring Kathy Pham & Friends from the Berkman Klein Community Teaser

Learn more about social responsibility and ethics in tech from cross-functional perspectives featuring social scientists, computer scientists, historians, lawyers, political scientists, architects, and philosophers.

Parent Event Berkman Klein Luncheon Series Event Date Apr 17 2018 12:00pm to Apr 17 2018 12:00pm Thumbnail Image: 

Tuesday, April 17, 2018 at 12:00 pm
Harvard Law School campus
[UPDATED] Wasserstein Hall, Milstein West B
Room 2019, Second Floor
RSVP required to attend in person
Event will be live webcast at 12:00 pm

The Ethical Tech working group at the Berkman Klein Center will host a series of lightning talks about social responsibility and ethics in tech from cross-functional perspectives featuring social scientists, computer scientists, historians, lawyers, political scientists, architects, and philosophers. The Ethical Tech working group meets weekly to discuss and debate current tech events, drawing on the range of expertise in the room to examine each issue from multiple angles.

Doaa Abu-Elyounes

Doaa Abu-Elyounes is a second-year S.J.D. candidate at Harvard Law School, where she researches the effect of artificial intelligence algorithms on the criminal justice system. Before starting her S.J.D., Doaa completed an LL.M. at Harvard Law School. Doaa is originally from Israel, where she completed an LL.B. and LL.M. at the University of Haifa with a special focus on law and technology. After law school, Doaa worked at the Supreme Court of Israel as a law clerk, and at the Israeli Ministry of Justice as an advisor to the Director General of the Ministry. During her time at the Berkman Klein Center, Doaa will focus on algorithmic accountability and governance of AI in criminal justice. In particular, she will analyze the impact of AI-based risk assessment tools on the criminal justice system.

Joanne Cheung

Joanne K. Cheung is an artist and designer. Her work focuses on how people, buildings, and media contribute to democratic governance. She enjoys thinking across scales and collaborating across differences. 

She received her B.A. from Dartmouth College, M.F.A. from Bard College Milton Avery Graduate School of the Arts, and is currently pursuing her M.Arch at Harvard Graduate School of Design. 

Mary Gray

Mary L. Gray is a Fellow at Harvard University’s Berkman Klein Center for Internet and Society and Senior Researcher at Microsoft Research. She chairs the Microsoft Research Lab Ethics Advisory Board. Mary maintains a faculty position in the School of Informatics, Computing, and Engineering with affiliations in Anthropology, Gender Studies and the Media School, at Indiana University. Mary’s research looks at how technology access, social conditions, and everyday uses of media transform people’s lives.  Her most recent book, Out in the Country: Youth, Media, and Queer Visibility in Rural America, looked at how youth in the rural United States use media to negotiate their identities, local belonging, and connections to broader, political communities. Mary’s current project combines ethnography, interviews, and survey data with large-scale platform transaction data to understand the impact of automation on the future of work and workers’ lives. Mary’s research has been covered in the popular press, including The New York Times, Los Angeles Times, and the Guardian. She served on the American Anthropological Association’s Executive Board and chaired its 113th Annual Meeting. Mary currently sits on the Executive Board of Public Responsibility in Medicine and Research (PRIM&R). In 2017, Mary joined Stanford University’s “One-Hundred-Year Study on Artificial Intelligence” (AI100), looking at the future of AI and its policy implications.

Jenn Halen

Jenn Halen is a fellow at the Berkman Klein Center. She works on research and community activities for the Ethics and Governance of Artificial Intelligence Initiative. Jenn is a doctoral candidate in Political Science at the University of Minnesota and a former National Science Foundation Graduate Research Fellow. Her research broadly focuses on the ways that new and emerging technologies influence, and are influenced by, politics. She will study the complex social and political implications of advanced machine learning and artificial intelligence, especially as they relate to issues of governance. She also works on issues of cyber security, human rights, and social justice. Jenn enjoys ballet, almost everything geek-related, and good vegan food. She makes excellent vegan mac and cheese, and she will probably tell you about it.

Jenny Korn

Jenny Korn is an activist of color for social justice and scholar of race, gender, and media with academic training in communication, sociology, theater, public policy, and gender studies from Princeton, Harvard, Northwestern, and the University of Illinois at Chicago. She will examine identity and representation through online and in-person discourses, focusing on how popular concepts of race and gender are influenced by digital interactions, political protest, and institutional kyriarchy.

Kathy Pham

Kathy Pham is a computer scientist, cancer patient sidekick, product manager, and leader with a love for developing products, operations, hacking bureaucracy, building and leading teams, all things data, healthcare, and weaving public service and advocacy into all aspects of life. As a 2017-2018 fellow at the Berkman Klein Center, Kathy will explore artificial intelligence, and the ethics and social impact responsibility of engineers when writing code and shipping products. Most recently, Kathy was a founding product and engineering member of the United States Digital Service, a tech startup in government at the White House, where she led and contributed to public services across Veterans Affairs, the Department of Defense, Talent, and Precision Medicine. She sits on the advisory boards of the local Anita Borg Institute and the “Make the Breast Pump Not Suck” initiative. Previously, Kathy held a variety of roles in product, engineering, and data science at Google, IBM, and Harris Healthcare Solutions. Outside of work, Kathy founded the Cancer Sidekick Foundation to spread leukemia knowledge and build a cancer community, started Google's first internal Business Intelligence Summit, founded Atlanta United For Sight, placed first at the Imagine Cup competition (basically the World Cup, but for tech geeks) representing the United States with a news sentiment-analysis engine, spoke at the White House State of STEM 2015, and was First Lady Michelle Obama’s guest at the 2015 State of the Union address. She has also been spotted at the gaming finals of the After Hours Gaming League for StarCraft II, speaking at tech conferences, and hosting food-themed Formula 1 racing hangouts. Kathy holds a Bachelor's and Master's of Computer Science from the Georgia Institute of Technology in Atlanta, Georgia, and from Supelec in Metz, France.

Luke Stark

Luke Stark is a Postdoctoral Fellow in the Department of Sociology at Dartmouth College, and studies the intersections of digital media and behavioral science. Luke’s work at the Berkman Klein Center will explore the ways in which psychological techniques are incorporated into social media platforms, mobile apps, and artificial intelligence (AI) systems — and how these behavioral technologies affect human privacy, emotional expression, and digital labor. His scholarship highlights the asymmetries of power, access and justice that are emerging as these systems are deployed in the world, and the social and political challenges that technologists, policymakers, and the wider public will face as a result. Luke holds a PhD from the Department of Media, Culture, and Communication at New York University, and an Honours BA and MA from the University of Toronto; he has been a Fellow of the NYU School of Law’s Information Law Institute (ILI), and an inaugural Fellow with the University of California Berkeley’s Center for Technology, Society, and Policy (CTSP). He tweets @luke_stark; learn more at https://starkcontrast.co.

Salome Viljoen

Salome is a Fellow in the Privacy Initiatives Project at the Berkman Klein Center for Internet and Society. Salome’s professional interest is the intersection between privacy, technology, and inequality. Before coming to the Berkman Klein Center, Salome was an associate at Fenwick & West, LLP, where she worked with technology company clients on a broad variety of matters. She has a JD from Harvard Law School, an MSc from the London School of Economics, and a BA in Political Economy from Georgetown University. In her spare time, she enjoys reading, gardening, and hanging out with her cat.

Photo courtesy of socialmediasl444
 


Categories: Tech-n-law-ogy

THEFT! A History of Music

Subtitle Professors James Boyle and Jennifer Jenkins (Duke Law School) discuss Theft! A History of Music, their graphic novel about musical borrowing. Teaser

Theft! A History of Music is a graphic novel laying out a 2000-year-long history of musical borrowing, from Plato to rap.

Parent Event Berkman Klein Luncheon Series Event Date Apr 10 2018 12:00pm to Apr 10 2018 12:00pm Thumbnail Image: 

Tuesday, April 10, 2018 at 12:00 pm
Harvard Law School campus
Wasserstein Hall, Milstein East A
Room 2036, Second Floor
RSVP required to attend in person
Event will be live webcast at 12:00 pm

You can download the book here. Complimentary copies available at event!

This comic book lays out 2000 years of musical history. A neglected part of musical history. Again and again there have been attempts to police music; to restrict borrowing and cultural cross-fertilization. But music builds on itself. To those who think that mash-ups and sampling started with YouTube or the DJ’s turntables, it might be shocking to find that musicians have been borrowing—extensively borrowing—from each other since music began. Then why try to stop that process? The reasons varied. Philosophy, religion, politics, race—again and again, race—and law. And because music affects us so deeply, those struggles were passionate ones. They still are.

The history in this book runs from Plato to Blurred Lines and beyond. You will read about the Holy Roman Empire’s attempts to standardize religious music with the first great musical technology (notation) and the inevitable backfire of that attempt. You will read about troubadours and church composers, swapping tunes (and remarkably profane lyrics), changing both religion and music in the process. You will see diatribes against jazz for corrupting musical culture, against rock and roll for breaching the color-line. You will learn about the lawsuits that, surprisingly, shaped rap. You will read the story of some of music’s iconoclasts—from Handel and Beethoven to Robert Johnson, Chuck Berry, Little Richard, Ray Charles, the British Invasion and Public Enemy.

To understand this history fully, one has to roam wider still—into musical technologies from notation to the sample deck, aesthetics, the incentive systems that got musicians paid, and law’s 250-year struggle to assimilate music without destroying it in the process. Would jazz, soul or rock and roll be legal if they were reinvented today? We are not sure, and that seems... worrying. We look forward to playing you some of the music, showing the pictures and hearing your views.

About James

James Boyle is William Neal Reynolds Professor of Law at Duke Law School and the former Chairman of the Board of Creative Commons. He has written for The New York Times, The Financial Times, Newsweek and many other newspapers and magazines. His other books include The Public Domain: Enclosing the Commons of the Mind; Shamans, Software and Spleens: Law and the Construction of the Information Society; and Bound By Law, a comic book about fair use, copyright and creativity (with Jennifer Jenkins).

About Jennifer

Jennifer Jenkins is a Clinical Professor of Law at Duke Law School and the Director of the Center for the Study of the Public Domain. Apart from her legal qualifications, she also plays the piano and holds an MA in English from Duke University, where she studied creative writing with the late Reynolds Price and Milton with Stanley Fish. Her most recent book is Intellectual Property: Cases and Materials (3rd ed, 2016) (with James Boyle). Her recent articles include In Ambiguous Battle: The Promise (and Pathos) of Public Domain Day, and Last Sale? Libraries’ Rights in the Digital Age.


Categories: Tech-n-law-ogy

Remedies for Cyber Defamation: Criminal Libel, Anti-Speech Injunctions, Forgeries, Frauds, and More

Subtitle Featuring Professor Eugene Volokh, UCLA School of Law Teaser

“Cheap speech” has massively increased ordinary people’s access to mass communications -- both for good and for ill. How has the system of remedies for defamatory, privacy-invading, and harassing speech reacted? Some ways are predictable; some are surprising; some are shocking. Prof. Eugene Volokh (UCLA) will lay it out at a special Berkman Klein Luncheon on Monday, April 9th. Please join us!

Parent Event Berkman Klein Luncheon Series Event Date Apr 9 2018 12:00pm to Apr 9 2018 12:00pm Thumbnail Image: 

Monday, April 9, 2018 at 12:00 pm
Harvard Law School campus
Wasserstein Hall, Milstein West A
Room 2019, Second Floor
RSVP required to attend in person

Watch Live Starting at 12pm
(video and audio will be archived on this page following the event)

If you experience a video disruption, reload the page to refresh the webcast.

This event is being sponsored by Lumen, a project of the Berkman Klein Center for Internet & Society at Harvard University.

“Cheap speech” has massively increased ordinary people’s access to mass communications -- both for good and for ill.  How has the system of remedies for defamatory, privacy-invading, and harassing speech reacted?  Some ways are predictable; some are surprising; some are shocking. Prof. Eugene Volokh (UCLA) will lay it out at a special Berkman Klein Luncheon on Monday, April 9th. 

About Professor Volokh

Eugene Volokh teaches free speech law, tort law, religious freedom law, church-state relations law, and a First Amendment amicus brief clinic at UCLA School of Law, where he has also often taught copyright law, criminal law, and a seminar on firearms regulation policy. Before coming to UCLA, he clerked for Justice Sandra Day O'Connor on the U.S. Supreme Court and for Judge Alex Kozinski on the U.S. Court of Appeals for the Ninth Circuit.

Volokh is the author of the textbooks The First Amendment and Related Statutes (5th ed. 2013), The Religion Clauses and Related Statutes (2005), and Academic Legal Writing (4th ed. 2010), as well as over 75 law review articles and over 80 op-eds. He is a member of The American Law Institute, a member of the American Heritage Dictionary Usage Panel, and the founder and coauthor of The Volokh Conspiracy, a weblog that gets about 35,000-40,000 pageviews per weekday. He is among the five most cited then-under-45 faculty members listed in the Top 25 Law Faculties in Scholarly Impact, 2005-2009 study, and among the forty most cited faculty members on that list without regard to age. These citation counts refer to citations in law review articles, but his works have also been cited by courts. Six of his law review articles have been cited by opinions of the Supreme Court Justices; twenty-nine of his works (mostly articles but also a textbook, an op-ed, and a blog post) have been cited by federal circuit courts; and several others have been cited by district courts or state courts.

Volokh is also an Academic Affiliate for the Mayer Brown LLP law firm; he generally consults on other lawyers' cases, but he has argued before the Seventh Circuit, the Ninth Circuit, the Indiana Supreme Court, and the Nebraska Supreme Court, and has also filed briefs in the U.S. Supreme Court, in the Fifth, Sixth, Eighth, Eleventh, and D.C. Circuits, and state appellate courts in California, Michigan, New Mexico, and Texas.

Volokh worked for 12 years as a computer programmer. He graduated from UCLA with a B.S. in math-computer science at age 15, and has written many articles on computer software. Volokh was born in the USSR; his family emigrated to the U.S. when he was seven years old.

About Lumen

Lumen is an independent third-party research project studying cease and desist letters concerning online content. We collect and analyze requests to remove material from the web. Our goals are to educate the public, to facilitate research about the different kinds of complaints and requests for removal--both legitimate and questionable--that are being sent to Internet publishers and service providers, and to provide as much transparency as possible about the “ecology” of such notices, in terms of who is sending them and why, and to what effect.

Our database contains millions of notices, some of them with valid legal basis, some of them without, and some on the murky border. Our posting of a notice does not indicate a judgment among these possibilities, nor are we authenticating the provenance of notices or making any judgment on the validity of the claims they raise.

Lumen is a unique collaboration among law school clinics and the Electronic Frontier Foundation. Conceived and developed at the Berkman Center for Internet & Society (now the Berkman Klein Center) by then-Berkman Fellow Wendy Seltzer, Lumen was nurtured with help from law clinics at Harvard, Berkeley, Stanford, University of San Francisco, University of Maine, George Washington School of Law, and Santa Clara University School of Law.

Lumen is supported by gifts from Google. All individual and corporate donors to the Berkman Klein Center agree to contribute their funds as gifts rather than grants, for which there are no promised products, results, or deliverables.


Categories: Tech-n-law-ogy

Big Data, Health Law, and Bioethics

Teaser

This timely, groundbreaking volume explores key questions from a variety of perspectives, examining how law promotes or discourages the use of big data in the health care sphere, and also what we can learn from other sectors.

Publication Date 1 Apr 2018 Thumbnail Image: External Links: Download the Introduction from SSRN | Order the book

 

Edited by I. Glenn Cohen, Holly Fernandez Lynch, Effy Vayena, and Urs Gasser
Cambridge University Press, March 2018

About the Book:

When data from all aspects of our lives can be relevant to our health - from our habits at the grocery store and our Google searches to our FitBit data and our medical records - can we really differentiate between big data and health big data? Will health big data be used for good, such as to improve drug safety, or ill, as in insurance discrimination? Will it disrupt health care (and the health care system) as we know it? Will it be possible to protect our health privacy? What barriers will there be to collecting and utilizing health big data? What role should law play, and what ethical concerns may arise? This timely, groundbreaking volume explores these questions and more from a variety of perspectives, examining how law promotes or discourages the use of big data in the health care sphere, and also what we can learn from other sectors.

This edited volume stems from the Petrie-Flom Center’s 2016 annual conference, organized in collaboration with the Berkman Klein Center and the Health Ethics and Policy Lab, University of Zurich, which brought together leading experts to identify the various ways in which law and ethics intersect with the use of big data in health care and health research, particularly in the United States; understand the way U.S. law (and potentially other legal systems) currently promotes or stands as an obstacle to these potential uses; determine what might be learned from the legal and ethical treatment of uses of big data in other sectors and countries; and examine potential solutions (industry best practices, common law, legislative, executive, domestic and international) for better use of big data in health care and health research in the U.S.

 

Categories: Tech-n-law-ogy

Practical Approaches to Big Data Privacy Over Time

Teaser

This article analyzes how privacy risks multiply as large quantities of personal data are collected over longer periods of time, draws attention to the relative weakness of data protections in the corporate and public sectors, and provides practical recommendations for protecting privacy when collecting and managing commercial and government data over extended periods of time.

Publication Date 12 Mar 2018 Thumbnail Image: External Links: Download article from Oxford University Press | Download from DASH

Authored by Micah Altman, Alexandra Wood, David O’Brien, and Urs Gasser

The Berkman Klein Center is pleased to announce a new publication from the Privacy Tools project, authored by a multidisciplinary group of project collaborators from the Berkman Klein Center and the Program on Information Science at MIT Libraries. This article, titled "Practical approaches to big data privacy over time," analyzes how privacy risks multiply as large quantities of personal data are collected over longer periods of time, draws attention to the relative weakness of data protections in the corporate and public sectors, and provides practical recommendations for protecting privacy when collecting and managing commercial and government data over extended periods of time.

Increasingly, corporations and governments are collecting, analyzing, and sharing detailed information about individuals over long periods of time. Vast quantities of data from new sources and novel methods for large-scale data analysis are yielding deeper understandings of individuals’ characteristics, behavior, and relationships. It is now possible to measure human activity at more frequent intervals, collect and store data relating to longer periods of activity, and analyze data long after they were collected. These developments promise to advance the state of science, public policy, and innovation. At the same time, they are creating heightened privacy risks, by increasing the potential to link data to individuals and apply data to new uses that were unanticipated at the time of collection. Moreover, these risks multiply rapidly, through the combination of long-term data collection and accumulations of increasingly “broad” data measuring dozens or even thousands of attributes relating to an individual.

Existing regulatory requirements and privacy practices in common use are not sufficient to address the risks associated with long-term, large-scale data activities. In practice, organizations often rely on a limited subset of controls, such as notice and consent or de-identification, rather than drawing from the wide range of privacy interventions available. There is a growing recognition that privacy policies often do not adequately inform individuals about how their data will be used, especially over the long term. The expanding scale of personal data collection and storage is eroding the feasibility and effectiveness of techniques that aim to protect privacy simply by removing identifiable information.

Recent concerns about commercial and government big data programs parallel earlier conversations regarding the risks associated with long-term human subjects research studies. For decades, researchers and institutional review boards have intensively studied long-term data privacy risks and developed practices that address many of the challenges associated with assessing risk, obtaining informed consent, and handling data responsibly. Longitudinal research data carry risks similar to those associated with personal data held by corporations and governments. However, in general, personal information is protected more strongly when used in research than when it is used in commercial and public sectors—even in cases where the risks and uses are nearly identical.

Combining traditional privacy approaches with additional safeguards identified from exemplar practices in long-term longitudinal research and new methods emerging from the privacy literature can offer more robust privacy protection. Corporations and governments may consider adopting review processes like those implemented by research ethics boards to systematically analyze the risks and benefits associated with data collection, retention, use, and disclosure over time. Rather than relying on a single intervention such as de-identification or consent, corporate and government actors may explore new procedural, legal, and technical tools for evaluating and mitigating risk, balancing privacy and utility, and providing enhanced transparency, review, and accountability as potential components of data management programs. Adopting new technological solutions to privacy can help ensure stronger privacy protection for individuals and adaptability to respond to emerging sophisticated attacks on data privacy. Risks associated with long-term big data management can be mitigated by combining sets of privacy and security controls, such as notice and consent, de-identification, ethical review processes, differential privacy, and secure data enclaves, when tailored to the risk factors present in a specific case and informed by the state of the art and practice.

This article was published by Oxford University Press in International Data Privacy Law, available at https://doi.org/10.1093/idpl/ipx027. The research underlying this article was presented at the 2016 Brussels Privacy Symposium on Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization, hosted by the Brussels Privacy Hub of the Vrije Universiteit Brussel and the Future of Privacy Forum, on November 8, 2016. This material is based upon work supported by the National Science Foundation under Grant No. CNS-1237235, the Alfred P. Sloan Foundation, and the John D. and Catherine T. MacArthur Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the Alfred P. Sloan Foundation, or the John D. and Catherine T. MacArthur Foundation.

About the Privacy Tools for Sharing Research Data Project
Funded by the National Science Foundation and the Alfred P. Sloan Foundation, the Privacy Tools for Sharing Research Data project is a collaboration between the Berkman Klein Center for Internet & Society, the Center for Research on Computation and Society (CRCS), the Institute for Quantitative Social Science, and the Data Privacy Lab at Harvard University, as well as the Program on Information Science at MIT Libraries, that seeks to develop methods, tools, and policies to facilitate the sharing of data while preserving individual privacy and data utility.

Executive Director and Harvard Law School Professor of Practice Urs Gasser leads the Berkman Klein Center's role in this exciting initiative, which brings the Center's institutional knowledge and practical experience to help tackle the legal and policy-based issues in the larger project.

More information about the project is available on the official project website.

Categories: Tech-n-law-ogy

A Conversation on Data and Privacy with former Facebook GC Chris Kelly

Teaser

Chris Kelly worked extensively in developing Facebook’s early approaches to public policy challenges including privacy. This event will provide a free-form discussion about Kelly’s career path, the goals of Facebook’s privacy policies, their interplay with Facebook’s business model, and strategies for implementation.

Event Date Apr 4 2018 12:00pm to Apr 4 2018 12:00pm Thumbnail Image: 

 

 

Tuesday, April 4, 2018 at 12:00 pm
Harvard Law School campus
Pound Hall, Rm 201

This event is co-sponsored by Harvard Law School's Center on the Legal Profession.

Chris Kelly worked extensively in developing Facebook’s early approaches to public policy challenges including privacy.  This event will provide a free-form discussion about Kelly’s career path, the goals of Facebook’s privacy policies, their interplay with Facebook’s business model, and strategies for implementation. We will also discuss more generally the current political environment in which user-data-driven technology companies find themselves, potential re-implementation, and the possible role of domestic and international privacy regulation. Finally, we’ll find out what Kelly has been involved with professionally, politically, and personally since leaving Facebook.  Kelly will be in discussion with Prof. Ron Dolin, who is currently teaching “Law 2.0: Technology’s Impact on the Practice of Law” at HLS.

About Chris Kelly:
Chris Kelly, HLS ’97, is an entrepreneur, attorney, and activist. From September 2005 to August 2009, he served as the first General Counsel, Chief Privacy Officer and Head of Global Public Policy at Facebook. As an early leader at Facebook, he helped it grow from its college roots to the ubiquitous communications medium it is today. In 2010, Kelly was a candidate for the Democratic nomination for California Attorney General. Since his departure from Facebook and campaign for Attorney General, he has become a prominent investor in award-winning independent films, restaurants, and technology start-ups including MoviePass, Fandor, Organizer, and rentLEVER. Kelly became a co-owner of the NBA’s Sacramento Kings in May 2013.

Categories: Tech-n-law-ogy

Scheduling Jekyll Posts with Netlify and AWS

Not too long ago I moved this site from a custom setup on Amazon Web Services (AWS) to Netlify[1]. My AWS setup was a bit cumbersome, consisting of a Jenkins machine that pulled from a private GitHub repository, built the site using Jekyll[2], and published the result to S3. The benefit of this setup over using GitHub pages was that I could schedule posts to be published later. Jenkins was run every morning and new posts were automatically published without manual intervention. (Jenkins was also triggered whenever I pushed to the GitHub repository for instant builds.)

My custom AWS setup worked well, but it cost around $14 every month and I wasn't happy about that, especially given how infrequently I've been writing new posts in the past couple of years. I decided in the short-term to just move this site to Netlify and not worry about scheduling posts because I didn't think I would be writing that much for the foreseeable future. If I ever wanted to post something, I could do so manually, and in the meantime I'd be saving $14 a month. As it turned out, scheduling posts on Netlify was a lot simpler than I thought it would be. All I needed was an AWS Lambda function and an AWS Cloudwatch event.

Note: This post assumes you already have a site set up on Netlify using a GitHub repository. While I assume the solution works the same for other source code repository types, like BitBucket, I'm not entirely sure. This post also assumes that you have an AWS account.

Configuring Jekyll

By default, Jekyll generates all blog posts in the _posts directory regardless of the publish date associated with each. That obviously doesn't work well when you want to schedule posts to be published in the future, so the first step is to configure Jekyll to ignore future posts. To do so, add this key to Jekyll's _config.yml:

future: false

Setting future to false tells Jekyll to skip any posts with a publish date in the future. You can then set the date field in the front matter of a post to a future date and know that the post will not be generated until then, like this:

---
layout: post
title: "My future post"
date: 2075-01-01 00:00:00
---

This post will be published on January 1, 2075, so it will not be built by Jekyll until then. I find it easiest to schedule all posts for midnight, so that any build running on or after the publish date will generate the post.

Generating a Netlify build hook

One of the things I like about Netlify is that you can trigger a new site build whenever you want, either manually or programmatically. Netlify has a useful feature called a build hook[3], which is a URL that triggers a new build. To generate a new build hook, go to the Netlify dashboard for your domain, open Site Settings, and then go to the Build & Deploy page. When you scroll down, you'll see a section for Build Hooks. Click "Add build hook", give your new hook a name (something like "Daily Cron Job" would be appropriate here), and choose the branch to build from.

You'll be presented with a new URL that looks something like this:

https://api.netlify.com/build_hooks/{some long unique identifier}

Whenever you send a POST request to the build hook, Netlify will pull the latest files from the GitHub repository, build the site, and deploy it. This is quite useful because you don't need to worry about authenticating against the Netlify API; you can use this URL without credentials. Just make sure to keep this URL a secret. You can see the URL in your list of build hooks on the same page.
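Since the hook is just a URL, you can also trigger it yourself from a script. Here's a minimal Node.js sketch using only built-in modules; the hook ID is a placeholder and triggerBuild is just an illustrative name:

```javascript
// Minimal sketch: fire a Netlify build hook with a POST request.
// The hook ID below is a placeholder -- substitute your own secret URL.
const hookUrl = "https://api.netlify.com/build_hooks/your-hook-id";

function triggerBuild(url, callback) {
    // Choose http or https based on the URL's protocol
    const client = url.startsWith("https:") ? require("https") : require("http");
    const req = client.request(url, { method: "POST" }, (res) => {
        // Netlify responds as soon as the build is queued
        callback(null, res.statusCode);
    });
    req.on("error", callback);
    req.end();
}
```

Calling triggerBuild(hookUrl, console.log) kicks off the same deploy you'd get by triggering the hook from the dashboard.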


Creating the AWS Lambda function

AWS Lambda functions are standalone functions that don't require you to set up and manage a server. As such, they are especially useful when you have very simple processes to run infrequently. All you need to do is create a Lambda function that sends a POST request to the build URL.

The first step is to create a local Node.js application that will become the executable code for the Lambda function. Create a new directory (build-netlify-lambda, for example) and install the request module, which will make it easy to send an HTTP request:

$ cd build-netlify-lambda
$ npm i request

You can create a package.json file if you want, but it's not necessary.

Next, create a file called index.js inside of build-netlify-lambda and paste the following code into it:

"use strict"; const request = require("request"); exports.handler = (event, context, callback) => { request.post(process.env.URL, callback); };

All Lambda functions export a handler function that receives three parameters: an event object with information about the event that triggered the function call, a context object with information about the runtime environment, and a callback function to call when the function is finished. In this case, you only need the callback function. The Netlify build hook will be stored in an environment variable called URL in the Lambda function, which you access using process.env.URL. That value is passed directly to request.post() along with the callback, making this Lambda function as small as possible.

Now, you just need to zip up the entire build-netlify-lambda directory so it can be deployed to AWS Lambda:

$ zip -r build-netlify-lambda.zip index.js node_modules/

Make sure the top level of the zip file has both index.js and node_modules/. If you mistakenly zip up the entire directory so that build-netlify-lambda is at the top level, AWS will not find the executable files.

The last step is to upload this zip file to AWS. To do so, go to the AWS Console[4] and click "Create Function".

You'll be presented with a form to fill out. Enter a name for the function, such as "publishNetlifySiteExample" and select one of the Node.js options as your runtime. The last field is for the Lambda role. If you already have other roles defined, you can use one that already exists; otherwise, select "Create role from template(s)". This Lambda function doesn't need a lot of permissions, so you can just add "Basic Edge Lambda Permissions" to allow access to logs. Click "Create Function".

When the Lambda function has been created, a new screen will load. This screen is a bit difficult to parse due to the amount of information on it. If this is your first Lambda function, don't worry, you'll get used to it quickly. Scroll down to the section called "Function Code" and select "Upload a .ZIP file" from the "Code entry type" dropdown. You can then select your zip file to upload to the Lambda function.

Beneath the "Function Code" section is the "Environment Variables" section. Create a new environment variable named URL with its value set to your Netlify build hook. Once that's complete, click "Save" at the top of the screen to upload the zip file and save your environment variables.

You can test the Lambda function by creating a new test event. At the top of the screen, open the "Select a test event" dropdown and select "Configure Test Events".

A new dialog will open to create a test event. Since this Lambda function doesn't use any incoming data, you can keep the default settings and give the event a meaningful name like "TestNetlifyBuild". Click the "Create" button to save the test event.

In order to run the test, make sure "TestNetlifyBuild" is selected in the dropdown at the top of the screen and click the "Test" button. This will execute the function. If you look at your Netlify Deploys dashboard, you should see a new build begin.

Setting up the Cloudwatch event

At this point, the Lambda function is operational and will trigger a new Netlify deploy when executed. That's somewhat useful but isn't much more powerful than logging into the Netlify dashboard and manually triggering a build. The goal is to have Netlify build automatically on a certain schedule and Cloudwatch is the perfect solution.

Cloudwatch is a service that generates events based on any number of criteria, letting you monitor your services and respond with specific actions. For the purposes of this post, Cloudwatch will be set to run periodically and trigger the Lambda function that builds the Netlify website.

On the Cloudwatch console[5], click "Events" on the left menu and then the "Create Rule" button.

Under "Event Source" select "Schedule". You're now able to select the frequency with which you want the event to be triggered. You can select an interval of minutes, hours, or days, or you can create a custom schedule using a Cron expression. (If you want to control the exact time that an event is triggered, it's best to use a Cron expression.) Under "Targets", select "Lambda function" and your function name. There's no need to configure the version/alias or input because the Lambda function isn't using any of those. Click the "Configure Details" button. You'll be brought to a second dialog.

In this dialog, fill in a meaningful name for your event (and optional description) and then click "Create Rule". Rules are on by default so your new event should be triggered at the next interval. The Lambda function will then be called and regenerate the website.

Conclusion

This website has been running on the setup described in this post for over a month. In fact, this post was written ahead of time and published using my AWS Cloudwatch event and Lambda function. The functionality is the same as my previous setup with Jenkins and S3; however, this setup costs $0 instead of $14 per month. I only run my Cloudwatch event twice a week (I'm not posting much these days) and each run of the Lambda function takes under two seconds to complete, which means I fall into the free tier and I'm not charged anything.

The Lambda free tier is one million requests and 400,000 GB-seconds per month. A GB-second is one second of execution time with 1 GB of allocated memory. The Lambda function created in this post uses the default memory allocation of 128 MB. If you do the math, you'll see you'd still be in the free tier even if you ran your Lambda function every hour of every day of the month. Since the Lambda function only sends off an HTTPS request and Netlify does the build, the real work isn't done inside of Lambda.
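To make that back-of-the-envelope math concrete, here's a quick sketch (the two-second duration and hourly schedule are deliberately pessimistic assumptions):

```javascript
// Rough Lambda free-tier math for this setup (figures are illustrative).
const memoryGB = 128 / 1024;     // default allocation: 128 MB = 0.125 GB
const secondsPerRun = 2;         // generous -- real runs finish faster
const runsPerMonth = 24 * 31;    // even building every hour of a 31-day month

const gbSecondsUsed = memoryGB * secondsPerRun * runsPerMonth;
const freeTierGBSeconds = 400000;

console.log(gbSecondsUsed);                     // 186
console.log(gbSecondsUsed < freeTierGBSeconds); // true -- well inside the free tier
```

Even this worst case uses well under one percent of the free tier, which is why the cost of this setup rounds to zero.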

I've found this setup to be very simple and cost-efficient, not to mention a lot less complicated. I no longer have to log into a Jenkins server to figure out why a build of the website failed. There's just one small function to manage and all of the important information is displayed in the Netlify dashboard.

The most important thing to remember when using this setup is to set the date field of each post to some time in the future. When the Cloudwatch event triggers the Lambda function to execute, only those posts with a date in the past will be generated. You can play around with the timing of the Cloudwatch event to best suit your frequency of posts, and keep in mind that Netlify automatically builds the site whenever a change is pushed, so you still have just-in-time updates as needed.

References
  1. Netlify (netlify.com)
  2. Jekyll (jekyllrb.com)
  3. Netlify Webhooks - Incoming Hooks (netlify.com)
  4. AWS Console - Lambda (console.aws.amazon.com)
  5. AWS Console - Cloudwatch (console.aws.amazon.com)
Categories: Tech-n-law-ogy

Dividing Lines: Why Is Internet Access Still Considered a Luxury in America?

Subtitle featuring Maria Smith of the Berkman Klein Center Teaser

Internet access is a major social and economic justice issue of our time. Dividing Lines, a four-part documentary video series, sheds a light on who is being left behind as big telecom flourishes.

Parent Event Berkman Klein Luncheon Series Event Date Mar 27 2018 12:00pm to Mar 27 2018 12:00pm Thumbnail Image: 

Tuesday, March 27, 2018 at 12:00 pm
Harvard Law School campus
Pound Hall Room 101
Ballantine Classroom
RSVP required to attend in person
Event will be live webcast at 12:00 pm

The online world is no longer a distinct world. It is an extension of our social, economic, and political lives. Internet access, however, is still often considered a luxury good in the United States. Millions of Americans have been priced out of, or entirely excluded from, the reach of modern internet networks. Maria Smith, an affiliate of Berkman Klein and the Cyberlaw Clinic, created a four-part documentary series to highlight these stark divides in connectivity, from Appalachia to San Francisco, and to uncover the complex web of political and economic forces behind them.   

About Maria

Maria Smith is a Project Coordinator working with Professor Susan Crawford in Harvard Law School's Cyberlaw Clinic and leading the efforts of the Responsive Communities project within Berkman Klein. She is focused on the intersection of technology deployment and social and economic justice. Maria is also a documentary filmmaker whose productions expose the impacts of and forces behind America's stark digital divides. She made her directorial debut in college with the film One Nation, Disconnected, produced in cooperation with the Harvard Law Documentary Studio, which details the hardship of a teenager growing up in New York City without internet access at home. Dividing Lines, a four-part series, is in production this year.

Maria first joined the Berkman Klein and Harvard Law communities as an undergraduate conducting teaching, research, and project support for Professor Susan Crawford. Maria graduated from Harvard College with a B.A. in Economics. In college she was invested in work with the Global Health and AIDS Coalition and co-chaired the annual Women’s Leadership Conference. She worked as an intern for the Public Defender Service for the District of Columbia, Connecting for Good, and Morgan Stanley.

   

 


Categories: Tech-n-law-ogy

A talk with Marilù Capparelli, PhD

Subtitle Legal Director at Google Teaser

Please join the Harvard Italian Law Association and the Berkman Klein Center for Internet & Society for a discussion on several legal and regulatory issues concerning digital platforms: controversial content, brand safety, privacy and GDPR compliance, scope of removal and CJEU pending cases, tax, copyright, and antitrust enforcement.

Event Date Apr 5 2018 12:00pm to Apr 5 2018 12:00pm Thumbnail Image: 

Thursday, April 5, 2018 at 12:00 pm
Harvard Law School campus
[NEW LOCATION] Hauser Hall 104
Complimentary lunch provided

Please join the Harvard Italian Law Association and the Berkman Klein Center for Internet & Society for a discussion on several legal and regulatory issues concerning digital platforms: controversial content, brand safety, privacy and GDPR compliance, scope of removal and CJEU pending cases, tax, copyright, and antitrust enforcement.

Ms. Marilù Capparelli is Managing Director of Google's Legal Department in the EMEA region. Before joining Google, she was Head of Legal and Government Affairs at eBay Inc. She is the author of several legal articles and regularly lectures in master's degree programs on law and technology. She has recently been listed among the most influential Italian women lawyers.

This event is being co-sponsored by the Harvard Italian Law Association at Harvard Law School and the Berkman Klein Center for Internet & Society at Harvard University.

Categories: Tech-n-law-ogy
