Tech-n-law-ogy

Community-Owned Fiber Networks: Value Leaders in America

Pricing Review Shows They Provide Least-Expensive Local "Broadband"

Our examination of advertised prices shows that community-owned fiber-to-the-home (FTTH) networks in the United States generally charge less for entry-level broadband service than do competing private providers, and don’t use initial low “teaser” rates that sharply rise months later. 

Publication Date: 10 Jan 2018 | External Links: Download from DASH

by David Talbot, Kira Hessekiel, and Danielle Kehl

By one recent estimate, about 8.9 percent of Americans, or about 29 million people, lack access to wired home “broadband” service, which the U.S. Federal Communications Commission defines as an internet access connection providing speeds of at least 25 Mbps download and 3 Mbps upload. Even where home broadband is available, high prices inhibit adoption; in one national survey, 33 percent of non-subscribers cited the cost of service as the primary barrier. Municipally owned and other community-owned networks have been proposed as a driver of competition that could yield better service and prices.

We examined prices advertised by a subset of community-owned networks that use fiber-to-the-home (FTTH) technology. In late 2015 and 2016 we collected advertised prices for residential data plans offered by 40 community-owned (typically municipally owned) FTTH networks. We then identified the least-expensive service that met the federal definition of broadband (regardless of the exact speeds provided) and compared advertised prices to those of private competitors in the same markets. We were able to make comparisons in 27 communities and found that in 23 of them, the community-owned FTTH providers’ pricing was lower when service costs and fees were averaged over four years. (Using a three-year average changed this fraction to 22 out of 27.) In the other 13 communities, comparisons were not possible, either because the private providers’ website terms of service deterred or prohibited data collection or because no competitor offered service that qualified as broadband. We also found that almost all community-owned FTTH networks offered prices that were clear and unchanging, whereas private ISPs typically charged initial low promotional or “teaser” rates that later rose sharply, usually after 12 months.
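To make the multi-year averaging concrete, the sketch below shows one way to compute an average monthly cost when an advertised price changes after a promotional period. The code and dollar figures are illustrative assumptions of ours, not the study's actual methodology or data.

def average_monthly_cost(months, schedule):
    # `schedule` is a list of (duration_in_months, monthly_price) segments;
    # the final segment's price is assumed to persist to the horizon.
    total, elapsed = 0.0, 0
    for duration, price in schedule:
        span = min(duration, months - elapsed)
        total += span * price
        elapsed += span
    if elapsed < months:
        total += (months - elapsed) * schedule[-1][1]
    return total / months

# Hypothetical plans: a $45 teaser rate that rises to $70 after 12 months,
# versus a flat $60 community-network rate.
private = average_monthly_cost(48, [(12, 45.0), (36, 70.0)])  # 63.75
community = average_monthly_cost(48, [(48, 60.0)])            # 60.00
print(private, community)

On these made-up numbers, the teaser plan averages $63.75 per month over four years despite its lower initial price, which is why averaging costs over a multi-year horizon gives a fairer comparison than looking at advertised entry rates alone.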

We made the incidental finding that Comcast advertised different prices and terms for the same service in different regions. We do not have enough information to draw conclusions about the impacts of these practices. In general, our ability to study broadband pricing was constrained by the lack of standardization in internet service offerings and a shortage of available data. The FCC doesn't collect data from ISPs on advertised prices, prices actually charged, service availability by address, consumer adoption by address, or the length of time consumers retain service.


MACHINE EXPERIENCE II: Art Perspectives on Artificial Intelligence


A showcase of works by metaLAB artists exploring the emotional effects of algorithms, the uncanny experiences of sensor-enabled computers, and what intelligent machines might reveal about understandings of the nature of intelligence itself.


January 19 - February 4
Rainbow Unicorn
Anklamer Str. 50, 10115 Berlin

The possibilities of artificial intelligence have long seemed futuristic and far-fetched. Today, however, AI technology is making its impact felt in such real-world realms as autonomous vehicles, online searches and feeds, and the criminal justice system.

metaLAB at Harvard presents MACHINE EXPERIENCE II, a showcase of works by metaLAB artists exploring the emotional effects of algorithms, the uncanny experiences of sensor-enabled computers, and what intelligent machines might reveal about understandings of the nature of intelligence itself. This work is presented in conjunction with the Ethics and Governance of AI Initiative at Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab.

The exhibition includes works by: Kim Albrecht, Matthew Battles, Joanne K. Cheung, Hannah Davis, Sands Fish, Adam Horowitz & Oscar Rosello, Maia Leandra, Sarah Newman, Rachel Kalmar & Jessica Yurkofsky, Mindy Seu, Jie Qi & Artem Dementyev.

Rainbow Unicorn is a Berlin-based design agency working in the fields of art direction and coding, founded by Anna Niedhart, Christian Reich, and Alex Tolar.

In 2016, Rainbow Unicorn expanded to include a gallery dedicated to contemporary art.

Contact: snewman@metalab.harvard.edu
Related Content: Ethics and Governance of Artificial Intelligence

Who Owns Your Ideas and How Does Creativity Happen?

A Conversation with Professor Orly Lobel on her new book You Don’t Own Me: How Mattel v. MGA Entertainment Exposed Barbie’s Dark Side (Norton)

Who owns your ideas? How are cultural icons created and who gets to control their image and message? Orly Lobel’s new book You Don’t Own Me is about how intellectual property both fuels and impedes entrepreneurship, innovation, ideas, and talent. The story is also about how the courtroom interacts with consumer psychology, corporate ethics, brand control, feminism, ethnicity and our values about parenting and womanhood. "Colorful and dramatic. ...Orly Lobel masterfully draws us in with rich details, urging us to consider the future of innovation and the many ways in which companies employ litigation to achieve market domination." -- Jonathan Zittrain, Professor of Law at Harvard Law School and author of The Future of the Internet

Parent Event: Berkman Klein Luncheon Series | Event Date: Jan 16, 2018, 12:00 pm

Tuesday, January 16, 2018 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Milstein East A (Room 2036, second floor)

RSVP required to attend in person
Event will be live webcast at 12:00 pm

Orly Lobel, award-winning author of Talent Wants to Be Free and the Don Weckstein Professor of Law at the University of San Diego, delves into the legal disputes between toy powerhouses to expose the ways IP is used as a sledgehammer in today’s innovation battles. YOU DON’T OWN ME is not just a thrilling story of business battles and courtroom drama; the book also brings a critical eye to our ideas about the American Dream, the rise of feminism, consumer psychology, and the making of icons, alongside betrayal, spying, and racism in the courtroom. The book is deeply researched: Lobel interviewed the major players, including the executives behind questionable corporate and legal strategies and the controversial appellate court judge Alex Kozinski. With compelling, Michael Lewis-style storytelling, Lobel shows that our current markets too often allow anticompetitive practices through the enforcement of draconian assignment contracts, NDAs, and covenants not to compete against employees, and through overly expansive definitions of copyright, trademark, and trade secrecy.

About Orly

Orly Lobel is the award-winning author of several books and numerous articles. She is a prolific speaker, commentator, and scholar who travels the world, with an impact on policy and industry. Her book Talent Wants to Be Free: Why We Should Learn to Love Leaks, Raids and Free Riding (Yale University Press, 2013) is the winner of several prestigious awards, including the Gold Medal, Axiom Best Business Books 2014; the Gold Medal, Independent Publisher’s Award 2014; the 2015 Gold Medal of Next Generation Indie Books; and the International Book Awards for Best Business Book. In 2016 Lobel was invited to Washington, DC to present Talent Wants to Be Free at the White House, a meeting which resulted in a presidential call for action.

Lobel is also the author of two earlier books about employment and labor law and economics, and of numerous articles on behavioral law and economics, innovation policy, intellectual property, human capital, the sharing economy and the rise of the digital platform, and regulation and governance. Lobel is the Don Weckstein Professor of Law and a founding member of the Center for Intellectual Property Law and Markets at the University of San Diego. A graduate of Harvard Law School, Lobel publishes her interdisciplinary research widely in the top journals in law, economics, and psychology. Lobel is currently writing a book, forthcoming in 2017, about innovation battles and how policy has shaped the dynamics of competition and play in the toy industry.

Lobel’s work has been featured in The New York Times, The Economist, BusinessWeek, Wall Street Journal, Forbes, Fortune, The Sunday Times, Globe and Mail, Marketplace, Huffington Post, CNBC, and CNN Money. Her scholarship and research have received significant grants and awards, including from the ABA, the Robert Wood Johnson Foundation, Fulbright, and the Searle-Kauffman Foundation.

She is a member of the American Law Institute and served as a fellow at the Harvard University Center for Ethics and the Professions, the Kennedy School of Government, and the Weatherhead Center for International Affairs. She serves on the advisory boards of the San Diego Lawyer Chapter of the American Constitution Society, the Employee Rights Center, and the Oxford Handbook on Governance.

A world traveler, Lobel has lectured at Yale, Harvard, University of California San Diego, University of San Diego and Tel Aviv University and is a frequent speaker at top research institutions, industry, and government forums throughout Europe, Asia, Australia and North America. A celebrated author and scholar, Lobel’s writing has won several awards including the Thorsnes Prize for Outstanding Legal Scholarship and the Irving Oberman Memorial Award. In 2013, Lobel was named one of the 50 Sharpest Minds in Research by The Marker Magazine. Lobel lives in La Jolla, California, with her husband and three daughters.

Lobel is regularly interviewed and featured in the nation’s leading media outlets, journals, and radio programs, such as the New York Times, BusinessWeek, and NPR’s Marketplace. She is a sought-after public speaker and a regular contributor to the Harvard Business Review. Recently, she was invited to speak at leading associations and companies, such as Intel, Samsung, AlphaSights, and ERE. Lobel is also active on Twitter and is a regular blogger. In May 2015, Lobel gave a TEDx talk entitled Secrets & Sparks about the expansion of secrecy and intellectual property in contemporary markets.


Berkman Klein at IGF 2017


This week marks the 12th annual meeting of the Internet Governance Forum (IGF), a multistakeholder forum for policy dialogue on issues of Internet governance. The Berkman Klein Center is pleased to be an active participant in key discussions about some of the most pressing issues of our increasingly networked world, including the ethics and governance of artificial intelligence and harmful speech online.


This week (December 17-21) marks the 12th annual meeting of the Internet Governance Forum (IGF), a multistakeholder forum for policy dialogue on issues of Internet governance, held this year in Geneva, Switzerland.  As in years past, the Berkman Klein Center is pleased to be an active participant in key discussions about some of the most pressing issues of our increasingly networked world, including the ethics and governance of artificial intelligence, harmful speech online, and youth in the digital economy.

A few sessions we are particularly excited about are highlighted below:

Social Responsibility and Ethics in Artificial Intelligence

Moderated by Urs Gasser and featuring Danit Gal and others

Breakthroughs in AI will rapidly transform digital society and greatly improve labor productivity, but they will also raise a host of new and difficult issues concerning, for example, employment, ethics, the digital divide, privacy, law, and regulation. In consequence, there is a growing recognition that all stakeholders will need to engage in a new and difficult dialogue to ensure that AI is implemented in a manner that balances legitimate competing objectives and leaves society better off.

While engineers may share technical ideas within transnational expert networks, broader public discussions about the social consequences and potential governance of artificial intelligence have tended to be concentrated within linguistic communities and civilizations. However, many of the issues that AI raises are truly global in character, and this will become increasingly evident as AI is incorporated into the functioning of the global Internet. There is therefore a pressing need to establish a distinctively global discourse that is duly informed by the differences between Eastern and Western cultural values, business environments, economic development levels, and political, legal and regulatory systems.

Read the full session description
(Related sessions on this topic took place earlier this month at the Global Festival for AI Ideas in Beijing, China.)

--

Artificial Intelligence and Inclusion

Featuring Malavika Jayaram, Chinmayi Arun, Urs Gasser, and others

The policy debates about Artificial Intelligence (AI) have been dominated by organizations and actors in the Global North, and there is a growing need for more diverse perspectives on the policy issues and consequences of AI. The developing world will be directly affected by the deployment of AI technologies and services, yet informed perspectives from these regions are largely missing from the policy debates. This roundtable is a follow-up to the international event “Artificial Intelligence and Inclusion” held in Rio de Janeiro earlier this year. The discussion will focus on the development of Artificial Intelligence and its impact on inclusion in areas such as health and wellbeing, education, low-resource communities, public safety and security, employment and the workplace, and entertainment, media, and journalism, among others. The goal of this roundtable is to bring the debates of this international event to the IGF community, enlarging the conversation and deepening the understanding of AI and inclusion.

Read the full session description

--

Selective Persecution and the Mob: Hate and Religion Online

Featuring Chinmayi Arun, Susan Benesch, Wolfgang Schulz, and others

As hate speech online spreads at an alarming rate, states, companies, civil society and other stakeholders grapple with the question of how to mitigate the situation. States have relied on command-control regulation, including hate speech laws, as the primary solution. However, these laws are used to censor and punish political dissent and other expression protected under the ICCPR and most countries’ constitutions. These laws also seem to be able to do very little for the journalists being murdered, attacked and threatened for their online speech, or for people receiving onslaughts of threats, doxxing, abuse and other forms of aggression online.

Read the full session description

--

Artificial Intelligence in Asia: What’s Similar? What’s Different? Findings from our AI Workshops

Featuring Malavika Jayaram

Ideas about the future and about what progress means are heavily contested, and context-specific. Digital Asia Hub set out to investigate whether the future of artificial intelligence - heralded as a game changing technology - was constructed and implemented differently in Asia, and to explore whether the problems that AI was deployed in service of signalled different socioeconomic aspirations and fears.

Read the full event description

--

We were also pleased to share recent research about youth practices online in the lightning talk “Blurring the lines between work and play: Emerging Youth Practices and the Digital Economy,” given by Sandra Cortesi, and to participate in a global roundtable on AI and Governance hosted by the Digital Asia Hub, which featured evidence-based approaches to testing the social impact of AI-based governance and methods for holding AI governance accountable, and opened a conversation on the future of evidence-based policy and consumer protection online.

You can see the full IGF 2017 schedule here, watch live-streamed sessions here, and access archived videos of sessions here.

Learn more about some of the Berkman Klein Center’s related work on the Ethics and Governance of Artificial Intelligence Initiative page on our website. The Initiative, which is guided by the Berkman Klein Center and the MIT Media Lab, aims to foster global conversations among scholars, experts, advocates, and leaders from a range of industries. By developing a shared framework to address urgent questions surrounding AI, the Initiative aims to help public and private decision-makers understand and plan for the effective use of AI systems for the public good. 


Announcing the 2018 Assembly Cohort

At the Berkman Klein Center and MIT Media Lab

We are thrilled to announce the 2018 cohort for the Assembly program at the Berkman Klein Center and MIT Media Lab! Read more to learn about the twenty-one individuals who will be joining us in January 2018 to tackle challenges and opportunities in artificial intelligence and its governance.



We are thrilled to announce the 2018 cohort for the Assembly program at the Berkman Klein Center and MIT Media Lab. The program, which will start its second iteration on January 22, 2018, gathers developers, project managers, academics, and tech industry professionals for a rigorous spring term to tackle hard problems at the intersection of code and policy. The program will be split into three parts: a two-week design and team-building session; a course on the ethics of artificial intelligence co-taught by MIT Media Lab Director Joi Ito and BKC co-founder and HLS professor Jonathan Zittrain; and a twelve-week collaborative development period.

Our 2018 cohort is made up of twenty-one participants with diverse backgrounds and experiences representing the private sector, academia, and civil society organizations. Their task? To work on the emerging problems and opportunities within artificial intelligence and its governance.

Below you can see who makes up our 2018 cohort! For more information about the program, visit the Assembly website. To read the cohort's full profiles, go directly to the 2018 cohort page.

 

DHAVAL ADJODAH: Ph.D. student at the MIT Media Lab researching AI, computational social science, and finance

ANDRÉ BARRENCE: Director of Campus São Paulo, leading Google for Entrepreneurs in Brazil

HALLIE BENJAMIN: Experiment Designer at Google

KASIA CHMIELINSKI: Technologist at the White House U.S. Digital Service

JACK CLARK: Strategy and Communications Director of OpenAI

JENNIFER FERNICK: Ph.D. candidate in Mathematics (Computer Science – Quantum Information) at the University of Waterloo

GRETCHEN GREENE: Computer Vision Scientist and Machine Learning Engineer working with Cambridge startups

SARAH HOLLAND: Public Policy Manager at Google

AHMED HOSNY: Data Scientist, Web Developer, and Researcher at the Dana-Farber Cancer Institute and Harvard Medical School

JOSH JOSEPH: CSO of Alpha Features

THOM MIANO: Research Data Scientist in the Center for Data Science at RTI International

SARAH NEWMAN: Creative Researcher at metaLAB at Harvard and Fellow at the Berkman Klein Center for Internet & Society at Harvard

FRANCISCO DANIEL PEDRAZA: Data Strategist at UNICEF

JONNIE PENN: Google Technology Policy Fellow and Rausing, Williamson and Lipton Trust doctoral scholar at the University of Cambridge

KATHY PHAM: Fellow at the Berkman Klein Center for Internet & Society at Harvard

AARON PLASEK: Richard Hofstadter Fellow and History doctoral student at Columbia University

BOGDANA RAKOVA: Research Engineer at Samsung Research America and Connected Devices fellow at Amplified Partners

DAVID COLBY REED: Co-founder and CEO of Foossa and Lecturer in Design, Management, and Social Innovation at the Parsons School of Design at the New School

MATT TAYLOR: Software Engineer for the Scratch Team, part of the Lifelong Kindergarten group at the MIT Media Lab

AMY X. ZHANG: Ph.D. student in Computer Science at MIT CSAIL working on systems to improve discourse, collaboration, and understanding on the web

Open Call for Fellowship Applications, Academic Year 2018-2019

About the Fellowship Program • Qualifications • Commitment to Diversity • Logistics • Stipends and Benefits • About the Berkman Klein Center • FAQ • Required Application Materials • Apply!


The Berkman Klein Center for Internet & Society at Harvard University is now accepting fellowship applications for the 2018-2019 academic year through our annual open call. This opportunity is for those who wish to spend 2018-2019 in residence in Cambridge, MA as part of the Center's vibrant community of research and practice, and who seek to engage in collaborative, cross-disciplinary, and cross-sectoral exploration of some of the Internet's most important and compelling issues.
 

Applications will be accepted until Wednesday, January 31, 2018 at 11:59 p.m. Eastern Time.
 

We invite applications from people working on a broad range of opportunities and challenges related to Internet and society, which may overlap with ongoing work at the Berkman Klein Center and may expose our community to new opportunities and approaches. We encourage applications from scholars, practitioners, innovators, engineers, artists, and others committed to understanding and advancing the public interest who come from -- and have interest in -- industrialized or developing countries, with ideas, projects, or activities in all phases on a spectrum from incubation to reflection.


Through this annual open call, we seek to advance our collective work and give it new direction, and to deepen and broaden our networked community across backgrounds, disciplines, cultures, and home bases. We welcome you to read more about the program below, and to consider joining us as a fellow!

About the Berkman Klein Fellowship Program

“The Berkman Klein Center's mission is to explore and understand cyberspace; to study its development, dynamics, norms, and standards; and to assess the need or lack thereof for laws and sanctions.


We are a research center, premised on the observation that what we seek to learn is not already recorded. Our method is to build out into cyberspace, record data as we go, self-study, and share. Our mode is entrepreneurial nonprofit.”


Inspired by our mission statement, the Berkman Klein Center’s fellowship program provides an opportunity for some of the world’s most innovative thinkers and changemakers to come together to hone and share ideas, find camaraderie, and spawn new initiatives. The program encourages and supports fellows in an inviting and playful intellectual environment, with community activities designed to foster inquiry and risk-taking, to identify and expose common threads across fellows’ individual activities, and to bring fellows into conversation with the faculty directors, employees, and broader community at the Berkman Klein Center.  From their diverse backgrounds and wide-ranging physical and virtual travels, Berkman Klein Center fellows bring fresh ideas, skills, passion, and connections to the Center and our community, and from their time spent in Cambridge help build and extend new perspectives and actions out into the world.


A non-traditional appointment that defies any one-size-fits-all description, each Berkman Klein fellowship carries a unique set of opportunities, responsibilities, and expectations based on each fellow’s goals. Fellows appointed through this open call come into their fellowship with a personal research agenda and a set of ambitions they wish to pursue while at the Center. These might include focused study or writing projects, action-oriented meetings, the development of a set of technical tools, capacity building efforts, testing different pedagogical approaches, or efforts to intervene in public discourse and trial new platforms for exchange. Over the course of the year fellows advance their research and contribute to the intellectual life of the Center and fellowship program activities; as they learn with and are influenced by their peers, fellows have the freedom to change and modify their plans.


Together fellows actively design and participate in weekly all-fellows sessions, working groups, skill shares, hacking and development sessions, and shared meals, as well as joining in a wide range of Berkman Klein Center events, classes, brainstorms, interactions, and projects. While engaging in both substance and process, much of what makes the fellowship program rewarding is created each year by the fellows themselves to address their own interests and priorities. These entrepreneurial, collaborative ventures – ranging at once from goal-oriented to experimental, from rigorous to humorous – ensure the dynamism of a fellowship experience, the fellowship program, and the Berkman Klein community. As well, the Center works to support our exemplary alumni network, and beyond a period of formal affiliation, community members maintain ongoing active communication and mutual support across cohorts.


Alongside and in conversation with the breadth and depth of topics explored through the Center’s research projects, fellows engage the fairly limitless expanse of Internet & society issues. Within each cohort of fellows we encourage and strive for wide-ranging inquiry and focused study, and these areas of speciality and exploration vary from fellow to fellow and year to year. Some broad issues of interest include (but are not limited to) fairness and justice; economic growth and opportunity; the ethics and governance of artificial intelligence; equity, agency, inclusion, and diversity; health; security; privacy; access to information; regulation; politics; and democracy. As fields of Internet and society studies continue to grow and evolve, and as the Internet reaches into new arenas, we expect that new areas of interest will emerge across the Center as well. We look forward to hearing from potential fellows in these nascent specialities and learning more about the impact of their work.

back to top
Qualifications

We welcome applications from people who feel that a year in our community as a fellow would accelerate their efforts and contribute to their ongoing personal and professional development.
 

Fellows come from across the disciplinary spectrum and different life paths. Some fellows are academics, whether students, post-docs, or professors. Others come from outside academia, and are technologists, entrepreneurs, lawyers, policymakers, activists, journalists, educators, or other types of practitioners from various sectors. Many fellows wear multiple hats, and straddle different pursuits at the intersections of their capacities. Fellows might be starting, rebooting, driving forward in, questioning, or pivoting from their established careers.  Fellows are committed to spending their fellowship in concert with others guided by a heap of kindness, a critical eye, and a generosity of spirit.


The fellowship selection process is a multi-dimensional mix of art and science, based on considerations that are specific to each applicant and that also consider the composition of the full fellowship class. Please visit our FAQ to learn more about our selection criteria and considerations.

To learn more about the backgrounds of our current community of fellows, check out our fall video series with new fellows and our 2017-2018 community announcement, read their bios, and find them on Twitter. Previous fellows announcements also give an overview of the people and topics in our community: 2016-2017, 2015-2016, 2014-2015, 2013-2014.

back to top
 

Commitment to Diversity

The work and well-being of the Berkman Klein Center for Internet & Society are profoundly strengthened by the diversity of our network and our differences in background, culture, experience, national origin, religion, sexual orientation, gender, gender identity, race, ethnicity, age, ability, and much more. We actively seek and welcome people of color, women, the LGBTQIA+ community, persons with disabilities, and people at intersections of these identities, from across the spectrum of disciplines and methods. In support of these efforts, we are offering a small number of stipends to select incoming fellows chosen through our open call for applications.  More information about the available stipends may be found here. More information about the Center’s approach to diversity and inclusion may be found here.

back to top
 

Logistical Considerations

While we embrace our many virtual connections, spending time together in person remains essential. In order to maximize engagement with the community, fellows are encouraged to spend as much time at the Center as they are able, and are expected to conduct much of their work from the Cambridge area, in most cases requiring residency. Tuesdays hold particular importance--it is the day the fellows community meets for a weekly fellows hour, as well as the day the Center hosts a public luncheon series; as a baseline we ask fellows to commit to spending as many Tuesdays at the Center as possible.


Fellowship terms run for one year, and we generally expect active participation from our fellows over the course of the academic year, roughly from the beginning of September through the end of May.
 

In some instances, fellows are re-appointed for consecutive fellowship terms or assume other ongoing affiliations at the Center after their fellowship.

back to top 

Stipends and Access to University Resources

Stipends

Berkman Klein fellowships awarded through the open call for applications are rarely stipended, and most fellows receive no direct funding through the Berkman Klein Center as part of their fellowship appointment.


To make Berkman Klein fellowships a possibility for as wide a range of applicants as possible, in the 2018-2019 academic year we will award a small number of stipends to select incoming fellows chosen through our open call for applications. This funding is intended to support people from communities who are underrepresented in fields related to Internet and society, who will contribute to the diversity of the Berkman Klein Center’s research and activities, and who have financial need. More information about this funding opportunity can be found here.


There are various ways fellows selected through the open call might be financially supported during their fellowship year. A non-exhaustive list: some fellows have received external grants or awards in support of their research; some fellows have received a scholarship or are on sabbatical from a home institution; some fellows do consulting work; some fellows maintain their primary employment alongside their fellowship. In each of these different scenarios, fellows and the people with whom they work have come to agreements that allow the fellow to spend time and mindshare with the Berkman Klein community, with the aim to have the fellow and the work they will carry out benefit from the affiliation with the Center and the energy spent in the community. Fellows are expected to independently set these arrangements with the relevant parties.
 

Office and Meeting Space

We endeavor to provide comfortable and productive spaces for coworking and flexible use by the community. Some Berkman Klein fellows spend every day in our office, and some come in and out throughout the week while otherwise working from other sites. Additionally, fellows are supported in their efforts to host small meetings and gatherings at the Center and in space on the Harvard campus.
 

Access to University Resources

  • Library Access: Fellows are able to acquire Special Borrower privileges with the Harvard College Libraries, and are granted physical access into Langdell Library (the Harvard Law School Library).  Access to the e-resources is available within the libraries.  

  • Courses: Berkman Klein fellows often audit classes across Harvard University; however, they must individually ask for permission directly from the professor of the desired class.

  • Benefits: Fellows appointed through the open call do not have the ability to purchase University health insurance or get Harvard housing.

back to top 

Additional Information about the Berkman Klein Center

The Berkman Klein Center for Internet & Society at Harvard University is dedicated to exploring, understanding, and shaping the development of the digitally-networked environment. A diverse, interdisciplinary community of scholars, practitioners, technologists, policy experts, and advocates, we seek to tackle the most important challenges of the digital age while keeping a focus on tangible real-world impact in the public interest. Our faculty, fellows, staff and affiliates conduct research, build tools and platforms, educate others, form bridges and facilitate dialogue across and among diverse communities. More information at https://cyber.harvard.edu.

To learn more about the Center’s current research, consider watching a video of the Berkman Klein Center’s Faculty Chair Jonathan Zittrain giving a lunch talk from Fall 2017, and check out the Center’s most recent annual reports.
back to top

Frequently Asked Questions

To hear more from former fellows, check out 15 Lessons from the Berkman Fellows Program, a report written by former fellow and current Fellows Advisory Board member David Weinberger. The report strives to "explore what makes the Berkman Fellows program successful...We approached writing this report as a journalistic task, interviewing a cross-section of fellows, faculty, and staff, including during a group session at a Berkman Fellows Hour. From these interviews a remarkably consistent set of themes emerged."

 

More information about fellows selection and the application process can be found on our Fellows Program FAQ.

If you have questions not addressed in the FAQ, please feel welcome to reach out to Rebecca Tabasky, the Berkman Klein Center's manager of community programs, at rtabasky@cyber.harvard.edu.
back to top
 

Required Application Materials

(1.) A current resume or C.V.

(2.) A personal statement that responds to the following two questions. Each response should be 250-500 words.

  • What is the research you propose to conduct during a fellowship year?  Please    

    • describe the problems you are trying to solve;

    • outline the methods which might inform your research; and

    • tell us about the public interest and/or the communities you aim to serve through your work.

       

  • Why is the Berkman Klein Center the right place for you to do this work?  Please share thoughts on:    

    • how the opportunity to engage colleagues from different backgrounds -- with a range of experiences and training in disciplines unfamiliar to you -- might stimulate your work;

    • which perspectives you might seek out to help you fill in underdeveloped areas of your research;

    • what kinds of topics and skills you seek to learn with the Center that are outside of your primary research focus and expertise; and

    • the skills, connections, and insights you are uniquely suited to contribute to the Center’s community and activities.

(3.) A copy of a recent publication or an example of relevant work.  For a written document, for instance, it should be on the order of a paper or chapter - not an entire book or dissertation - and should be in English.

(4.) Two letters of recommendation, sent directly from the reference.
back to top
 

Apply for a 2018-2019 Academic Year Fellowship Through Our Open Call

The application deadline is Wednesday, January 31, 2018 at 11:59 p.m. Eastern Time.


Applications will be submitted online through our Application Tracker tool at:

http://brk.mn/1819app
 

Applicants will submit their resume/C.V., their personal statement, and their work sample as uploads within the Berkman Klein Application Tracker.  Applicants should ensure that their names are included on each page of their application materials.
 

Recommendation letters will be captured through the Application Tracker, and the Application Tracker requires applicants to submit the names and contact information for references in advance of the application deadline. References will receive a link at which they can upload their letters. We recommend that applicants create their profiles and submit reference information in the Application Tracker as soon as they know they are going to apply and have identified their references; this step does not require other fellowship application materials to be submitted at that time. We do ask that letters be received from the references by the application deadline.

Instructions for creating an account and submitting an application through the Application Tracker may be found here.
back to top


When a Bot is the Judge


What happens when our criminal justice system uses algorithms to help judges determine bail, sentencing, and parole?


Earlier this month, a group of researchers from Harvard and MIT directed an open letter to the Massachusetts Legislature to inform its consideration of risk assessment tools as part of ongoing criminal justice reform efforts in the Commonwealth. Risk assessment (RA) tools are pieces of software that courts use to assess the risk posed by a particular criminal defendant in a particular set of circumstances. Senate Bill 2185 — passed by the Massachusetts Senate on October 27, 2017 — mandates implementation of RA tools in the pretrial stage of criminal proceedings.

In this episode of the Berkman Klein Center podcast, The Platform, Professor Chris Bavitz, Managing Director of the Cyberlaw Clinic, discusses some of the concerns and opportunities related to the use of risk assessment tools, as well as some of the related work the Berkman Klein Center is doing as part of the Ethics and Governance of AI initiative in partnership with the MIT Media Lab.

What need are risk assessment tools addressing? Why would we want to implement them?

Well, some people would say that they’re not addressing any need and ask why we would ever use a computer program when doing any assessments. But I think that there are some ways in which they’re helping to solve problems, particularly around consistency. Another potential piece of it, and this is where we start to get sort of controversial, is that the criminal justice system is very biased and has historically treated racial minorities and other members of marginalized groups poorly. A lot of that may stem from human biases that creep in anytime you have one human evaluating another human being. So there’s an argument to be made that if we can do risk scoring right and turn it into a relatively objective process, we might remove from judges the kind of discretion that leads to biased decisions.

Are we there yet? Can these tools eliminate bias like that?

My sense is that from a computer science perspective we’re not there. In general, these kinds of technologies that use machine learning are only as good as the data on which they’re trained. So if I’m trying to decide whether you’re going to come back for your hearing in six months, the only information that I have to train a risk scoring tool to give me a good prediction on that front is data about people like you who came through the criminal justice system in the past. And if we take as a given that the whole system is biased, then the data coming out of that system is biased. And when we feed that data to a computer program, the results are going to be biased.

And we don’t know what actually goes into these tools?

Many of the tools that are in use in states around the country are tools that are developed by private companies. So with most of the tools we do not have a very detailed breakdown of what factors are being considered, what relative weights are being given to each factor, that sort of thing. So one of the pushes for advocates in this area is that at the very least we need more transparency.

Tell me about the Open Letter to the Legislature. Why did you write it?

The Massachusetts Senate and House are in the process of considering criminal justice reform broadly speaking in Massachusetts. The Senate bill has some language in it that suggests that risk scoring tools should be adopted in the Commonwealth and that we should take steps to make sure that they’re not biased. And a number of us, most of whom are involved in the Berkman and MIT Media Lab AI Ethics and Governance efforts, signed onto this open letter to the Mass Legislature that basically said, “Look these kinds of tools may have a place in the system, but simply saying ‘Make sure they’re not biased’ is not enough. And if you’re going to go forward, here are a whole bunch of principles that we want you to adhere to,” basically trying to set up processes around both the procurement or development of the tool, the implementation of the tool, the training of the judges on how to use it and what the scores really mean and how they should fit into their legal analysis, and then ultimately the rigorous evaluation of the outcomes. Are these tools actually having the predictive value that was promised? How are we doing on the bias front? Does this seem to be generating results that are biased in statistically significant ways?

What are you hoping will happen next?

I think we would view part of our mission here at Berkman Klein as making sure that this is the subject of vigorous debate. Informed debate, to be clear, because I think that sometimes the debate about this devolves into either that technology is going to solve all our problems, or it’s a dystopian future with robotic judges that are going to sentence us to death, and I don’t think it’s either of those things. Having this conversation in a way that is nuanced and responsible will be really difficult, but I think it’s something we absolutely have to do.

This initiative at Berkman Klein and MIT is the Ethics and Governance of Artificial Intelligence Initiative, but there’s nothing about anything we’ve talked about here that really has to do with artificial intelligence where the computer program is learning and evolving and changing and adapting over time. But that’s coming. And the more we get used to these kinds of systems working in the criminal justice system and spitting out risk scores that judges take into account, the more comfortable we’re going to be as the computing power increases and the autonomy of these programs increases.

I don’t mean to be too dystopic about it and say that bad stuff is coming, but it’s only a matter of time. It’s happening in our cars, and it’s happening in our news feeds on social media sites. More and more decisions are being made by algorithms. And anytime we get a technological intervention in a system like this, particularly where people’s freedom is at stake, I think we want to tread really carefully, recognizing that the next iteration of this technology is going to be more extensive, and raise even more challenging questions.


Subscribe to us on SoundCloud, iTunes, or RSS.


A Pessimist’s Guide to the Future of Technology

featuring Dr. Ian Bogost, Professor of Interactive Computing at the Georgia Institute of Technology, in conversation with Jeffrey Schnapp, Professor of Romance Languages & Literature, Harvard Graduate School of Design

Two decades of technological optimism in computing have proven foolhardy. Let’s talk about new ways to anticipate what might go right and wrong, using a technology that has not yet mainstreamed—autonomous vehicles—as a test case.

Parent Event: Berkman Klein Luncheon Series | Event Date: Dec 12, 2017, 12:00 pm

Tuesday, December 12, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Milstein East C (HLS campus map)
RSVP required to attend in person
Event will be live webcast at 12:00 pm

Since the rise of the web in the 1990s, technological skeptics have always faced resistance. To question the virtue and righteousness of tech, and especially computing, was seen as truculence, ignorance, or luddism. But today, the real downsides of tech, from fake news to data breaches to AI-operated courtrooms to energy-sucking bitcoin mines, have become both undeniable and somewhat obvious in retrospect.

In light of this new technological realism, perhaps there is an appetite for new ways to think about and plan for the future of technology, ways that anticipate what might go right and wrong once unproven tech mainstreams quickly. As a test case, this talk will consider autonomous vehicles, a technology that has not yet mainstreamed.

About Ian

Dr. Ian Bogost is an author and an award-winning game designer. He is the Ivan Allen College Distinguished Chair in Media Studies and Professor of Interactive Computing at the Georgia Institute of Technology, where he also holds an appointment in the Scheller College of Business. Bogost is also Founding Partner at Persuasive Games LLC, an independent game studio, and a Contributing Editor at The Atlantic. He is the author or co-author of ten books, including Unit Operations: An Approach to Videogame Criticism and Persuasive Games: The Expressive Power of Videogames.

Bogost is also the co-editor of the Platform Studies book series at MIT Press, and the Object Lessons book and essay series, published by The Atlantic and Bloomsbury.

Bogost’s videogames about social and political issues cover topics as varied as airport security, consumer debt, disaffected workers, the petroleum industry, suburban errands, pandemic flu, and tort reform. His games have been played by millions of people and exhibited or held in collections internationally, at venues including the Smithsonian American Art Museum, the Telfair Museum of Art, The San Francisco Museum of Modern Art, The Museum of Contemporary Art, Jacksonville, the Laboral Centro de Arte, and The Australian Centre for the Moving Image.

His independent games include Cow Clicker, a Facebook game send-up of Facebook games that was the subject of a Wired magazine feature, and A Slow Year, a collection of videogame poems for Atari VCS, Windows, and Mac, which won the Vanguard and Virtuoso awards at the 2010 IndieCade Festival.

Bogost holds a Bachelor's degree in Philosophy and Comparative Literature from the University of Southern California, and a Master's and Ph.D. in Comparative Literature from UCLA. He lives in Atlanta.

About Jeffrey

Jeffrey Schnapp is Professor of Romance Languages & Literature on the teaching faculty of Harvard’s Graduate School of Design; Director of metaLAB (at) Harvard; and a faculty director of the Berkman Klein Center for Internet & Society. A cultural historian with research interests extending from Roman antiquity to the present, his most recent books are The Electric Information Age Book (a collaboration with the designer Adam Michaels; Princeton Architectural Press, 2012) and Italiamerica II (Il Saggiatore, 2012). His pioneering work in the domains of digital humanities and digitally augmented approaches to cultural programming includes curatorial collaborations with the Triennale di Milano, the Cantor Center for the Visual Arts, the Wolfsonian-FIU, and the Canadian Center for Architecture. His Trento Tunnels project (a 6,000 sq. meter pair of highway tunnels in Northern Italy repurposed as a history museum) was featured in the Italian pavilion of the 2010 Venice Biennale and at the MAXXI in Rome in RE-CYCLE: Strategie per la casa, la città e il pianeta (fall-winter 2011).


Charting a Roadmap to Ensure AI Benefits All


An international symposium aimed at building capacity and exploring ideas for data democratization and inclusion in the age of AI.


AI-based technologies — and the vast datasets that power them — are reshaping a broad range of sectors of the economy and are increasingly affecting the ways in which we live our lives. But to date these systems remain largely the province of a few large companies and powerful nations, raising concerns over how they might exacerbate inequalities and perpetuate bias against underserved and underrepresented populations.

In early November, on behalf of a global group of Internet research centers known as the Global Network of Internet & Society Centers (NoC), the Institute for Technology & Society of Rio de Janeiro and the Berkman Klein Center for Internet & Society at Harvard University co-organized a three-day symposium on these topics in Brazil. The event brought together representatives from academia, advocacy groups, philanthropies, media, policy, and industry from more than 20 nations to start identifying and implementing ways to make the class of technologies broadly termed “AI” more inclusive.

The symposium — attended by about 170 people from countries including Nigeria, Uganda, South Africa, Kenya, Egypt, India, Japan, Turkey, and numerous Latin American and European nations — was intended to build collaborative partnerships and identify research questions as well as action items. These may include efforts to draft a human rights or regulatory framework for AI; define ways to democratize data access and audit algorithms and review their effects; and commit to designing and deploying AI that incorporates the perspectives of traditionally underserved and underrepresented groups, which include urban and rural poor communities, women, youth, LGBTQ individuals, ethnic and racial groups, and people with disabilities.

Read more about this event on our Medium post


A Layered Model for AI Governance


Publication Date: 20 Nov 2017 | External Links: Download from DASH, Download from IEEE Internet Computing

Abstract
AI-based systems are “black boxes,” resulting in massive information asymmetries between the developers of such systems and consumers and policymakers. In order to bridge this information gap, this article proposes a conceptual framework for thinking about governance for AI.

Many sectors of society are rapidly adopting digital technologies and big data, resulting in the quiet and often seamless integration of AI, autonomous systems, and algorithmic decision-making into billions of human lives[1][2]. AI and algorithmic systems already guide a vast array of decisions in both private and public sectors. For example, private global platforms, such as Google and Facebook, use AI-based filtering algorithms to control access to information. AI algorithms that control self-driving cars must decide how to weigh the safety of passengers and pedestrians[3]. Various applications, including security and safety decision-making systems, rely heavily on AI-based face recognition algorithms. And a recent study from Stanford University describes an AI algorithm that can deduce the sexuality of people on a dating site with up to 91 percent accuracy[4]. Alarmed by the capabilities evidenced in this study, and as AI technologies move toward broader adoption, some voices in society have expressed concern about the unintended consequences and potential downsides of widespread use of these technologies.

To ensure transparency, accountability, and explainability for the AI ecosystem, our governments, civil society, the private sector, and academia must be at the table to discuss governance mechanisms that minimize the risks and possible downsides of AI and autonomous systems while harnessing the full potential of this technology[5]. Yet the process of designing a governance ecosystem for AI, autonomous systems, and algorithms is complex for several reasons. As researchers at the University of Oxford point out[3], separate regulation solutions for decision-making algorithms, AI, and robotics could misinterpret legal and ethical challenges as unrelated, which is no longer accurate in today’s systems: algorithms, hardware, software, and data are always part of AI and autonomous systems. Regulating ahead of time is difficult for any kind of industry; although AI technologies are evolving rapidly, they are still in the development stages. A global AI governance system must be flexible enough to accommodate cultural differences and bridge gaps across different national legal systems. While there are many approaches we can take to design a governance structure for AI, one option is to take inspiration from the development and evolution of governance structures that act on the Internet environment. Thus, here we discuss different issues associated with governance of AI systems, and introduce a conceptual framework for thinking about governance for AI, autonomous systems, and algorithmic decision-making processes.


Accountability of AI Under the Law: The Role of Explanation


The paper reviews current societal, moral, and legal norms around explanations, and then focuses on the different contexts under which an explanation is currently required under the law. It ultimately finds that, at least for now, AI systems can and should be held to a similar standard of explanation as humans currently are.

Publication Date: 27 Nov 2017 | External Links: Download from SSRN, Download from DASH, Download from arXiv.org

by Finale Doshi-Velez and Mason Kortz

for the Berkman Klein Center Working Group on Explanation and the Law:
Chris Bavitz, Harvard Law School; Berkman Klein Center for Internet & Society at Harvard University 
Ryan Budish, Berkman Klein Center for Internet & Society at Harvard University
Finale Doshi-Velez, John A. Paulson School of Engineering and Applied Sciences, Harvard University
Sam Gershman, Department of Psychology and Center for Brain Science, Harvard University
Mason Kortz, Harvard Law School Cyberlaw Clinic
David O'Brien, Berkman Klein Center for Internet & Society at Harvard University
Stuart Shieber, John A. Paulson School of Engineering and Applied Sciences, Harvard University
James Waldo, John A. Paulson School of Engineering and Applied Sciences, Harvard University
David Weinberger, Berkman Klein Center for Internet & Society at Harvard University
Alexandra Wood, Berkman Klein Center for Internet & Society at Harvard University

Abstract

The ubiquity of systems using artificial intelligence or "AI" has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before; applications range from clinical decision support to autonomous driving and predictive policing. That said, common-sense reasoning [McCarthy, 1960] remains one of the holy grails of AI, and there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014].

There are many ways to hold AI systems accountable. In this work, we focus on one: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.

Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there are also important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome.

These consistencies allow us to list the technical considerations that arise if we want AI systems to provide the kinds of explanations that are currently required of humans under the law. Contrary to the popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should often be technically feasible, though it may sometimes be practically onerous; there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that, for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard.

The authors have invited researchers, technologists, and policy makers to engage with the ideas outlined in the paper by emailing mkortz@cyber.harvard.edu and finale@seas.harvard.edu. For questions and comments related to broader AI themes and/or related activities of the Ethics and Governance of Artificial Intelligence Initiative, please email ai-questions@cyber.harvard.edu.

Producer Intro Authored by
Categories: Tech-n-law-ogy

Designing Artificial Intelligence to Explain Itself

Subtitle A new working paper maps out critical starting points for thinking about explanation in AI systems. Teaser

As we integrate artificial intelligence deeper into our daily technologies, it becomes important to ask “why” not just of people, but of systems. A new working paper from the Berkman Klein Center at Harvard University and the MIT Media Lab maps out critical starting points for thinking about explanation in AI systems. 

Thumbnail Image: 

“Why did you do that?” The right to ask that deceptively simple question and expect an answer creates a social dynamic of interpersonal accountability. Accountability, in turn, is the foundation of many important social institutions, from personal and professional trust to legal liability to governmental legitimacy and beyond.

As we integrate artificial intelligence deeper into our daily technologies, it becomes important to ask “why” not just of people, but of systems. However, human and artificial intelligences are not interchangeable. Designing an AI system to provide accurate, meaningful, human-readable explanations presents practical challenges, and our responses to those challenges may have far-reaching consequences. Setting guidelines for AI-generated explanations today will help us understand and manage increasingly complex systems in the future.

In response to these emerging questions, a new working paper from the Berkman Klein Center at Harvard University and the MIT Media Lab maps out critical starting points for thinking about explanation in AI systems. “Accountability of AI Under the Law: The Role of Explanation” is now available to scholars, policy makers, and the public.

“If we’re going to take advantage of all that AIs have to offer, we’re going to have to find ways to hold them accountable,” said Finale Doshi-Velez of Harvard’s John A. Paulson School of Engineering and Applied Sciences. “Explanation is one tool toward that end. We see a complex balance of costs and benefits, social norms, and more. To ground our discussion in concrete terms, we looked to ways that explanation currently functions in law.”

Doshi-Velez and Mason Kortz of the Berkman Klein Center and Harvard Law School Cyberlaw Clinic are lead authors of the paper, which is the product of an extensive collaboration within the Ethics and Governance of Artificial Intelligence Initiative, now underway at Harvard and MIT.

“An explanation, as we use the term in this paper, is a reason or justification for a specific decision made by an AI system: how a particular set of inputs leads to a particular outcome,” said Kortz. “A helpful explanation will tell you something about this process, such as the degree to which an input influenced the outcome, whether changing a certain factor would have changed the decision, or why two similar-looking cases turned out differently.”
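To make those categories concrete, here is a minimal, hypothetical sketch in Python. It is our illustration rather than anything from the paper: a toy scoring rule stands in for an opaque AI model, and two helper functions answer the kinds of questions Kortz describes, namely how much an input influenced the outcome and whether changing one factor would have changed the decision. All of the names, weights, and thresholds are invented for illustration.

from dataclasses import dataclass, replace

@dataclass
class Applicant:
    income: float          # annual income, in dollars (invented example feature)
    debt: float            # outstanding debt, in dollars
    years_employed: float  # years with current employer

def score(a: Applicant) -> float:
    # Toy stand-in for an opaque model: a weighted sum of the inputs.
    return 0.5 * a.income - 0.8 * a.debt + 2000 * a.years_employed

def decide(a: Applicant) -> bool:
    # The decision is the score compared against a fixed threshold.
    return score(a) > 20000

def factor_influence(a: Applicant, field: str, delta: float) -> float:
    # How much does the score move when one input moves by `delta`,
    # holding everything else fixed?
    nudged = replace(a, **{field: getattr(a, field) + delta})
    return score(nudged) - score(a)

def counterfactual_flips(a: Applicant, field: str, new_value: float) -> bool:
    # Would the decision have been different with this one input changed?
    return decide(replace(a, **{field: new_value})) != decide(a)

alice = Applicant(income=50000, debt=10000, years_employed=2)
print(decide(alice))                               # True: score is 21000
print(factor_influence(alice, "debt", 10000))      # -8000.0: more debt lowers the score
print(counterfactual_flips(alice, "debt", 30000))  # True: that much debt flips the decision

Real models are rarely this transparent, but the two queries themselves, input influence and counterfactual change, are exactly the questions the paper says we typically ask of human decision-makers.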

The paper reviews current societal, moral, and legal norms around explanations, and then focuses on the different contexts under which an explanation is currently required under the law. It ultimately finds that, at least for now, AI systems can and should be held to a similar standard of explanation as humans currently are.

“It won’t necessarily be easy to produce explanations from complex AI systems that are processing enormous amounts of data,” Kortz added. “Humans are naturally able to describe our internal processes in terms of cause and effect, although not always with great accuracy. AIs, on the other hand, will have to be intentionally designed with the capacity to generate explanations in mind. This paper is the starting point for a series of discussions that will be increasingly important in the years ahead. We’re hoping this generates some constructive feedback from inside and outside the Initiative.”

Guided by the Berkman Klein Center at Harvard and the MIT Media Lab, the Ethics and Governance of Artificial Intelligence Initiative aims to foster global conversations among scholars, experts, advocates, and leaders from a range of industries. By developing a shared framework to address urgent questions surrounding AI, the Initiative aims to help public and private decision-makers understand and plan for the effective use of AI systems for the public good. More information at: https://cyber.harvard.edu/research/ai

Categories: Tech-n-law-ogy

A Legal Anatomy of AI-generated Art: Part I

Teaser

This Comment, published in the JOLT Digest, is the first in a two-part series on how lawyers should think about art generated by artificial intelligences, particularly with regard to copyright law. This first part charts the anatomy of the AI-assisted artistic process.

Thumbnail Image: 

This Comment by Jessica Fjeld and Mason Kortz, originally published in the Journal of Law and Technology's online Digest, is the first in a two-part series on how lawyers should think about art generated by artificial intelligences, particularly with regard to copyright law. This first part charts the anatomy of the AI-assisted artistic process. The second Comment in the series examines how copyright interests in these elements interact and provides practice tips for lawyers drafting license agreements or involved in disputes around AI-generated artwork.

Advanced algorithms that display cognition-like processes, popularly called artificial intelligences or “AIs,” are capable of generating sophisticated and provocative works of art.[1] These technologies differ from widely-used digital creation and editing tools in that they are capable of developing complex decision-making processes, leading to unexpected outcomes. Generative AI systems and the artwork they produce raise mind-bending questions of ownership, from broad policy concerns[2] to the individual interests of the artists, engineers, and researchers undertaking this work. Attorneys, too, are beginning to get involved, called on by their clients to draft licenses or manage disputes.

The Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society has recently developed a practice in advising clients in the emerging field at the intersection of art and AI. We have seen for ourselves how attempts to negotiate licenses or settle disputes without a common understanding of the systems involved may result in vague and poorly understood agreements, and worse, unnecessary conflict between parties. More often than not, this friction arises between reasonable parties who are open to compromise, but suffer from a lack of clarity over what, exactly, is being negotiated. In the course of solving such problems, we have dissected generative AIs and studied their elements from a legal perspective. The result is an anatomy that forms the foundation of our thinking—and our practice—on the subject of AI-generated art. When the parties to an agreement or dispute share a common vocabulary and understanding of the nature of the work, many areas of potential conflict evaporate.

Read the full comment at JOLTdigest.

Categories: Tech-n-law-ogy

Apply for a Spot in CopyrightX 2018

Teaser

CopyrightX is a networked course that explores the current law of copyright; the impact of that law on art, entertainment, and industry; and the ongoing debates concerning how the law should be reformed. 

Thumbnail Image: 

The application for the CopyrightX online sections will be open from Oct. 16 - Dec. 13. See CopyrightX:Sections for details.

CopyrightX is a networked course that explores the current law of copyright; the impact of that law on art, entertainment, and industry; and the ongoing debates concerning how the law should be reformed. Through a combination of recorded lectures, assigned readings, weekly seminars, and live interactive webcasts, participants in the course examine and assess the ways in which the copyright system seeks to stimulate and regulate creative expression.

In 2013, HarvardX, Harvard Law School, and the Berkman Klein Center for Internet & Society launched an experiment in distance education: CopyrightX, the first free and open distance learning course on law. After five successful offerings, CopyrightX is an experiment no longer. Under the leadership of Professor William Fisher, who created and directs the course, CopyrightX will be offered for a sixth time from January to May 2018. 

Three types of courses make up the CopyrightX Community:
•    a residential course on Copyright Law, taught by Prof. Fisher to approximately 100 Harvard Law School students;
•    an online course divided into sections of 25 students, each section taught by a Harvard Teaching Fellow;
•    a set of affiliated courses based at educational institutions worldwide, each taught by an expert in copyright law.

Participation in the 2018 online sections is free and is open to anyone at least 13 years of age, but enrollment is limited. Admission to the online sections will be administered through an open application process that ends on December 13, 2017. We welcome applicants from all countries, as well as lawyers and non-lawyers alike. To request an application, visit http://brk.mn/applycx18. For more details, see CopyrightX:Sections. (The criteria for admission to each of the affiliated courses are set by the course’s instructor. Students who will enroll in the affiliated courses may not apply to the online sections.)

We encourage widespread promotion of the application through personal and professional networks and social media. Feel free to circulate: 
•    this blog post 
•    the application page 

Categories: Tech-n-law-ogy

An Open Letter to the Members of the Massachusetts Legislature Regarding the Adoption of Actuarial Risk Assessment Tools in the Criminal Justice System

Teaser

This open letter — signed by Harvard and MIT-based faculty, staff, and researchers — is directed to the Massachusetts Legislature to inform its consideration of risk assessment tools as part of ongoing criminal justice reform efforts in the Commonwealth.

Publication Date 9 Nov 2017 External Links: Download from DASH | Read the letter on Medium

This open letter — signed by Harvard and MIT-based faculty, staff, and researchers Chelsea Barabas, Christopher Bavitz, Ryan Budish, Karthik Dinakar, Cynthia Dwork, Urs Gasser, Kira Hessekiel, Joichi Ito, Ronald L. Rivest, Madars Virza, and Jonathan Zittrain — is directed to the Massachusetts Legislature to inform its consideration of risk assessment tools as part of ongoing criminal justice reform efforts in the Commonwealth.


Producer Intro Authored by
Categories: Tech-n-law-ogy

Plain Text: The Poetics of Computation

Subtitle featuring Dennis Tenen, Assistant Professor of English and Comparative Literature at Columbia University Teaser

Computers—from electronic books to smart phones—play an active role in our social lives. Our technological choices thus entail theoretical and political commitments. Dennis Tenen takes up today's strange enmeshing of humans, texts, and machines to argue that our most ingrained intuitions about texts are profoundly alienated from the physical contexts of their intellectual production.

Event Date Nov 28 2017 12:00pm to Nov 28 2017 12:00pm Thumbnail Image: 

Tuesday, November 28, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Milstein East C, Room 2036 (HLS campus map)
RSVP required to attend in person

Watch Live Starting at 12pm

If you experience a video disruption, reload to refresh the webcast.

We are pleased to welcome back Berkman Klein Fellow alumnus, Dennis Tenen, who joins us to discuss his new book, Plain Text: The Poetics of Computation (Stanford UP, 2017).

This book challenges the ways we read, write, store, and retrieve information in the digital age. Computers—from electronic books to smart phones—play an active role in our social lives. Our technological choices thus entail theoretical and political commitments. Dennis Tenen takes up today's strange enmeshing of humans, texts, and machines to argue that our most ingrained intuitions about texts are profoundly alienated from the physical contexts of their intellectual production. Drawing on a range of primary sources from both literary theory and software engineering, he makes a case for a more transparent practice of human–computer interaction. Plain Text is thus a rallying call, a frame of mind as much as a file format. It reminds us, ultimately, that our devices also encode specific modes of governance and control that must remain available to interpretation.


Dennis's Biography:

Dennis Tenen's research happens at the intersection of people, texts, and technology.

His recent work appears on the pages of Amodern, boundary 2, Computational Culture, Modernism/modernity, New Literary History, Public Books, and LA Review of Books on topics that range from book piracy to algorithmic composition, unintelligent design, and history of data visualization.

He teaches a variety of classes in fields of literary theory, new media studies, and critical computing in the humanities.

Tenen is a co-founder of Columbia's Group for Experimental Methods in the Humanities and author of Plain Text: The Poetics of Computation (Stanford UP, 2017).

For an updated list of projects, talks, and publications please visit dennistenen.com.


Categories: Tech-n-law-ogy

Harvard Open Access Project Part-Time Research Assistant Opportunity

The Harvard Open Access Project (HOAP) at the Berkman Klein Center for Internet & Society is hiring a part-time research assistant!

The Harvard Open Access Project (HOAP) fosters open access to research, within Harvard and beyond, using a combination of education, consultation, collaboration, research, tool-building, and direct assistance. HOAP is a project within the Berkman Klein Center for Internet & Society at Harvard University. For more detail, see the project home page at http://cyber.harvard.edu/hoap.

The Research Assistant will contribute to the Open Access Tracking Project (OATP) using the TagTeam social-tagging platform, contribute to the Open Access Directory (OAD), perform occasional research, help with grant reporting, and strategize about open access inside and outside Harvard University. The position offers remote work options, flexible scheduling, and community work spaces at the Berkman Klein Center for Internet & Society.

The position will remain open until filled, and we plan to begin reviewing applications as soon as possible.

Work Requirements/Benefits Information:

This part-time position is 17.25 hours per week. Pay starts at $11.50 per hour, with the possibility of more depending on qualifications and experience. This position does not include benefits. The role includes an expectation of regular weekend work as needed to support time-sensitive projects (approximately 2 to 4 of the 17.25 weekly hours). The Research Assistant must be based in Massachusetts. The work may be done remotely, but will include regular face-to-face meetings in Cambridge, Massachusetts to review progress and discuss new ideas. Unfortunately, we are not able to sponsor a visa for this position. This position is approved through the end of August 2018.

To Apply:

Please send your current CV or resume and a cover letter summarizing your interest and experience to Peter Suber at psuber@cyber.law.harvard.edu with “HOAP application” in the subject line.


Categories: Tech-n-law-ogy

#FellowFriday! Get to know the 2017-2018 Fellows

This series of short video interviews highlights the new 2017-2018 Berkman Klein fellows. Check back every Friday for new additions!

published October 27, 2017

Tell us about a research question you're excited to address this year and why it matters to you.
This year I'm really trying to understand how communication on social media leads to offline violence. So I'm studying a Twitter dataset of young people in Chicago to better understand how things like grief and trauma and love and happiness all play out on Twitter and the relationship between that communication and offline gun violence. 

I started my research process in Chicago and I have been just completely troubled by the amount of violence that happens in the city. And one of the ways in which that violence happens or occurs is through social media communication. And so I want to be a part of the process of ending violence through learning how young people communicate online.  

***

published October 27, 2017

Tell us about a research question you're excited to address this year and why it matters to you.
I’m working on the ethics and governance of artificial intelligence project, here at Berkman Klein. There are a lot of questions as to how exactly incorporating this new technology into different social environments is really going to affect people, and I think one of the most important things is getting people’s perspectives who are actually going to be impacted. So, I’m looking forward to participating in some early educational initiatives and some discussions that we can post online in blog posts and things, to help people feel like they’re more familiar with this subject and more comfortable, because it can be really intimidating.

Why should people care about this issue?
Right now, this technology or early versions of machine learning and artificial intelligence applications are being used in institutions ranging from the judicial system to financial institutions, and they’re really going to impact everyone. I think it’s important for people to talk about how they’re being implemented and what the consequences of that are for them, and that we should have an open discussion, and that people can’t do that if they’re unfamiliar with the technology or why it’s being employed. I think that everyone needs to have at least a basic familiarity with these things because in ten years there’s not going to be an institution that doesn’t use it in some way.

How did you become interested in this topic?
I grew up in a pretty low-income community that didn’t have a lot of access to these technologies initially, and so I was very new to even using a computer when I got into college. It’s something that was hard for me initially, but that I started really getting interested in, partially because I’m a huge sci-fi fan now, and so I think that sci-fi and fiction really open your eyes to both the opportunities and the potential costs of using different advanced technologies. I wanted to be part of the conversation about how we would actually approach a future where these things were possible, and to make sure that we would use them in ways that benefit us rather than realizing the scarier, more dystopian visions of what could happen.

What excites you most about technology and its potential impact on our world?
Software is so scalable that we can offer more resources and more information to so many more people at a lower cost. We’re also at a time where we have so much more information than we’ve ever had in history, so things like machine learning and artificial intelligence can really help to open up the answers that we can get from all of that data, and maybe surface some very non-intuitive answers that people just have not been able to find themselves.

What scares you most?
I think that the thing that scares me most is that artificial intelligence software is going to be employed in institutions and around populations that don’t understand both what it has to offer and its limitations. It will just be taken as objective fact or a scientific opinion that you can’t question, when it’s important to realize that this is something that is crafted by humans, that can be fallible, and that can be employed in different ways with different outcomes. I think my biggest fear is that we won’t question it, and that these things will just be deployed without any kind of public dialogue or pushback if they have negative consequences.


Categories: Tech-n-law-ogy

The Slippery Slope of Internet Censorship in Egypt

Teaser

Explaining the recent dramatic increase in Internet censorship in Egypt, examining the Twitter conversation around website blocking in Egypt, and identifying ways that users disseminate banned content.

Thumbnail Image: 

The first Internet Monitor research bulletin summarizes the recent, dramatic increase in Internet censorship in Egypt, examines the Twitter conversation around website blocking in Egypt, and identifies ways that users disseminate banned content.

Internet filtering in Egypt illustrates how censorship can be a slippery slope. After an extended period of open Internet access lasting several years following the January 2011 revolution, the government dramatically increased its censorship of political content between December 2015 and September 2017. What started with the filtering of one regional news website in 2015 had grown, by October 2017, to the filtering of over 400 websites. The blocked websites include local and regional news and human rights websites, websites based in or affiliated with Qatar, and websites of Internet privacy and circumvention tools. This bulletin examines how Egyptian Internet users have reacted to the pervasive blocking and describes their efforts to counter the censorship. These efforts center on disseminating banned content through platforms protected by encrypted HTTPS connections, such as Facebook and Google Drive, which makes individual objectionable URLs challenging for the censors to block.
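The technical point behind this tactic is worth spelling out. For an https:// URL, an on-path censor can typically observe only the destination hostname, via the DNS lookup and the TLS Server Name Indication field, while the path and query string travel inside the encrypted channel. The short Python sketch below is our illustration, not the bulletin's; it assumes a conventional setup without encrypted DNS or Encrypted Client Hello, and the example URLs are made up.

from urllib.parse import urlparse

def censor_view(url: str) -> dict:
    # Split a URL into what an on-path observer can see versus what
    # TLS encryption hides from it.
    p = urlparse(url)
    visible = {"scheme": p.scheme, "host": p.hostname}
    hidden = {"path": p.path, "query": p.query}
    if p.scheme != "https":
        # Plain HTTP exposes the full URL, so blocking a single page is easy.
        visible.update(hidden)
        hidden = {}
    return {"visible_to_censor": visible, "encrypted": hidden}

# Over HTTPS the censor sees only the host, so blocking one shared file
# means blocking all of drive.google.com:
print(censor_view("https://drive.google.com/file/d/EXAMPLE_ID/view"))

# Over plain HTTP the full URL is visible, so a single article can be blocked:
print(censor_view("http://example-news-site.test/banned-article"))

Mirroring banned material onto large HTTPS platforms therefore forces the censor into an all-or-nothing choice: block an entire widely used service, or let the individual page through.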

Read the complete bulletin on the Internet Monitor website.

Categories: Tech-n-law-ogy

Badges of Oppression, Positions of Strength: Digital Black Feminist Discourse and the Legacy of Black Women’s Technology Use

Subtitle featuring Catherine Knight Steele, University of Maryland Teaser

The use of online technology by black feminist thinkers has changed the principles, praxis, and product of black feminist writing and simultaneously has changed the technologies themselves. Texts from the antebellum South through the 20th century contextualize the contemporary relationship between black women and digital media.

Parent Event Berkman Klein Luncheon Series Event Date Nov 21 2017 12:00pm to Nov 21 2017 12:00pm Thumbnail Image: 

Tuesday, November 21, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
*VENUE CHANGE* Wasserstein Hall, Room 3019 (HLS campus map)
RSVP required to attend in person
Event will be live webcast at 12:00 pm.

Black women have historically occupied a unique position, existing in multiple worlds, manipulating multiple technologies, and maximizing their resources for survival in a system created to keep them from thriving. I present a case for the unique development of black women’s relationship with technology by analyzing historical texts that explore the creation of black womanhood in contrast to white womanhood and black manhood in early colonial and antebellum periods in the U.S. This study of Black feminist discourse online situates current practices in the context of historical use and mastery of communicative technology by the black community broadly and black women more specifically. By tracing the history of black feminist thinkers in relationship to technology we move from a deficiency model of black women’s use of technology to recognizing their digital skills and internet use as part of a long developed expertise. 

About Catherine

Catherine Knight Steele is an Assistant Professor of Communication at the University of Maryland, College Park and the Director of the Andrew W. Mellon-funded African American Digital Humanities Initiative (AADHum). As the director of AADHum, Dr. Steele works to foster a new generation of scholars and scholarship at the intersection of African American Studies and Digital Humanities and Digital Studies. She earned her Ph.D. in Communication from the University of Illinois at Chicago. Her research centers on race, gender, and media, with a specific focus on African American culture and discourse in traditional and new media. She examines representations of marginalized communities in the media and how traditionally marginalized populations resist oppression and utilize online technology to create spaces of community. Dr. Steele has published in new media journals such as Social Media & Society and Television & New Media, and in the edited volume Intersectional Internet (Ed. S. Noble & B. Tynes) and the upcoming edited collection A Networked Self: Birth, Life, Death (Ed. Z. Papacharissi). She is currently working on a book manuscript about Digital Black Feminism.


Categories: Tech-n-law-ogy
