
Balancing Transparency and Security in Open Research on Generative AI

JOIN US FOR A VIRTUAL CONVERSATION WITH SUE HENDRICKSON AND BRUCE SCHNEIER, THE SECOND IN A SERIES ON ACCOUNTABLE TECHNICAL OVERSIGHT OF GENERATIVE AI

Join Bruce Schneier, security researcher and affiliate at the Berkman Klein Center for Internet & Society at Harvard University, for a conversation on balancing security and transparency in open research on generative AI, moderated by Sue Hendrickson, Executive Director of the Berkman Klein Center.

In releasing new generative AI models, companies are blurring the line between ‘research’ and ‘product’ while struggling to balance transparency and security. Because these models are trained on public data, there are calls to operate them openly and as a public resource. Yet open-source models, while aspirational, may reduce the ability to curb malicious actors. To enable responsible transparency, models of ‘open research’ must contend with security concerns. AI research organizations and companies balance transparency against safety and risk differently, resulting in a variety of approaches to access to models, training data, and other components.

This conversation asks: How should companies weigh the tradeoffs between transparency and security when releasing their models and underlying training information? How should model builders and stakeholders balance the societal need to better understand these technologies against the security risks that might come from sharing training data or code? How do the risks shift as model availability becomes more decentralized?

This is the second in a series of virtual fireside chats exploring accountable technical oversight of generative AI. The first session, “How is generative AI changing the landscape of AI harms?”, took place on May 8 and was recorded. Learn more about the Berkman Klein Center’s project Responsible Generative AI: Accountable Technical Oversight.

About the speakers:

Sue Hendrickson, Executive Director of the Berkman Klein Center, is a leading legal and policy strategist focused on cutting-edge technology, intellectual property, and innovation. Her experience with complex legal, commercial, and public policy issues spans three decades of technology expansion, from the early days of AOL to today’s advanced AI. She has forged effective interdisciplinary and global alliances that enable leading international and civil society organizations, technology companies, investors, and philanthropists to embrace the promise of emerging technologies and mitigate their risks.

Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the New York Times best-selling author of 14 books, including Click Here to Kill Everybody, as well as hundreds of articles, essays, and academic papers. His influential newsletter Crypto-Gram and blog Schneier on Security are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University, a Lecturer in Public Policy at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation and AccessNow, and an advisory board member of EPIC and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.


Past Event
Date: Monday, May 15, 2023
Time: 1:00 PM - 1:30 PM ET
Location: Berkman Klein Center for Internet & Society (Virtual), Cambridge, MA 02138 US

You might also like


Projects & Tools

Responsible Generative AI: Accountable Technical Oversight

Generative AI is at a tipping point of adoption and impact, much like that of machine learning and AI years ago. But this time the stakes, regulatory environment, potential social…