Skin in the Game: Modulate AI and Addressing the Legal and Ethical Challenges of Voice Skin Technology

A BKC Policy Practice Case Study & Educational Toolkit

The following is an educational toolkit from the BKC Policy Practice on AI, comprising a case study, a teaching note, and a background primer. Together, these materials illuminate some of the challenges of moving from AI principles to practice. Download the case study and its companion resources in a single PDF at the links below. Contact us at ai@cyber.harvard.edu with questions or to let us know how you're using these materials.

Skin in the Game: Modulate AI and Addressing the Legal and Ethical Challenges of Voice Skin Technology follows MIT alums and friends Carter Huffman and Mike Pappas, who co-founded Modulate in 2017 to commercialize their technology for creating synthetic voice skins. By applying concepts from the class of artificial intelligence systems known as Generative Adversarial Networks (GANs), the two had developed a novel approach to making one voice sound like another in real time. Although Modulate had extremely limited human and financial resources, Huffman and Pappas wanted to ensure that this technology, with its ability to match the timbre of almost any individual on the planet, would not be misused. How could they simultaneously push Modulate forward, maintain its technological and competitive edge, and keep their investors happy, while also upholding a code of ethics? For a tiny company like Modulate, what did that look like?
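Modulate has not published the details of its system, but the GAN concept mentioned above can be illustrated with a minimal sketch: a generator network learns to convert voice features toward a target speaker, while a discriminator network learns to distinguish real target-voice frames from converted ones, each improving by competing against the other. Everything below (the 80-dimensional feature frames, the network shapes, and the training loop) is an illustrative assumption, not Modulate's actual method.

import torch
import torch.nn as nn

FEAT_DIM = 80  # assumed: per-frame spectral features (e.g., mel bins)

# Generator: maps source-voice feature frames toward target-voice frames.
generator = nn.Sequential(
    nn.Linear(FEAT_DIM, 256), nn.ReLU(),
    nn.Linear(256, FEAT_DIM),
)

# Discriminator: scores how much a frame sounds like the target voice.
discriminator = nn.Sequential(
    nn.Linear(FEAT_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(source_frames, target_frames):
    """One adversarial round on batches of feature frames."""
    # 1) Train the discriminator: real target frames should score 1,
    #    converted (generated) frames should score 0.
    fake = generator(source_frames).detach()
    d_loss = (bce(discriminator(target_frames), torch.ones(len(target_frames), 1))
              + bce(discriminator(fake), torch.zeros(len(fake), 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: try to make converted frames score as real,
    #    i.e., fool the discriminator.
    fake = generator(source_frames)
    g_loss = bce(discriminator(fake), torch.ones(len(fake), 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Dummy tensors standing in for extracted voice features.
src = torch.randn(16, FEAT_DIM)
tgt = torch.randn(16, FEAT_DIM)
print(train_step(src, tgt))

In a real-time voice skin, the generator would additionally be conditioned on the target speaker's identity and run with low latency over a streaming audio pipeline; the adversarial loss above captures only the core of the idea.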

Written by Rachel Gordon, Research Associate, Teaching, Learning, and Curriculum Solutions, Harvard Law School Library, and Ryan Budish, Assistant Research Director, Berkman Klein Center for Internet & Society at Harvard University, this case was developed as a basis for discussion in educational and training environments. It is intended to spark discussion among students as they put themselves in the place of the Modulate co-founders and explore the conflicts and tensions the two faced. The case is not an endorsement of any one approach or business; instead, it highlights the complex and dynamic challenges that AI ethics can present in the real world.

Listen to the introduction of the case study, narrated using voice skins from Modulate.

This case study is produced by BKC Policy Practice: Artificial Intelligence (AI) at the Berkman Klein Center for Internet & Society at Harvard University and the Case Studies Program at Harvard Law School. BKC Policy Practice: AI is a public interest-oriented program that helps governmental, nonprofit, and private sector organizations implement AI best practices and turn AI principles into operational realities. The Case Studies Program supports a wide range of case development projects throughout Harvard Law School, working with HLS faculty to conceive, develop, edit, and publish innovative, experiential materials for the legal classroom.

Since 2018, the Berkman Klein Center and its Policy Practice: AI have hosted workshops with a variety of public and private sector organizations, creating a space for learning, knowledge-sharing, and capacity-building in which participating organizations work with small, agile teams composed of faculty, staff, students, and outside expert collaborators to identify key problems and create actionable outputs. Outputs range from policy frameworks to open educational resources and are developed with an eye towards inclusion, replicability, and broad usefulness. This case emerged from these workshops.
