The Berkman Klein Center for Internet & Society has launched a research, policy analysis, and network-building effort devoted to the study of harmful speech, in close collaboration with the Center for Communication Governance at National Law University in New Delhi and the Digitally Connected network, and in conjunction with the Global Network of Internet & Society Centers. This effort aims to develop research methods and protocols that enable and support robust cross-country comparisons; to study and document country experiences, including the policies and practices of governments and private companies as well as civil society initiatives and responses; and to build and expand research, advocacy, and support networks. Our efforts build upon many complementary projects and initiatives, including the Berkman Center’s ongoing work related to Youth-Oriented Online Hate Speech / Viral Peace, as well as the activities of various individuals and institutions within our networks.

Digital media, including social media, blogs, and other applications and platforms, offer numerous ways for users to interact with their peers, civil society, educational institutions, companies, and governments. Citizens are no longer merely readers or passive viewers. Instead, affordable Internet technology, tools, and applications enable individuals to become content creators and active drivers of and participants in public conversations. These expressions are often pro-social and positive in nature. However, online content may also threaten personal and community security, both online and offline. While some of these risks and challenges have recently gained the attention of educators and policymakers, others remain relatively unexamined. Among the areas of growing concern are online harmful behaviors defined as hate speech: speech that attacks a person or group on the basis of religious or political ideology, race, gender, social class, or sexual orientation, among other aspects of individual identity.
 
There is growing evidence, though much of it anecdotal, that hate speech online inspires others to respond in kind, escalating tensions and at times resulting in violence both online and offline. Despite rising concerns over its harmful effects, relatively little is known about the effectiveness of different approaches to reducing the deleterious impacts of hate speech online, the manner in which those approaches complement or counteract one another, and their relative costs and benefits.

This project aims, on the one hand, to contribute to the exploration, evaluation, and implementation of strategies to reduce the incidence and propagation of hate speech online; and, on the other hand, to preempt and avoid decisions and policies by companies and governments that inappropriately censor legitimate speech under the guise of controlling hate speech.

A note on language: we use the expression “hate speech” as a term of convenience, recognizing that there is disagreement over the use, relevance, and implications of different terms, such as hate speech, harmful speech, toxic speech, etc. We use “hate speech” in a broad, non-legal sense of the term.

These activities build upon many complementary projects and initiatives, including those found in our related resources.