
Freedom of Expression on the Internet

By William Fisher & Yochai Benkler

Last Updated May 24, 2001

Table of Contents

Introduction

Background

Current Controversies

Discussion Topics

Additional Resources

Introduction

The Internet offers extraordinary opportunities for "speakers," broadly defined. Political candidates, cultural critics, corporate gadflies -- anyone who wants to express an opinion about anything -- can make their thoughts available to a world-wide audience far more easily than has ever been possible before. A large and growing group of Internet participants has seized that opportunity.

Some observers find the resultant outpouring of speech exhilarating. They see in it nothing less than the revival of democracy and the restoration of community. Other observers find the amount -- and, above all, the kind -- of speech that the Internet has stimulated offensive or frightening. Pornography, hate speech, lurid threats -- these flourish alongside debates over the future of the Democratic Party and exchanges of views concerning flyfishing in Patagonia. This phenomenon has provoked various efforts to limit the kind of speech in which one may engage on the Internet -- or to develop systems to "filter out" the more offensive material.

This module examines some of the legal issues implicated by the increasingly bitter struggle between the advocates of "free speech" and the advocates of filtration and control.

 



Background

Before plunging into the details of the proliferating controversies over freedom of expression on the Internet, you need some background information on two topics. The first and more obvious is the Free-Speech Clause of the First Amendment to the United States Constitution. The relevance and authority of the First Amendment should not be exaggerated; as several observers have remarked, "on the Internet, the First Amendment is just a local ordinance." However, free-expression controversies that arise in the United States inevitably implicate the Constitution. And the arguments deployed in the course of American First-Amendment fights often inform or infect the handling of free-expression controversies in other countries. The upshot: First-Amendment jurisprudence is worth studying.

Unfortunately, that jurisprudence is large and arcane. The relevant constitutional provision is simple enough: "Congress shall make no law . . . abridging the freedom of speech, or of the press . . .." But the case law that, over the course of the twentieth century, has been built upon this foundation is complex. An extremely abbreviated outline of the principal doctrines would go as follows:

·        If a law gives no clear notice of the kind of speech it prohibits, it’s "void for vagueness."

·        If a law burdens substantially more speech than is necessary to advance a compelling government interest, it’s unconstitutionally "overbroad."

·        A government may not force a person to endorse any symbol, slogan, or pledge.

·        Governmental restrictions on the "time, place, and manner" in which speech is permitted are constitutional if and only if:

·        they are "content neutral," both on their face and as applied;

·        they leave substantial other opportunities for speech to take place; and

·        they "narrowly serve a significant state interest."

·        On state-owned property that does not constitute a "public forum," government may restrict speech in any way that is reasonable in light of the nature and purpose of the property in question.

·        Content-based governmental restrictions on speech are unconstitutional unless they advance a "compelling state interest." To this principle, there are six exceptions:

1. Speech that is likely to lead to imminent lawless action may be prohibited.
2. "Fighting words" -- i.e., words so insulting that people are likely to fight back -- may be prohibited.
3. Obscenity -- i.e., erotic expression, grossly or patently offensive to an average person, that lacks serious artistic or social value -- may be prohibited.
4. Child pornography may be banned whether or not it is legally obscene and whether or not it has serious artistic or social value, because it induces people to engage in lewd displays, and the creation of it threatens the welfare of children.
5. Defamatory statements may be prohibited. (In other words, the making of such statements may constitutionally give rise to civil liability.) However, if the target of the defamation is a "public figure," she must prove that the defendant acted with "malice." If the target is not a "public figure" but the statement involved a matter of "public concern," the plaintiff must prove that the defendant acted with negligence concerning its falsity.
6. Commercial speech that is misleading or that concerns illegal products may be banned; other restrictions on commercial speech are constitutional only if they directly advance a substantial state interest and suppress no more speech than is reasonably necessary.

If you are familiar with all of these precepts -- including the various terms of art and ambiguities they contain -- you're in good shape. If not, you should read some more about the First Amendment. A thorough and insightful study of the field may be found in Laurence Tribe, American Constitutional Law (2d ed.), chapter 12. Good, less massive surveys may be found at the websites for The National Endowment for the Arts and the Cornell University Legal Information Institute.

The second of the two kinds of background you might find helpful is a brief introduction to the current debate among academics over the character and desirability of what has come to be called "cyberdemocracy." Until a few years ago, many observers thought that the Internet offered a potential cure to the related diseases that have afflicted most representative democracies in the late twentieth century: voter apathy; the narrowing of the range of political debate caused in part by the inertia of a system of political parties; the growing power of the media, which in turn seems to reduce discussion of complex issues to a battle of "sound bites"; and the increasing influence of private corporations and other sources of wealth. All of these conditions might be ameliorated, it was suggested, by the ease with which ordinary citizens could obtain information and then cheaply make their views known to one another through the Internet.

A good example of this perspective is an article by Bernard Bell, where he suggests that “[t]he Internet has, in many ways, moved society closer to the ideal Justice Brennan set forth so eloquently in New York Times v. Sullivan. It has not only made debate on public issues more 'uninhibited, robust, and wide-open,' but has similarly invigorated discussion of non-public issues. By the same token, the Internet has empowered smaller entities and even individuals, enabling them to widely disseminate their messages and, indeed, reach audiences as broad as those of established media organizations.”

Recently, however, this rosy view has come under attack. The Internet, skeptics claim, is not a giant "town hall." The kinds of information flows and discussions it seems to foster are, in some ways, disturbing. One source of trouble is that the Internet encourages like-minded persons (often geographically dispersed) to cluster together in bulletin boards and other virtual clubs. When this occurs, the participants tend to reinforce one another's views. The resultant "group polarization" can be ugly. More broadly, the Internet seems at least potentially corrosive of something we have long taken for granted in the United States: a shared political culture. When most people read the same newspaper or watch the same network television news broadcast each day, they are forced at least to glance at stories they might find troubling and become aware of persons and groups who hold views sharply different from their own. The Internet makes it easy for people to avoid such engagement -- by enabling them to select their sources of information and their conversational partners. The corrosion of community and shared culture deeply worries observers such as Cass Sunstein of the University of Chicago.

An excellent summary of the literature on this issue can be found in a New York Times article by Alexander Stille. If you are interested in digging further into these issues, we recommend the following materials:

·        Cass Sunstein, Republic.com (Princeton Univ. Press 2001)

·        Neil W. Netanel, Is the Commercial Mass Media Necessary, or Even Desirable, for Liberal Democracy? (2001)

To test some of these competing accounts of the character and potential of discourse on the Internet, we suggest you visit - or, better yet, participate in - some of the sites at which Internet discourse occurs. Here's a sampler of some of the more mainstream forums for online discussion: MSNBC Political News Discussion Board and The New York Times Politics Forums. In contrast, take a look at a site that caters to a more politically and ideologically specific group of online speakers: Free Republic. Try also looking at commentary sites like Kuro5hin, or the recent political debate links on a collaborative volunteer-created encyclopedia like Wikipedia.

A very different model for the political impact of new technologies is represented by e-government initiatives like DemocracyNet and Senators Lieberman and Thompson’s E-Government Project. DemocracyNet’s election information and the Senators’ interactive forum on topics relating to government information and services seek to contribute to democracy (narrowly understood) by harnessing the net’s ability to directly inform and empower citizens.

How closely do both the discussion forums and the e-government initiatives correspond to your understanding of democracy? How successful do you find each in promoting a diverse and robust public debate?  To what extent could they be said to facilitate an informed debate?


 


Current Controversies

1. Restrictions on Pornography

Three times in the past five years, critics of pornography on the Internet have sought, through federal legislation, to prevent children from gaining access to it. The first of these efforts was the Communications Decency Act of 1996 (commonly known as the "CDA"), which (a) criminalized the "knowing" transmission over the Internet of "obscene or indecent" messages to any recipient under 18 years of age and (b) prohibited the "knowin[g]" sending or displaying to a person under 18 of any message "that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs." Persons and organizations who took "good faith, . . . effective . . . actions" to restrict access by minors to the prohibited communications, or who restricted such access by requiring certain designated forms of age proof, such as a verified credit card or an adult identification number, were exempted from these prohibitions.

The CDA was widely criticized by civil libertarians and soon succumbed to a constitutional challenge. In 1997, the United States Supreme Court struck down the statute, holding that it violated the First Amendment in several ways:

·        because it restricted speech on the basis of its content, it could not be justified as a "time, place, and manner" regulation;

·        its references to "indecent" and "patently offensive" messages were unconstitutionally vague;

·        its supposed objectives could all be achieved through regulations less restrictive of speech;

·        it failed to exempt from its prohibitions sexually explicit material with scientific, educational, or other redeeming social value.

Two aspects of the Court's ruling are likely to have considerable impact on future constitutional decisions in this area. First, the Court rejected the Government's effort to analogize the Internet to traditional broadcast media (especially television), which the Court had previously held could be regulated more strictly than other media. Unlike TV, the Court reasoned, the Internet has not historically been subject to extensive regulation, is not characterized by a limited spectrum of available frequencies, and is not "invasive." Consequently, the Internet enjoys full First-Amendment protection. Second, the Court encouraged the development of technologies that would enable parents to block their children's access to Internet sites offering kinds of material the parents deemed offensive.

A year later, pressured by vocal opponents of Internet pornography -- such as "Enough is Enough" and the National Law Center for Children and Families -- Congress tried again. The 1998 Child Online Protection Act (COPA) obliged commercial Web operators to restrict access to material considered "harmful to minors" -- which was, in turn, defined as any communication, picture, image, graphic image file, article, recording, writing or other matter of any kind that is obscene or that meets three requirements:

(1) "The average person, applying contemporary community standards, would find, taking the material as a whole and with respect to minors, is designed to appeal to, or is designed to pander to, the prurient interest."
(2) The material "depicts, describes, or represents, in a manner patently offensive with respect to minors, an actual or simulated sexual act or sexual conduct, an actual or simulated normal or perverted sexual act or a lewd exhibition of the genitals or post-pubescent female breast."
(3) The material, "taken as a whole, lacks serious literary, artistic, political, or scientific value for minors."

Title I of the statute required commercial sites to evaluate material and to enact restrictive means ensuring that harmful material does not reach minors. Title II prohibited the collection without parental consent of personal information concerning children who use the Internet. Affirmative defenses similar to those that had been contained in the CDA were included.

Once again, civil libertarians and online publishers challenged the statute on the grounds that it was unduly burdensome and would excessively chill online speech. The case -- Ashcroft v. ACLU -- was decided by the Supreme Court in May 2002. Both of the lower courts found that Congress had exceeded its constitutional authority. In the judgment of the Third Circuit Court of Appeals, the critical defect of COPA was its reliance upon the criterion of "contemporary community standards" to determine what kinds of speech are permitted on the Internet:

Because material posted on the Web is accessible by all Internet users worldwide, and because current technology does not permit a Web publisher to restrict access to its site based on the geographic locale of each particular Internet user, COPA essentially requires that every Web publisher subject to the statute abide by the most restrictive and conservative state's community standard in order to avoid criminal liability.

The net result was to impose burdens on permissible expression more severe than can be tolerated by the Constitution. The court acknowledged that its ruling did not leave much room for constitutionally valid restrictions on Internet pornography:

We are forced to recognize that, at present, due to technological limitations, there may be no other means by which harmful material on the Web may be constitutionally restricted, although, in light of rapidly developing technological advances, what may now be impossible to regulate constitutionally may, in the not-too-distant future, become feasible.

The Supreme Court has now vacated this decision, but its opinion leaves unsettled whether COPA will, eventually, be upheld or invalidated.

One of the arguments made by those who had opposed the CDA was that there was no need for such expansive regulation, because technology provided filters that would allow parents to control what their children would see. The children would thus be protected, but adults could access whatever they wanted to see. Few voices were raised at the time expressing concern that the censorial effects of filters could be, in many cases, as bad as some form of direct regulation. Here is Lessig's argument. In late 2000, however, the censorial possibilities became all too clear, as the anti-pornography forces tried once more. At their urging, Congress adopted the Children's Internet Protection Act (CIPA), which requires schools and libraries that receive federal funding (either grants or "e-rate" subsidies) to install Internet filtering equipment on library computers that can be used by children. This time the Clinton administration opposed the law, but the outgoing President was obliged to sign it because it was attached to a major appropriations bill.

 

Opposition to CIPA was swift. Opponents claimed that it suffers from all the constitutional infirmities of the CDA and COPA. In addition, it would reinforce one form of the "digital divide" -- by subjecting poor children, who lack home computers and must rely upon public libraries for access to the Internet, to restrictions that wealthier children can avoid. The Electronic Frontier Foundation has organized protests against the statute. In May of 2002 a special panel of the federal court in the Eastern District of Pennsylvania indeed invalidated the law in American Library Association v. U.S.

The court's findings of fact explained why:

There are many reasons why filtering software suffers from extensive over- and underblocking. They center on the limitations on filtering companies' ability to: (1) accurately collect Web pages that potentially fall into a blocked category (e.g., pornography); (2) review and categorize Web pages that they have collected; and (3) engage in regular re-review of Web pages that they have previously reviewed. These failures spring from constraints on the technology of automated classification systems, and the limitations inherent in human review, including error, misjudgment, and scarce resources. One failure of critical importance is that the automated systems that filtering companies use to collect Web pages for classification are able to search only text, not images. This is crippling to filtering companies' ability to collect pages containing "visual depictions" that are obscene, child pornography, or harmful to minors, as CIPA requires. We find that it is currently impossible, given the Internet's size, rate of growth, rate of change, and architecture, and given the state of the art of automated classification systems, to develop a filter that neither underblocks nor overblocks a substantial amount of speech.
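To make the over- and underblocking problem concrete, here is a minimal, purely illustrative sketch (not any vendor's actual product) of a keyword-based text filter of the sort the court describes. The blocklist, sample pages, and function names are invented for illustration; the point is simply that a filter which can read only text will block protected speech that happens to use a flagged word, while missing explicit material conveyed through images.

# A minimal sketch, assuming a hypothetical keyword blocklist; real filtering
# products are far more elaborate, but the text-only limitation is the same.

BLOCKED_KEYWORDS = {"porn", "xxx", "breast", "escort"}  # hypothetical blocklist

def is_blocked(page_text: str) -> bool:
    """Block a page if any blocklisted keyword appears in its visible text."""
    words = page_text.lower().split()
    return any(keyword in words for keyword in BLOCKED_KEYWORDS)

pages = {
    # Overblocking: protected health speech tripped up by a flagged word.
    "breast-cancer support group": "Early detection of breast cancer saves lives.",
    # Underblocking: explicit images surrounded by innocuous text are
    # invisible to a crawler that can read only text, not images.
    "image-only adult gallery": "Click a thumbnail below to view the gallery.",
}

for title, text in pages.items():
    print(f"{title}: {'BLOCKED' if is_blocked(text) else 'allowed'}")

Run as written, the sketch blocks the health page and allows the gallery page -- the combination of over- and underblocking that the court found unavoidable with current technology.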

The court treated public libraries that provide Internet access as public fora, from which the government cannot exclude speech except if its decision passes strict scrutiny. It reasoned:

In providing even filtered Internet access, public libraries create a public forum open to any speaker around the world to communicate with library patrons via the Internet on a virtually unlimited number of topics. Where the state provides access to a "vast democratic forum[]," Reno v. ACLU, 521 U.S. 844, 868 (1997), open to any member of the public to speak on subjects "as diverse as human thought," id. at 870, the state's decision selectively to exclude from the forum speech whose content the state disfavors is subject to strict scrutiny, as such exclusions risk distorting the marketplace of ideas that the state has facilitated. Application of strict scrutiny finds further support in the extent to which public libraries' provision of Internet access uniquely promotes First Amendment values in a manner analogous to traditional public fora such as streets, sidewalks, and parks, in which content-based restrictions are always subject to strict scrutiny.

Because the filtering software mandated by CIPA will block access to substantial amounts of constitutionally protected speech whose suppression serves no legitimate government interest, we are persuaded that a public library's use of software filters is not narrowly tailored to further any of these interests. Moreover, less restrictive alternatives exist that further the government's legitimate interest in preventing the dissemination of obscenity, child pornography, and material harmful to minors, and in preventing patrons from being unwillingly exposed to patently offensive, sexually explicit content. To prevent patrons from accessing visual depictions that are obscene and child pornography, public libraries may enforce Internet use policies that make clear to patrons that the library's Internet terminals may not be used to access illegal speech. Libraries may then impose penalties on patrons who violate these policies, ranging from a warning to notification of law enforcement, in the appropriate case. Less restrictive alternatives to filtering that further libraries' interest in preventing minors from exposure to visual depictions that are harmful to minors include requiring parental consent to or presence during unfiltered access, or restricting minors' unfiltered access to terminals within view of library staff. Finally, optional filtering, privacy screens, recessed monitors, and placement of unfiltered Internet terminals outside of sight-lines provide less restrictive alternatives for libraries to prevent patrons from being unwillingly exposed to sexually explicit content on the Internet.

In an opinion joined by a plurality of the justices, Justice Thomas emphasized the fact that COPA, unlike the CDA, included in its definition of prohibited materials the need to show their prurience and their lack of artistic, scientific, or political merit. These two factors narrowed, in his view, the class of materials sufficiently and gave appellate courts a toehold to impose objective constraints on local community judgments. This, in turn, signified to Justice Thomas that the use of "community standards" and its variation across communities that would inevitably have access to the materials does not, in and of itself, render the COPA unconstitutional. "When the scope of an obscenity statute's coverage is sufficiently narrowed by a 'serious value' prong and a 'prurient interest' prong," wrote Justice Thomas, "we have held that requiring a speaker disseminating material to a national audience to observe varying community standards does not violate the First Amendment."

Perhaps the most important and controversial passage in this opinion is one in which Justice Thomas suggests that, if the Internet reaches too easily into many communities that have strict community values, then the recourse of publishers is to restrict themselves to other media--presumably magazines or cable channels--where it is possible to control in which communities the potentially offensive materials will be received:

"If a publisher chooses to send its material into a particular community, this Court's jurisprudence teaches that it is the publisher's responsibility to abide by that community's standards. The publisher's burden does not change simply because it decides to distribute its material to every community in the Nation. Nor does it change because the publisher may wish to speak only to those in a 'community where avant garde culture is the norm,' but nonetheless utilizes a medium that transmits its speech from coast to coast. If a publisher wishes for its material to be judged only by the standards of particular communities, then it need only take the simple step of utilizing a medium that enables it to target the release of its material into those communities."

This position was rejected, however, by six out of the nine justices. Justices O'Connor and Breyer each filed an opinion that, in some measure, sought to give Web-based publishers certainty as to the standards they would have to comply with, by suggesting a national community standard based on what American adults on average think is appropriate for minors.

Justice Kennedy filed an opinion joined by Justices Souter and Ginsburg, rejecting even the notion of a national standard, arguing that of necessity juries in different communities would have very different views of what the average American adult would think is appropriate for children. In other words, every juror would think that his or her views are the majority views (or at least ought to be), and the result would, in effect, be a resort to the most stringent community values. These justices joined the judgment of the Court only because they believed that whether the additional aspects of the Act substantially reduce variability and uncertainty as to the application of the COPA's prohibitions, as Justice Thomas speculated they did, was a fact-intensive question. They wanted the court below to find facts that would actually speak to this question. But in the very opening paragraph these justices stated that "There is a very real likelihood that the COPA is overbroad and cannot survive such a challenge."

Justice Stevens dissented, arguing that the supposed saving grace of the COPA's additional definitional elements for covered materials (lack of value and prurience) at most saved many pages from being prohibited. This changed nothing, however, because the correct inquiry was whether many web pages that should not be subject to prohibition nonetheless would be, because of the inevitable variability of community standards. Concluding that many would, Justice Stevens would have affirmed the Third Circuit's holding.

In addition to the efforts to protect children from viewing adult pornography, there has also been throughout this period a broad crackdown on child pornography on the Internet. In 1996, Congress passed a statute called the Child Pornography Prevention Act. That Act relied on the understanding that child pornography may be regulated to a greater extent than adult pornography, even when it does not involve obscenity. In a case called New York v. Ferber, the Supreme Court found that prohibiting child pornography was consistent with the First Amendment even when it was not obscene, basing its decision on the fact that the prohibition was intended to protect the children used to make the material and that that interest justified the prohibition. The Child Pornography Prevention Act tried to extend this permissive rule, and included among the things it prohibited virtual child pornography -- materials that look like they show child pornography, but in fact use either adult models or computer-generated images. In April of 2002, in a decision called Ashcroft v. Free Speech Coalition, the Supreme Court refused to permit Congress to prohibit virtual child pornography. If the materials were not obscene, and children were not actually used, the Court held, then Congress could not ban the materials consistent with the First Amendment.

In a provocative recent article, The Perverse Law of Child Pornography, 101 Colum. L. Rev. 209 (2001), Amy Adler argues that the effort to curb child pornography online -- the kind of pornography that disgusts the most people -- is fundamentally misguided. Far from reducing the incidence of the sexual abuse of children, governmental efforts to curtail child pornography only increase it. A summary of her argument is available here. The full article is available here.

 

The CDA, COPA, and CIPA have one thing in common: they all involve overt governmental action -- and thus are subject to challenge under the First Amendment. Some observers of the Internet argue that more dangerous than these obvious legislative initiatives are the efforts by private Internet Service Providers to install filters on their systems that screen out kinds of content that the ISPs believe their subscribers would find offensive. Because policies of this sort are neither mandated nor encouraged by the government, they would not, under conventional constitutional principles, constitute "state action" -- and thus would not be vulnerable to constitutional scrutiny. Such a result, argues Larry Lessig, would be pernicious; to avoid it, we need to revise our understanding of the "state action" doctrine. Charles Fried disagrees:

Note first of all that the state action doctrine does not only limit the power of courts to protect persons from private power that interferes with public freedoms. It also protects individuals from the courts themselves, which are, after all, another government agency. By limiting the First Amendment to protecting citizens from government (and not from each other), the state action doctrine enlarges the sphere of unregulated discretion that individuals may exercise in what they think and say. In the name of First Amendment "values," courts could perhaps inquire whether I must grant access to my newspaper to opinions I abhor, must allow persons whose moral standards I deplore to join my expressive association, or must remain silent so that someone else gets a chance to reach my audience with a less appealing but unfamiliar message. Such inquiries, however, would place courts in the business of deciding which opinions I would have to publish in my newspaper and which would so distort my message that putting those words in my mouth would violate my freedom of speech; what an organization's associational message really is and whether forcing the organization to accept a dissenting member would distort that message; and which opinions, though unable to attract an audience on their own, are so worthy that they must not be drowned out by more popular messages. I am not convinced that whatever changes the Internet has wrought in our environment require the courts to mount this particular tiger.

"Perfect Freedom or Perfect Control," 114 Harvard Law Review 606, 635 (2000).

The United States may have led the way in seeking (unsuccessfully, thus far) to restrict the flow of pornography on the Internet, but the governments of other countries are now joining the fray. For the status of the struggle in a few jurisdictions, you might read:

·        Joseph C. Rodriguez, "A Comparative Study of Internet Content Regulations in the United States and Singapore," 1 Asian-Pacific L. & Pol'y J. 9 (February 2000). (Singapore)

·        Mark Konkel, "Internet Indecency, International Censorship, and Service Providers' Liability," 19 N.Y.L. Sch. J. Int'l & Comp. L. 453 (2000). (Canada, Malaysia, and China)

2. Threats

When does speech become a threat? Put more precisely, when does a communication over the Internet inflict -- or threaten to inflict -- sufficient damage on its recipient that it ceases to be protected by the First Amendment and properly gives rise to criminal sanctions? Two well-known cases addressed that issue from different angles.

The first was popularly known as the "Jake Baker" case. In 1994 and 1995, Abraham Jacob Alkhabaz, also known as Jake Baker, was an undergraduate student at the University of Michigan. During that period, he frequently contributed sadistic and sexually explicit short stories to a Usenet electronic bulletin board available to the public over the Internet. In one such story, he described in detail how he and a companion tortured, sexually abused, and killed a young woman, who was given the name of one of Baker's classmates. (Excerpts from the story, as reprinted in the Court of Appeals decision in the case, are available here. WARNING: This material is very graphic in nature and may be troubling to some readers. It is presented in order to provide a complete view of the facts of the case.) Baker's stories came to the attention of another Internet user, who assumed the name of Arthur Gonda. Baker and Gonda then exchanged many email messages, sharing their sadistic fantasies and discussing the methods by which they might kidnap and torture a woman in Baker's dormitory. When these stories and email exchanges came to light, Baker was indicted for violation of 18 U.S.C. 875(c), which provides:

Whoever transmits in interstate or foreign commerce any communication containing any threat to kidnap any person or any threat to injure the person of another, shall be fined under this title or imprisoned not more than five years, or both.

Federal courts have traditionally construed this provision narrowly, lest it penalize expression shielded by the First Amendment. Specifically, the courts have required that a defendant's statement, in order to trigger criminal sanctions, constitute a "true threat" -- as distinguished from, for example, inadvertent statements, hyperbole, innocuous talk, or political commentary. Baker moved to quash the indictment on the ground that his statements on the Internet did not constitute "true threats." The District Court agreed, ruling that the class of women supposedly threatened was not identified in Baker's exchanges with Gonda with the degree of specificity required by the First Amendment and that, although Baker had expressed offensive desires, "it was not constitutionally permissible to infer an intention to act on a desire from a simple expression of desire." The District Judge's concluding remarks concerning the character of threatening speech on the Internet bear emphasis:

Baker's words were transmitted by means of the Internet, a relatively new communications medium that is itself currently the subject of much media attention. The Internet makes it possible with unprecedented ease to achieve world-wide distribution of material, like Baker's story, posted to its public areas. When used in such a fashion, the Internet may be likened to a newspaper with unlimited distribution and no locatable printing press - and with no supervising editorial control. But Baker's e-mail messages, on which the superseding indictment is based, were not publicly published but privately sent to Gonda. While new technology such as the Internet may complicate analysis and may sometimes require new or modified laws, it does not in this instance qualitatively change the analysis under the statute or under the First Amendment. Whatever Baker's faults, and he is to be faulted, he did not violate 18 U.S.C. § 875(c).

Two of the three judges on the panel that heard the appeal agreed. In their view, a violation of 875(c) requires a demonstration, first, that a reasonable person would interpret the communication in question as a serious expression of an intention to inflict bodily harm and, second, that a reasonable person would perceive the communication as being conveyed "to effect some change or achieve some goal through intimidation." Baker's speech failed, in their judgment, to rise to this level.

Judge Krupansky, the third member of the panel, dissented. In a sharply worded opinion, he denounced the majority for compelling the prosecution to meet a standard higher than Congress intended or than the First Amendment required. In his view, "the pertinent inquiry is whether a jury could find that a reasonable recipient of the communication would objectively tend to believe that the speaker was serious about his stated intention." A reasonable jury, he argued, could conclude that Baker's speech met this standard -- especially in light of the fact that the woman named in the short story had, upon learning of it, experienced a "shattering traumatic reaction that resulted in recommended psychological counseling."

For additional information on the case, see Adam S. Miller, The Jake Baker Scandal: A Perversion of Logic.

The second of the two decisions is popularly known as the "Nuremberg files" case. In 1995, the American Coalition of Life Activists (ACLA), an anti-abortion group that advocates the use of force in its efforts to curtail abortions, created a poster featuring what the ACLA described as the "Dirty Dozen," a group of doctors who performed abortions. The posters offered "a $ 5,000 [r]eward for information leading to arrest, conviction and revocation of license to practice medicine" of the doctors in question, and listed their home addresses and, in some instances, their phone numbers. Versions of the poster were distributed at anti-abortion rallies and later on television. In 1996, an expanded list of abortion providers, now dubbed the "Nuremberg files," was posted on the Internet with the assistance of an anti-abortion activist named Neil Horsley. The Internet version of the list designated doctors and clinic workers who had been attacked by anti-abortion terrorists in two ways: the names of people who had been murdered were crossed out; the names of people who had been wounded were printed in grey. (For a version of the Nuremberg Files web site, click here. WARNING: This material is very graphic in nature and may be disturbing to many readers. It is presented in order to provide a complete view of the facts of the case).

The doctors named and described on the list feared for their lives. In particular, some testified that they feared that, by publicizing their addresses and descriptions, the ACLA had increased the ease with which terrorists could locate and attack them -- and that, by publicizing the names of doctors who had already been killed, the ACLA was encouraging those attacks.

Some of the doctors sought recourse in the courts. They sued the ACLA, twelve individual anti-abortion activists and an affiliated organization, contending that their actions violated the federal Freedom of Access to Clinic Entrances Act of 1994 (FACE), 18 U.S.C. §248, and the Racketeer Influenced and Corrupt Organizations Act (RICO), 18 U.S.C. §1962. In an effort to avoid a First-Amendment challenge to the suit, the trial judge instructed the jury that defendants could be liable only if their statements were "true threats." The jury, concluding that the ACLA had indeed made such true threats, awarded the plaintiffs $107 million in actual and punitive damages. The trial court then enjoined the defendants from making or distributing the posters, the webpage or anything similar.

This past March, a panel of the Court of Appeals for the Ninth Circuit overturned the verdict, ruling that it violated the First Amendment. Judge Kozinski began his opinion by likening the anti-abortion movement to other "political movements in American history," such as the Patriots in the American Revolution, abolitionism, the labor movement, the anti-war movement in the 1960s, the animal-rights movement, and the environmental movement. All, he argued, have had their "violent fringes," which have lent to the language of their non-violent members "a tinge of menace." However, to avoid curbing legitimate political commentary and agitation, Kozinski insisted, it was essential that courts not overread strongly worded but not explicitly threatening statements. Specifically, he held that:

Defendants can only be held liable if they "authorized, ratified, or directly threatened" violence. If defendants threatened to commit violent acts, by working alone or with others, then their statements could properly support the verdict. But if their statements merely encouraged unrelated terrorists, then their words are protected by the First Amendment.

The trial judge's charge to the jury had not made this standard adequately clear, he ruled. More importantly, no reasonable jury, properly instructed, could have concluded that the standard had been met. Accordingly, the trial judge was instructed to dissolve the injunction and enter judgment for the defendants on all counts.

In the course of his opinion, Kozinski offered the following reflections on the fact that the defendants' speech had occurred in public discourse -- including the Internet:

In considering whether context could import a violent meaning to ACLA's non-violent statements, we deem it highly significant that all the statements were made in the context of public discourse, not in direct personal communications. Although the First Amendment does not protect all forms of public speech, such as statements inciting violence or an imminent panic, the public nature of the speech bears heavily upon whether it could be interpreted as a threat. As we held in McCalden v. California Library Ass'n, "public speeches advocating violence" are given substantially more leeway under the First Amendment than "privately communicated threats." There are two reasons for this distinction: First, what may be hyperbole in a public speech may be understood (and intended) as a threat if communicated directly to the person threatened, whether face-to-face, by telephone or by letter. In targeting the recipient personally, the speaker leaves no doubt that he is sending the recipient a message of some sort. In contrast, typical political statements at rallies or through the media are far more diffuse in their focus because they are generally intended, at least in part, to shore up political support for the speaker's position. Second, and more importantly, speech made through the normal channels of group communication, and concerning matters of public policy, is given the maximum level of protection by the Free Speech Clause because it lies at the core of the First Amendment.

 

Over a year later, however, in May 2002, the full Court of Appeals for the Ninth Circuit, sitting en banc, vacated the panel decision and reinstated the trial court's determination. The Court of Appeals was very closely divided, with six judges favoring a finding that the Nuremberg Files site did not merit protection, and five judges holding that it did. The majority defined a threat as follows:

[A] threat is an expression of an intention to inflict evil, injury, or damage on another. Alleged threats should be considered in light of their entire factual context, including the surrounding events and reaction of the listeners. [Moreover,] the fact that a threat is subtle does not make it less of a threat. A true threat, that is one where a reasonable person would foresee that the listener will believe he will be subjected to physical violence upon his person, is unprotected by the First Amendment. It is not necessary that the defendant intend to, or be able to carry out his threat; the only intent requirement for a true threat is that the defendant intentionally or knowingly communicate the threat.
Both dissents would change the test, either to require that the speaker actually intend to carry out the threat or be in control of those who will, or to make it inapplicable when the speech is public rather than private. However, for years our test has focused on what a reasonable speaker would foresee the listener’s reaction to be under the circumstances, and that is where we believe it should remain. Threats are outside the First Amendment to protect individuals from the fear of violence, from the disruption that fear engenders, and from the possibility that the threatened violence will occur. This purpose is not served by hinging constitutionality on the speaker’s subjective intent or capacity to do (or not to do) harm. Rather, these factors go to how reasonably foreseeable it is to a speaker that the listener will seriously take his communication as an intent to inflict bodily harm.
Neither do we agree that threatening speech made in public is entitled to heightened constitutional protection just because it is communicated publicly rather than privately. Threats are unprotected by the First Amendment however communicated.
Therefore, we hold that a "threat of force" is a statement which, in the entire context and under all the circumstances, a reasonable person would foresee would be interpreted by those to whom the statement is communicated as a serious expression of intent to inflict bodily harm upon that person. So defined, a threatening statement is unprotected under the First Amendment.
* * *
Because of context, we conclude that the Crist and Deadly Dozen posters are not just a political statement. Even if the Gunn poster, which was the first "WANTED" poster, was a purely political message when originally issued, and even if the Britton poster were too, by the time of the Crist poster, the poster format itself had acquired currency as a death threat for abortion providers. Gunn was killed after his poster was released; Britton was killed after his poster was released; and Patterson was killed after his poster was released. Knowing this, and knowing the fear generated among those in the reproductive health services community who were singled out for identification on "wanted"-type posters, ACLA deliberately identified Crist on a "GUILTY" poster and intentionally put the names of Hern and the Newhalls on the Deadly Dozen "GUILTY" poster to intimidate them. This goes well beyond the political message (regardless of what one thinks of it) that abortionists are killers who deserve death too.
The Nuremberg Files are somewhat different. Although they name individuals, they name hundreds of them. The avowed intent is "collecting dossiers on abortionists in anticipation that one day we may be able to hold them on trial for crimes against humanity." The web page states: "One of the great tragedies of the Nuremberg trials of Nazis after WWII was that complete information and documented evidence had not been collected so many war criminals went free or were only found guilty of minor crimes. We do not want the same thing to happen when the day comes to charge abortionists with their crimes. We anticipate the day when these people will be charged in PERFECTLY LEGAL COURTS once the tide of this nation’s opinion turns against child-killing (as it surely will)." However offensive or disturbing this might be to those listed in the Files, being offensive and provocative is protected under the First Amendment. But, in two critical respects, the Files go further. In addition to listing judges, politicians and law enforcement personnel, the Files separately categorize "Abortionists" and list the names of individuals who provide abortion services, including, specifically, Crist, Hern, and both Newhalls. Also, names of abortion providers who have been murdered because of their activities are lined through in black, while names of those who have been wounded are highlighted in grey. As a result, we cannot say that it is clear as a matter of law that listing Crist, Hern, and the Newhalls on both the Nuremberg Files and the GUILTY posters is purely protected, political expression. Accordingly, whether the Crist poster, the Deadly Dozen poster, and the identification of Crist, Hern, Dr. Elizabeth Newhall and Dr. James Newhall in the Nuremberg Files as well as on "wanted"-type posters, constituted true threats was properly for the jury to decide.

 

3. The Problem of Hate Speech

The net has an equalizing effect on speech for all parties; those aspects of new media that are potentially democratizing at the same time facilitate the ability to spread messages of racism and hate. Unsurprisingly, the desire to regulate hate speech is another key theme in the ongoing conflict between freedom and control. The problem of hate speech has taken on significant international dimensions because of the divergence between the United States and most other countries in their treatment of hate speech generally, and Nazism-related communications in particular. Therefore, before addressing the difficulties of regulating hate speech on the Internet, it is important to understand the divergent approaches to messages of intolerance taken by the US and the global community.

 

In the United States, government efforts to regulate "hate speech" – namely, speech attacking racial minorities, women, homosexuals, or other traditionally disfavored groups – are likely to run afoul of the First Amendment for being content-based. While the Supreme Court has held statutes constitutional that augment penalties for crimes inspired by hate, it has held unconstitutional laws penalizing the use of hate speech against historically persecuted groups. As in the context of threats, an important qualification is that if such speech calls for immediate violent action and may cause an “imminent threat of harm,” as opposed to advocacy or encouragement, then it is unlawful. That said, if government tries narrowly to regulate "fighting words" against certain groups, the Supreme Court has held that such a law runs afoul of the First Amendment under a content-based strict scrutiny standard. This is the holding of the seminal hate speech case, R.A.V. v. City of St. Paul.

 

In R.A.V., the Supreme Court overturned a St. Paul, Minnesota statute that made it a crime to "place on public or private property a symbol, characterization or graffiti [including a burning cross or Nazi swastika] which one knows or has reasonable grounds to know arouses anger, alarm or resentment in others on the basis of race, color, creed, religion or gender...." The defendant was prosecuted for having burned a homemade cross inside the fenced yard of a black family, in the middle of the night. The Court held that the defendant could not be convicted, because the ordinance on its face violates the First Amendment. Even though the ordinance had been construed by the state court to reach only "fighting words," the Court reasoned that the city could not choose which fighting words to prohibit based on their content. Thus, the city could ban all fighting words, but not just those motivated by, say, racial or religious bias.

 

The American approach to hate speech is substantially more permissive than that of most other countries. The following countries have passed laws that specifically penalize hate propaganda: Austria, Belgium, Brazil, Canada, England, France, Germany, India, Israel, and Switzerland. The German approach is emblematic. While the German constitution protects free speech, its language, unlike that of the US Constitution, is not absolute. It subjects speech to “limitations embodied in the provisions of general legislation, statutory provisions for the protection of youth, and the citizen’s right to personal respect.” Thus, under German law, it is a criminal offense to publicly distribute or supply any “writings that incite race hatred or describe cruel or otherwise inhuman acts of violence against humans.”

 

The problem of hate speech on a global medium like the Internet is thus exacerbated by the conflicting approaches taken by different legal regimes. The strength of the First Amendment and the permissiveness of the U.S. approach to hate speech have led to criticism and concern that the U.S. has become a haven for hate speech around the globe. The controversial case of Yahoo! v. LICRA is instructive in this regard.

 

In LICRA and UEJF v. Yahoo! Inc, Yahoo! was sued in France for violating a French law that prohibits the exhibition of Nazi memorabilia. The French court ordered Yahoo! to limit the display of Nazi memorabilia and images on Yahoo!-hosted auction sites in the U.S. (Click here for a translation of the Superior Court of Paris’ opinion in the case and here for the opinion in French.) Such displays, while illegal in France, are protected by the First Amendment in the United States. The French court reasoned that because the offensive materials were accessible in France – and hence caused harm to French citizens – the court had the power to penalize Yahoo! in the U.S. for non-compliance with its domestic hate speech statutes.

 

Because enforcement of the French order would require action against Yahoo! and its assets in the United States, Yahoo! in turn sought relief from the order in the form of a declaratory judgment making the order unenforceable stateside. Before a California district court, Yahoo! sought a declaration that the French court had no jurisdiction over Yahoo's U.S.-based operations, and that the French court's order violates rights guaranteed by the U.S. Constitution. Yahoo! argued that only a U.S. court has jurisdiction to determine if the French order is enforceable in the United States. The California court agreed, and on First Amendment grounds declared the French order unenforceable in the U.S. See the California District Court Order Granting Yahoo!’s Motion for Summary Judgment. The court concluded that:

           

The French order’s content and viewpoint-based regulation, while entitled to great deference as an articulation of French law, clearly would be inconsistent with the First Amendment if mandated by a court in the United States. What makes this case uniquely challenging is that the Internet in effect allows one to speak in more than one place at a time. Although France has the sovereign right to regulate what speech is permissible in France, this Court may not enforce a foreign order that violates the protections of the United States Constitution by chilling protected speech that occurs simultaneously within our borders.

 

The Yahoo! case demonstrates the kinds of conflict between the First Amendment and the normative expressive regimes of other nations that are likely to recur in the international context. While the decision drew criticism abroad, free speech advocates and cyber-libertarians applauded the California court's judgment as an important precedent for online speech worldwide and for the openness of the Internet.

 

The decision in Yahoo! is important for another reason: the French court’s focus on Yahoo’s ability to discern where its content was received. Based on the testimony of expert witnesses, the French court concluded that Yahoo! had the technological capacity to know with 90% accuracy where it was distributing its online content. In the Third Circuit decision in COPA discussed above, we saw the court imagining the next stage of online regulation: a net that is zoned so as to permit prohibitions of content based on community norms. In effect, the French court in Yahoo! sought to actualize that vision. Whereas geographic-location technologies may not be readily available and cost-efficient at present, an increasingly important question for net regulation in the future will be the ability to control not merely types of content but the flow of information. For a general discussion on the development of geographic location technologies and the consequences for online regulation take a look at this article on “The Internet’s New Borders.”
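For readers curious about how such geographic gating works mechanically, here is a minimal, purely illustrative sketch of IP-based geolocation of the kind the French court assumed Yahoo! could deploy. The network prefixes, country assignments, and policy function below are invented for illustration (real services rely on large, frequently updated, and inherently probabilistic databases); the sketch only shows the basic mechanism of mapping a requester's IP address to a likely country and then deciding whether to serve the contested content in that jurisdiction.

# Illustrative sketch only: a toy IP-to-country lookup and serving policy.
# The prefixes and country assignments are invented for illustration.

import ipaddress

# Hypothetical mapping of network prefixes to countries; real geolocation
# databases contain millions of entries and are only probabilistically correct.
GEO_TABLE = {
    ipaddress.ip_network("192.0.2.0/24"): "FR",
    ipaddress.ip_network("198.51.100.0/24"): "US",
}

RESTRICTED = {"FR"}  # jurisdictions where the contested listings may not be shown

def country_of(ip: str) -> str:
    """Return the (assumed) country for an IP address, or UNKNOWN if unmapped."""
    addr = ipaddress.ip_address(ip)
    for network, country in GEO_TABLE.items():
        if addr in network:
            return country
    return "UNKNOWN"

def may_serve(ip: str) -> bool:
    """Serve the listing unless the requester appears to be in a restricted jurisdiction."""
    return country_of(ip) not in RESTRICTED

for ip in ("192.0.2.17", "198.51.100.5", "203.0.113.9"):
    print(ip, country_of(ip), "serve" if may_serve(ip) else "withhold")

Even in this toy form, the hard questions are visible: unmapped addresses, proxies, and mobile users make the lookup uncertain, which is why the expert testimony in the Yahoo! case spoke of roughly 90% accuracy rather than certainty.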

 

4. Intellectual Property

 

The First Amendment forbids Congress to make any law "abridging the freedom of speech." Nonetheless, the following stories, and many like them, involve the lawful abridgment of free speech under the banner of intellectual property.

o       Dennis Erlich, a member of the Church of Scientology for fourteen years, became a vocal critic of Scientology. As part of his campaign, Erlich had posted to an Internet newsgroup documents containing the Scientologists’ religious teachings, interspersed with criticism. The Church of Scientology sued for copyright infringement. The court issued a temporary restraining order and a seizure order. This is what followed, as described by the court itself:

On February 13, 1995, in execution of the writ of seizure, local police officers entered Erlich's home to conduct the seizure. The officers were accompanied by several [Scientology] representatives, who aided in the search and seizure of documents related to Erlich's alleged copyright infringement and misappropriation of trade secrets. Erlich alleges that [Scientology] officials in fact directed the seizure, which took approximately seven hours. Erlich alleges that plaintiffs seized books, working papers, and personal papers. After locating Erlich's computers, plaintiffs allegedly seized computer disks and copied portions of Erlich's hard disk drive onto floppy disks and then erased the originals from the hard drive.

o       Free Republic includes a forum where right-wing conservatives share news clippings and exchange opinions online. Users who read articles they think deserve comment cut and paste them onto the forum. They then post a comment, and other users participate in a threaded discussion of the article. The Washington Post and the L.A. Times decided that public discourse may be a good thing, but not when it is evoked using their stories. So they brought a copyright action to prevent the users of Free Republic from posting the papers’ stories to their political forum. In other words, two large newspapers successfully asked the government to limit the operation of a discussion group where people share clippings of their news stories and engage in political debate over them.

o       The San Francisco Arts and Athletics Association wanted to run an athletic event to celebrate gay athletes, and wanted to call the event "Gay Olympics," but was prohibited from doing so. Congress gave the US Olympic Committee the exclusive right to control the usage of the word "Olympic"; the Supreme Court reasoned that the legislature could reasonably have believed that the positive connotations of the word come from the USOC's efforts. When the USOC stopped the holders of the Gay Olympic Games from using the term, the Supreme Court held that this did not violate the First Amendment.

                                 

How can these prohibitions on specific expressions, which would be both less effective and less meaningful absent the words or phrases in question, be squared with the Constitution's command? Does this imply that the copyright statute as a whole – or, less radically, some specific applications of it – should be deemed unconstitutional?

Courts have generally found the problem less troubling than one might think. One fairly simple justification has been that Article I, Section 8, Clause 8 of the Constitution explicitly authorizes Congress "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries," and there is no indication that the drafters or ratifiers of the First Amendment intended to nullify this express grant of lawmaking power. This justification does not carry one very far, however, because the fact that Congress is empowered to create rights in intellectual property does not mean that the First Amendment imposes no real limitations on how Congress uses this power. Second, and more importantly, the U.S. Supreme Court has held that various doctrines within copyright law function to ensure that it does not unduly interfere with the ability of persons to express themselves. Specifically, the principle that only the particular way in which an idea is "expressed" is copyrightable, not the idea itself, ensures that the citizenry will be able to discuss concepts, arguments, facts, etc. without restraint. Even more importantly, the fair use doctrine (discussed in the first module) provides a generous safe harbor to people making reasonable uses of copyrighted material for educational, critical, or scientific purposes. These considerations, in combination, have led many of the lower courts to turn aside virtually every challenge to the enforcement of copyrights on First Amendment grounds. Recently, the Court of Appeals for the District of Columbia Circuit took this logic a step further, holding that users have "no cognizable First Amendment interest in exploiting the copyrighted works of others" and that therefore copyrights "are categorically immune from challenges under the First Amendment." Eldred v. Reno.

 

These constitutional interpretations, which free Congress from judicial review when it enacts copyright-type statutes, come at a time of tremendous expansion and amplification of intellectual property rights. Many now believe that the essential balance between the ability of persons freely to communicate and the protection of creators' expression has been upset. In large part a response to the ease of reproduction and distribution using digital technologies, the strengthening of IP rights has taken two forms: first, a gradual broadening of the scope of intellectual property protection over time, e.g., the lengthening of the copyright term from life plus 50 years to life plus 70 years (read a full description here); and second, the Digital Millennium Copyright Act’s expansion of intellectual property protections described in module 2.

 

Following the very extreme version of this expansive pro-copyright view taken by the Court of Appeals for the District of Columbia Circuit in Eldred, the Supreme Court has now granted certiorari to review that decision. Recall that in that case the Court of Appeals upheld an extension of the length of copyright protection by 20 years, not only prospectively but retroactively as well. The arguments supporting such an extension as a matter of economic incentives are flimsy. See the Economists' Brief filed in the Supreme Court case. Given contemporary First Amendment law, as well as an expansion of the Court's willingness to hold Congress strictly to its enumerated powers under the Constitution, upholding such a law requires courts more or less to treat copyright legislation as unreviewable, which is, in fact, what the Court of Appeals did. The Supreme Court stage of the case, Eldred v. Ashcroft, promises to become the most important decision that the Court has ever issued on the constitutional bounds of congressional power to enact copyright legislation. If it upholds the lower court, it will effectively lock in the position that congressional statutes are unreviewable if their subject matter is copyright. If it overturns the lower court, it will require many of the courts of appeals to adjust their increasingly dismissive treatment of First Amendment challenges to copyright-style legislation. The materials submitted to the Supreme Court in this case can be found here. To grasp the First Amendment implications, you should read the main brief filed by the petitioner and the brief of five constitutional law professors.

 

 

Eldred will be the most immediate and most important decision of principle in this area. Of great practical importance, however, is also a slew of cases involving the DMCA’s provisions. Most extreme is the case of Dmitry Sklyarov. Sklyarov is a Russian programmer who faced the prospect of an American jail because he wrote software that lets people read books that they are not allowed to read. Sklyarov's program allows readers of ebooks encoded in the Adobe eBook format to read the book in other readers as well, including those that ignore limitations--like a prohibition on printing the book, quoting from it, or lending it--that the eBook reader is designed to enforce. For this, Sklyarov was prosecuted criminally. Since the initial prosecution, Sklyarov has been permitted to return home, and his employer, Elcomsoft, has been prosecuted instead. Despite the particular concern that the Supreme Court has in the past shown for criminal, rather than civil, enforcement of laws that burden speech, the District Court handily rejected Elcomsoft's challenge to the constitutionality of the criminal provisions of the DMCA. As the court put it:

The DMCA does not "eliminate" fair use. Although certain fair uses may become more difficult, no fair use has been prohibited. Lawful possessors of copyrighted works may continue to engage in each and every fair use authorized by law. It may, however, have become more difficult for such uses to occur with regard to technologically protected digital works, but the fair uses themselves have not been eliminated or prohibited. For example, nothing in the DMCA prevents anyone from quoting from a work or comparing texts for the purpose of study or criticism. It may be that from a technological perspective, the fair user may find it more difficult to do so—quoting may have to occur the old fashioned way, by hand or by re-typing, rather than by "cutting and pasting" from existing digital media. Nevertheless, the fair use is still available. Defendant has cited no authority which guarantees a fair user the right to the most technologically convenient way to engage in fair use.

Another evocative case, one that never reached fruition, involves Edward Felten. Felten is a computer scientist at Princeton. He was preparing to publish a paper on the weaknesses of the music industry's proposed encryption standard, SDMI. As he prepared for publication, he received a threatening letter from the Recording Industry Association of America (RIAA), telling him that publication of the paper constituted a violation of the DMCA. In response, he asked a federal district court to declare that publication of his findings was not a violation of the DMCA. The RIAA suddenly realized that trying to silence academic publication of a criticism of the weakness of its approach to encryption was not the best litigation stance, and moved to dismiss the case.

Of the DMCA-related cases, those that seem most likely to bear fruit in the immediate future involve the DVD industry and the movie industry's attempts to prevent the distribution of a decryption program, DeCSS, that allows its users to circumvent the copy-protection system that protects DVDs. The story behind the cases is quite simple. DVDs are protected by a system called CSS. In 1999, a young Norwegian named Jon Johansen wrote and published a program, DeCSS, that can circumvent this protection system. The program was circulated on web sites around the world, and the movie industry brought three suits, one in California, one in New York, and one in Connecticut, against web sites that posted the software. The theory in the California case was a state trade secret theory. In the New York case the theory was that by posting or linking to DeCSS the websites were "trafficking" in circumvention technology, and thereby violating the DMCA.

There are three distinct First Amendment issues involved in these cases.

(1) Is software "speech" protected by the First Amendment, and if so, what consequences follow? All the courts agree that software is speech and that a prohibition on distributing software is a restriction on speech. However, they differ in the application of this insight. The California court, at the present stage of the case, relied on this finding to hold that the issuance of a preliminary injunction under California's Uniform Trade Secrets Act violated the First Amendment. While accepting that software was speech, the district court in the New York case held that it was also functional, and went on to reason that to the extent software is functional, it is subject to regulation notwithstanding the First Amendment. A fascinating study of the boundaries of the speech/function distinction used by the district court in the New York case can be found here. Reasons to be skeptical of the software-is-speech argument can be read in Robert Post's article here.

(2) Irrespective of whether software is speech, does the DMCA's prohibition on circumvention threaten fair use and the other First Amendment protections embodied in the Copyright Act to such a degree that the DMCA is unconstitutional? The main question here is not focused on the speech of computer programmers, as was the previous question, but on the ability of any user to use cultural materials in ways that are expressive and protected by the First Amendment, but nonetheless prohibited by the new shape of copyright law. This amicus brief offers an explanation of how the permissive doctrine of fair use is effectively undermined by a legal regime that prevents individuals from accessing and working with protected materials in ways that are lawful.

(3) A separate component of the injunction prohibited not only posting of DeCSS, but linking by the defendant to other sites that had DeCSS posted on them. The district court held that linking--mechanically telling people where they could find information--could constitutionally be prohibited, if the person linking did so with knowledge of what they were linking to and with the intent to distribute the prohibited code. This holding too was challenged in the Court of Appeals. The argument about linking can be read here.

The Court of Appeals affirmed the District Court's holding that the DMCA did not violate the First Amendment on any of these issues. In Universal City Studios, Inc. v. Corley the court rejected all three arguments.

(1) While acknowledging that software is "speech" protected by the First Amendment, the court adopted the district court's observation that computer code both communicates human information and achieves a practical result in the world. "Unlike a blueprint or a recipe, which cannot yield any functional result without human comprehension of its content, human decision-making, and human action, computer code can instantly cause a computer to accomplish tasks and instantly render the results of those tasks available throughout the world via the Internet. The only human action required to achieve these results can be as limited and instantaneous as a single click of a mouse. These realities of what code is and what its normal functions are require a First Amendment analysis that treats code as combining nonspeech and speech elements, i.e., functional and expressive elements." DeCSS could enable people to perform illegal copying, and its suppression, the court held, was justifiable under the First Amendment even under somewhat heightened, though not strict, scrutiny.

(2) As to the fair use argument, the court held that "We know of no authority for the proposition that fair use, as protected by the Copyright Act, much less the Constitution, guarantees copying by the optimum method or in the identical format of the original."

(3) The court treated links, like computer code, as speech combined with functionality. It held that "the Appellants ignore the reality of the functional capacity of decryption computer code and hyperlinks to facilitate instantaneous unauthorized access to copyrighted materials by anyone anywhere in the world. Under the circumstances amply shown by the record, the injunction's linking prohibition validly regulates the Appellants' opportunity instantly to enable anyone anywhere to gain unauthorized access to copyrighted movies on DVDs."

 

If you are interested in the First Amendment and copyright or intellectual property more generally, these issues are explored more fully in a number of articles.

·        Boyle, The First Amendment And Cyberspace: The Clinton Years

·        Benkler, Free as the Air to Common Use: First Amendment Constraints on Enclosure of the Public Domain

·        Netanel, Locating Copyright Within the First Amendment Skein

Back to Top | Intro | Background | Current Controversies | Discussion Topics | Additional Resources

 


Discussion Topics

1. Are you persuaded by the judicial opinions declaring unconstitutional the CDA and questioning the constitutionality of COPA? Should CHIPA suffer the same fate? What should the Court of Appeals do about COPA when it reconsiders the case on remand? Are there any ways in which government might regulate the Internet so as to shield children from pornography?

2. Some authors have suggested that the best way to respond to pornography on the Internet is through "zoning." For example, Christopher Furlow suggests the use of “restricted top-level domains,” or “rTLDs,” which would function similarly to area codes to identify particular areas of the Internet and make it easier for parents to control what type of material their children are exposed to online. See Erogenous Zoning on the Cyber-Frontier, 5 Va. J.L. & Tech. 7, 4 (Spring 2000). Do you find this proposal attractive? Practicable? Effective? In line with this thinking, lawmakers in the US have recently begun deliberating whether to create a kids subdivision within the .us country-code top-level domain (.kids.us). A U.S. Congressional subcommittee hearing was held on November 1, 2001 regarding a Dot Kids Domain Name Act. The original version of the bill would force ICANN – the domain-name decision-making body – to approve a "kid-friendly domain" before any other domains could be introduced.

3. Elizabeth Marsh raises the following question: Suppose that the Ku Klux Klan sent unsolicited email messages to large numbers of African-Americans and Jews. Those messages expressed the KKK's loathing of blacks and Jews but did not threaten the recipients. Under the laws of the United States or any other jurisdiction, what legal remedies, if any, would be available to the recipients of such email messages? Should the First Amendment be construed to shield "hate spam" of this sort? More broadly, should "hate spam" be tolerated or suppressed? For Marsh's views on the matter, see "Purveyors of Hate on the Internet: Are We Ready for Hate Spam?" 17 Ga. St. U. L. Rev. 379 (Winter 2000).

4. Were the Jake Baker and Nuremberg Files cases decided correctly? How would you draw the line between "threats" subject to criminal punishment and "speech" protected by the First Amendment?

5. The Yahoo! case received a great deal of attention and notoriety in part because allegiance to the First Amendment is as central to the American conception of free speech as the moral commitment to “personal dignity” is to the French hate speech statute. Can you think of a way to reconcile these seemingly incompatible conceptions of the limits on public discourse? Is this just another problem of Internet jurisdiction, or does it have more to do with the ability of states to define the limits of their own public discourse?

6. Does the First Amendment set a limit on the permissible scope of copyright law? If so, how would you define that limit? What should the Supreme Court do in the Eldred case?

7. Lyrissa Lidsky points out that the ways in which the Supreme Court has deployed the First Amendment to limit the application of the tort of defamation are founded on the assumption that most defamation suits will be brought against relatively powerful institutions (e.g., newspapers, television stations). The Internet, by enabling relatively poor and powerless persons to broadcast to the world their opinions of powerful institutions (e.g., their employers, or companies by which they feel wronged), increases the likelihood that, in the future, defamation suits will most often be brought by formidable plaintiffs against weak individual defendants. If we believe that "[t]he Internet is . . . a powerful tool for equalizing imbalances of power by giving voice to the disenfranchised and by allowing more democratic participation in public discourse," we should be worried by this development. Lidsky suggests that it may be necessary, in this altered climate, to reconsider the shape of the constitutional limitations on defamation. Do you agree? If so, how would you reformulate the relevant limitations?

8. A core speech-enhancing aspect of online speech is its anonymous character. It is not only that there are greater and cheaper avenues for speech and expression online, but that expression is promoted by the ability to speak without revealing one’s identity. Concerns about the disclosure of personal information have focused on access to transactional information, communications, and purchases. Perhaps less acknowledged is the connection between free speech and anonymity. In light of the recently heightened concerns over terrorism, and given the regulatory attempts described above, how should we evaluate attempts to regulate or ban online anonymity? What might be the cost of attempting to identify users online? For a defense of a speaker’s right to remain unknown, see Wallace, Nameless in Cyberspace: Anonymity on the Internet.

 

Back to Top | Intro | Background | Current Controversies | Discussion Topics | Additional Resources

 


Additional Resources

Memorandum Opinion, Mainstream Loudoun v. Loudoun County Library, U.S. District Court, Eastern District of Virginia, Case No. 97-2049-A. (November 23, 1998)

Mainstream Loudoun v. Loudoun County Library, (Tech Law Journal Summary)

Lawrence Lessig, Tyranny of the Infrastructure, Wired 5.07 (July 1997)

Board of Education v. Pico

ACLU Report, "Fahrenheit 451.2: Is Cyberspace Burning?"

Reno v. ACLU

ACLU offers various materials relating to the Reno v. ACLU case.

Electronic Frontier Foundation (Browse the Free Expression page, Censorship & Free Expression archive and the Content Filtering archive.)

The Electronic Privacy Information Center (EPIC) offers links to various aspects of CDA litigation and discussion.

Platform for Internet Content Selection (PICS) (Skim the "PICS and Intellectual Freedom FAQ". Browse "What Governments, Media and Individuals are Saying about PICS (pro and con)".)

Jason Schlosberg, Judgment on "Nuremberg": An Analysis of Free Speech and Anti-Abortion Threats Made on the Internet, 7 B.U. J. SCI. & TECH. L. (Winter 2001)

CyberAngels.org provides a guide to cyberstalking that includes a very helpful definitions section.

Cyberstalking: A New Challenge for Law Enforcement and Industry – A Report from the Attorney General to the Vice President (August 1999) provides very helpful definitions and explanations related to cyberstalking, including 1st Amendment implications; also provides links to additional resources.

National Center for Victims of Crime

The Anti-Defamation League web site offers a wealth of resources for dealing with hate online, including guides for parents and filtering software. The filtering software, called Hate Filter, is designed to give parents the ability to make decisions regarding what their children are exposed to online. The ADL believes that “Censorship is not the answer to hate on the Internet. ADL supports the free speech guarantees embodied in the First Amendment of the United States Constitution, believing that the best way to combat hateful speech is with more speech.”

Laura Lorek, "Sue the bastards!" ZDNet, 3/12/2001.

"At Risk Online: Your Good Name." ZDNet April 2001.

Jennifer K. Swartz, "Beyond the Schoolhouse Gates: Do Students Shed Their Constitutional Rights When Communicating to a Cyber-Audience," 48 Drake L. Rev. 587 (2000).

Back to Top | Intro | Background | Current Controversies | Discussion Topics | Additional Resources

 

 

contact: ilaw@cyber.law.harvard.edu