Boxed In: Why US Privacy Self-Regulation Has Not Worked

Joseph M. Reagle Jr. <reagle@mit.edu>
Resident Fellow, Berkman Center for Internet & Society, Harvard Law School

It is easy to answer that efforts to date at Internet privacy self-regulation have not worked in the US. Asking the question itself, however, is not so easy. What do effective privacy protection and self-regulation mean? How do we know when they are working? In an attempt to frame this issue, the Department of Commerce issued two drafts and held a public meeting this summer with the theme of "Thinking Out of the Box." Yet little progress has been made on the most important aspects of self-regulation: enforcement and user remedy.

The crux of the problem is not a need to think out of the box, but to think about the box we are in. By this I mean the history and incentives that motivate present-day behavior:

Self-Regulation by Sustaining User Ignorance
This regulatory approach operates when market players suppress business practices (or reports thereof) that alarm a majority of the citizenry. Problems pointed out by the FTC, advocacy groups, or the media can sometimes result in improvements by the targeted company. Occasionally, a coalition of companies may find the scrutiny loathsome enough to exceed its legal obligations and uniformly reform the industry's practices. Given the number and diversity of services on the Web, however, such efforts are doubly difficult.
Enforcing Norms May Violate Antitrust Law
Antitrust law attempts to prevent a dominant company, or set of companies, from colluding to suppress competition. This includes price fixing and excluding rivals by raising barriers to entry within a market. Consider the proposal that Internet companies that are part of a self-regulatory program adopt a consistent set of privacy procedures and that those procedures be verified through external privacy audits. This could be construed as raising a barrier to entry in the e-commerce market for those companies that cannot readily afford such processes.
Being a Good Actor Increases Liability
Outside of a few specific sectors (credit reporting, video rental records, etc.), there are very few limits on what one can do with information solicited, garnered, or inferred about users. In fact, by disclosing one's privacy practices, one substantially increases one's liability without any noticeable or immediate benefit. In this context, I find it remarkable that any company has made substantive disclosures beyond the useless "We may share your information with partners and affiliates who wish to provide you with complementary products or services," and I laud those that have.
Privacy is Expensive
Being proactive about privacy costs money. Companies concerned with privacy may turn away from business practices that their less principled competitors jump at, or devote significant resources to supporting self-regulatory or technical programs. Regulatory actions (self-imposed or statutory) are not cheap. Placed in the broader context of user satisfaction, fraud reduction, and confidence in the Web, the cost of privacy is worthwhile; however, that cost is uncertain and poorly distributed.

Given the above constraints, it is not surprising that self-regulatory efforts have not advanced further. The efforts to date are simply insufficient to overcome these significant pre-existing disincentives to a self-regulatory program. For progress to be made, we need enforceable rules setting a baseline of acceptable behavior, rules that cast pro-privacy policies as an advantage rather than a liability.
