Complexity and Code: The Pitfalls of Regulation in Adaptive Systems

Marcus Maher

In discussions of Internet regulation, a few commentators have begun to address the regulatory power of technology. Because code can superficially appear to be a perfect regulator, recognizing its regulatory power creates a temptation to use code to compensate for weaknesses in law. However, any regulatory action must be considered in light of the complete set of regulatory forces -- law, norms, the market and architecture. The complex nature of these forces and their interactions gives reason to believe that regulatory techniques which initially appear ideal may ultimately yield outcomes inconsistent with the regulation's purpose. Outcomes may miss the mark both because a regulation may not bring about the intended results and because it may produce negative consequences worse than the problem that initially justified it. Thus, any intelligent consideration of the appropriate regulatory means to achieve a given policy objective must be undertaken with sensitivity to the complex nature of the regulatory system. This leads to several conclusions about changes that may make regulation in a complex system more successful, including both alterations in traits of several of the regulatory forces and conceptual changes in how regulatory decisions are addressed.

Part I will provide an introduction to the theory of regulation as it occurs through various mechanisms, including law, norms, the market and architecture. It will also provide a very basic discussion of complexity theory, including hallmarks of complex systems and the consequences of activity in such a system. Part II will apply the framework of complexity theory to the four regulatory forces discussed in Part I to determine the extent to which these forces may be subject to analysis as complex systems. In Part III the consequences of interactions among the regulatory forces are considered, as applied to attempts to regulate both directly and indirectly. Part IV is a brief discussion of a few specific examples of code regulation to highlight a few of the non-obvious consequences that impact the success of the regulation. Part V is a comparison of the relative problems caused by the complex nature of the regulatory forces. Finally, Part VI lists several suggestions for regulators or governing bodies faced with the problem of regulating in a complex system.

I. Background

It is initially necessary to have a basic understanding both of the regulatory system which will be the subject of this analysis and of complexity theory itself.

A. A Model of Internet Regulation

Any consideration of regulation almost certainly begins with the most obvious occurrence of regulation - laws. However, more recent discussions have identified other ways in which an individual or entity can be regulated. Social norms constrain people's behavior, and should be considered in any general discussion of regulation. Economic markets regulate as well -- they impose some constraints on what you can do and what things you can own. Finally, there is architecture. The architectures people create, as well as those provided by nature, constrain our potential range of activities, and thus constitute another category of regulatory force.

Discussions of Internet regulation have begun to involve the regulatory power of code. These discussions arise in part because of the recent recognition of code as a regulator, and in part because of the perception that code is particularly susceptible to use for regulation. Finally, in the technical context, many technologies with potential regulatory uses are currently open for revision. The Internet protocol system -- the technology that controls much of how information is transported on the Internet -- has been undergoing a process of revision. The Domain Name System is expected to undergo changes in the top-level domains that are available and in the entities that will be involved in their assignment. Technological solutions to problems of privacy and content rating have been under development by one prominent Internet standards organization. The growth of electronic commerce will undoubtedly give rise to new transaction-facilitating technologies as well. These are only a few of the more prominent areas where code is being developed. The list of working groups of the Internet Engineering Task Force (IETF), a major Internet standards development body, shows more than 100 topics currently under consideration for standards development. The status of these technologies, coupled with the perception of code as an ideal regulator, leads some to the conclusion that code may be a perfect regulator.

Because fundamental Internet technologies have the power to regulate, many commentators have suggested including specific regulatory tools in the code of the Internet, or have identified the need to consider the regulatory consequences of code as it is developed. For example, the U.S. government has attempted to incorporate assurances of law enforcement access into cryptographic technology. The Communications Decency Act is another example: distribution of indecent material was permitted if the distributor utilized adequate screening technology.

Lawrence Lessig has been at the forefront of the discussion of regulation through code. Specifically, he argues that this indirect form of regulation is about to become much more prevalent as a result of Cyberspace and the relative malleability of code in that realm. Unlike norms or law, it is argued, code is a perfectly effective constraint. This has several consequences: (1) we need to think about code in political terms, because architectures in cyberspace have normative significance; (2) if code is political, then it is not the task of engineers alone; and (3) by leaving Constitutional questions to judges, we may have lost the ability to address these questions meaningfully ourselves. Although Lessig raises a number of considerations important to regulation through code, it will be seen that the complex nature of the regulatory system leads to still more.

B. Introduction to Complexity Theory

Complexity theory, as a subject matter of its own, has come into being only recently. It arose initially in a variety of subject areas of science and economics as researchers attempted to explain the behaviors in newly-discovered phenomena (genetics) and to compensate for inconsistencies between traditional theories and empirical observations (economics). Leading minds in numerous fields came together to share their respective insights, to collaborate and to determine a common set of characteristics describing what would come to be called a "complex system."

"In the mathematical framework of complex systems. . . complexity is at first defined as nonlinearity, which is a necessary but not sufficient condition of chaos and self-organization." Further, the structure of complex systems is important. "Complexity theory in computer science provides a hierarchy of complexity degrees, depending on, for instance, the computational time of computer programs or algorithms. As nonlinear complex systems are sometimes modeled by computer graphics, their degree of algorithmic complexity mat be described as their capacity for self-organization."

At most, the theory can explain why there are variations in the system, or what typical patterns may emerge on large scales, but not what the particular outcome of a particular system will be. Thus, a general theory of complex systems must necessarily be abstract. The interaction between theories and experiments takes place in the area of complexity studies by comparing the statistical features of general patterns. Finally, complexity theory is not intended to take over a field of study -- it is a complement to traditional theories (or linear models).

Before the analysis relevant to complex systems can be applied, it is necessary to define what constitutes a complex system. These factors include systems of agents, interactions among these agents and the resulting consequences of these interactions.

1. System of agents

The "building blocks" that make up a complex system can be called "agents." These agents have certain internal characteristics: (1) a performance system, (2) a credit-assignment mechanism, (3) a rule-discovery mechanism, and (4) a mechanism for making predictions.

a. Performance system

The agent's performance system consists of the capabilities of an agent at a given point in time. These capabilities include the ability of an agent to interact with its environment: specifically, to receive stimuli from the environment, process that information and respond. The ability of an agent to receive stimuli and affect its environment means different things in different circumstances. For human agents, the detectors may be senses and the response mechanism may be communication. For a business, the detectors and effectors could be considered the responsibilities of different departments. The system for processing information can be considered a set of if/then rules. For example, the processing mechanism could be: IF there is stimulus A, THEN engage in response B.
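To make the stimulus-response picture concrete, the following is a minimal sketch, in Python, of a performance system as a set of if/then rules. The particular stimuli and responses are hypothetical illustrations, not examples drawn from the sources discussed here.

    # A minimal sketch of a performance system: detectors receive a
    # stimulus, if/then rules process it, and an effector returns a
    # response. The stimuli and responses are hypothetical.

    rules = [
        {"condition": "predator_nearby", "action": "flee"},
        {"condition": "food_detected", "action": "approach"},
        {"condition": "mate_signal", "action": "court"},
    ]

    def respond(stimulus):
        """Match the stimulus against the rule set; return the response."""
        for rule in rules:
            if rule["condition"] == stimulus:
                return rule["action"]
        return "ignore"  # default when no rule matches

    print(respond("food_detected"))  # -> approach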

b. Credit assignment mechanism

To be successful, an agent's processing rules must serve it well in its environment. One way that agents achieve this is through a "credit-assignment" process that distinguishes successful rules from unsuccessful ones. The credit-assignment process will depend on the agent's current status in its environment and the reserves of its required resources (food, water, etc.). Credit assignment is done most easily for rules that receive direct feedback from the environment; for example, rules that allow the agent to obtain a needed resource or rules that lead to some harm. A more difficult credit-assignment process reinforces stage-setting rules that make possible the functioning of later rules, for which direct rewards are received. Through the process of competition between rules, the rules that lead to successful outcomes are reinforced, and unsuccessful rules are weakened or eliminated.
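One way to picture credit assignment is to attach a numeric "strength" to each rule and adjust it as feedback arrives from the environment. The sketch below assumes a simple proportional update and illustrative payoff values; actual schemes, such as Holland's bucket brigade, are more elaborate.

    # A sketch of credit assignment: each rule carries a strength that
    # is pulled toward the payoff its action produces, so rewarded rules
    # strengthen and harmful ones weaken. Payoffs are illustrative.

    rules = {
        "approach_food": 1.0,
        "flee_predator": 1.0,
        "poke_hornet_nest": 1.0,
    }

    def assign_credit(rule_name, payoff, rate=0.2):
        """Move the rule's strength toward the observed payoff."""
        rules[rule_name] += rate * (payoff - rules[rule_name])

    for _ in range(10):                                 # simulated feedback
        assign_credit("approach_food", payoff=2.0)      # direct reward
        assign_credit("poke_hornet_nest", payoff=-1.0)  # direct harm

    # strong rules win the competition; weak ones fade
    print(sorted(rules.items(), key=lambda kv: -kv[1]))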

c. Rule discovery mechanism

The second means by which an agent discovers processing rules that allow for its success in its environment is the creation of new rules based on permutations of rules previously found to be successful. Most of the time the rules compete in the credit-assignment "marketplace"; occasionally, however, the strongest rules are used to create new rules. There are two processes by which this can occur - combination or mutation. The first method combines elements of two successful rules to form a new rule, which takes the place of an unsuccessful rule. Combination allows good performers to show improvement. Mutation consists of the alteration of a successful rule into a rule that is slightly different. An important part of rule discovery is that adaptive agents keep the good rules and get rid of the bad rules.
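Both processes can be sketched on rules encoded as fixed-length strings, in the manner of Holland's classifier systems; the encoding (with "#" as a wild-card symbol) and the mutation rate below are assumptions for illustration.

    import random

    # Rule discovery by combination (crossover) and mutation, sketched
    # on rules encoded as strings over the alphabet 1/0/# (an assumed,
    # classifier-system-style encoding).

    def combine(parent_a, parent_b):
        """Splice the front of one strong rule onto the back of another."""
        cut = random.randrange(1, len(parent_a))
        return parent_a[:cut] + parent_b[cut:]

    def mutate(rule, rate=0.1):
        """Occasionally alter individual symbols of a successful rule."""
        return "".join(
            random.choice("10#") if random.random() < rate else symbol
            for symbol in rule
        )

    strong_a, strong_b = "11#0#1", "0#1#10"
    new_rule = mutate(combine(strong_a, strong_b))
    print(new_rule)  # a novel rule built from proven building blocks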

d. Prediction Mechanism

The use of building blocks to generate internal models is a pervasive feature of complex systems. Agents with predictive capabilities are able to describe new situations in terms of familiar components through the use of modeling. Models or "standard operating procedures" can serve as prediction mechanisms for the agents. The basic maneuver for construction of models is to eliminate details so that familiar patterns are emphasized. Because the models of interest here are interior to the agent, the agent must select patterns from the input it receives and then must convert those patterns into changes in its internal structure. As a practical matter, models must be created based on only a limited sample of the agent's environment - an environment that is constantly changing. However, for the model to be useful there must be some kind of repetition in the situations modeled, so that the model can be applied in the future. The solution to this problem is indicated by a consideration of the ability of humans to decompose a complicated scene into familiar parts. The parts that are used, such as "tree," "car," or "person," can be reused in a variety of combinations. "Indeed, it is evident that we parse a complex scene by searching for elements already tested for reusability by natural selection and learning."

By reusing these basic parts it is possible to have repetition despite facing novel situations. Thus, models must enable the agent to anticipate the consequences that follow when that pattern (or one like it) is encountered again. "If you have a process for discovering building blocks the combinations start working for you, rather than against you - you can describe a great many complicated things with relatively few building blocks." Physics, for example, shows how a few simple laws can produce the enormously rich behavior of the world.

2. Interactions of agents and the formation of aggregates

Complex large-scale behaviors emerge from the aggregate interactions of less complex agents. Agents will cooperate and compete to form more advanced structures as long as they are able (molecules form cells, neurons form brains, species form ecosystems, consumers and corporations form economies, etc.). The large-scale complex system is established solely because of the dynamical interactions among individual elements of the system: the critical state is self-organized. In a system, the fates of the agents and their relations with others are strongly influenced by the interactions of other agents and their environment at other places and at earlier periods of time. This is a basic characteristic of all complex systems, and the phenomena that result from these historical interactions are the most complicated aspect of complex systems.

Aggregation of agents to form larger groups is facilitated by several characteristics of the interaction of agents. The first characteristic is tagging, which allows agents to differentiate agents or objects that would otherwise be indistinguishable. Agents compare tags with other agents to determine whether resources can be exchanged, and if so, which and how many. Tag-based interactions of agents, over time, provide a basis for specialization and cooperation. This leads to the emergence of meta-agents and general organizations that endure even though their internal components (agents) are continually changing. Tags, like rules, adapt in complex systems: the system selects for tags that mediate useful interactions and against tags that cause malfunctions, by the same processes used for rules.
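A tag-mediated interaction might be sketched as follows; the tags, the resources and the matching criterion are all hypothetical.

    # A sketch of tag-mediated interaction: agents exchange resources
    # only when their tags match. Tags stand in for the surface cues
    # agents actually compare; all values here are hypothetical.

    agents = [
        {"name": "A", "tag": "red", "offers": "sugar"},
        {"name": "B", "tag": "red", "offers": "enzyme"},
        {"name": "C", "tag": "blue", "offers": "enzyme"},
    ]

    def can_interact(x, y):
        return x["tag"] == y["tag"]

    for i, x in enumerate(agents):
        for y in agents[i + 1:]:
            if can_interact(x, y):
                print(f"{x['name']} trades with {y['name']}")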

The second characteristic of agent interaction is flows over a network of agents. In general terms, the agents are "nodes" in a network and the possible interactions are "connectors." In complex adaptive systems the flows through these networks vary over time, as do the particular nodes and connectors that are part of the network. Thus, neither the flows nor the networks are fixed in time. They are patterns that reflect changing adaptations as time elapses and experience accumulates. Tags are important to the nature of flows in a network because they often define the critical interactions and the major connections.

Aggregates can themselves act as agents at a higher level. If a cluster is coherent enough and stable enough, it can usually serve as a building block for some larger cluster. The behavior of these clusters, or meta-agents, is governed by the same principles that govern the underlying agents that aggregated in the first instance. This process of aggregation and re-aggregation often repeats numerous times, yielding the hierarchical organization typical of complex systems.

3. Interactions of agents yield substantial changes

The essence of life is in the organization and not the molecules. The order that emerges in complex systems depends upon robust and typical properties of the systems, not on the details of structure and function. Thus, to understand the importance of agents in a complex system, it is important to consider the factors governing the agents' interactions.

a. Characteristics of flows

The previous section noted the importance of flows to the interaction of agents. There are two properties of flows that can cause small changes to have substantial results, creating difficulties for long-term predictions. The first of these is the multiplier effect. If additional resources are injected at some node (agent) of the system, this resource will typically be passed from node to node, producing a chain of changes. The multiplier effect is a major feature of networks and flows, arising regardless of the particular nature of the resource. The effect is relevant to the estimation of the effect of the introduction of a new resource into the system, or the effect of a diversion of some resource over a new path. It jeopardizes long-range predictions based on simple trends. The second property of flows is the recycling effect: a system that recycles a resource as it flows among nodes will, with the same raw input, produce more of the resource at each node.
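The multiplier effect can be illustrated with a toy flow network: one unit of a resource injected at a single node generates, as it is passed from node to node, total activity several times the size of the injection. The ring topology, the pass-through fraction and the leakage are all assumptions of the sketch.

    # A toy multiplier effect: a 1.0 injection at node 0 is passed
    # around a ring; each node passes on 80% of what it receives and
    # 20% leaks out of the system. Total flow converges to 4.0 --
    # four times the original injection.

    n_nodes, pass_through = 5, 0.8
    holdings = [0.0] * n_nodes
    holdings[0] = 1.0          # the injected resource
    total_activity = 0.0

    for _ in range(60):
        new = [0.0] * n_nodes
        for i, amount in enumerate(holdings):
            spent = amount * pass_through   # passed to the next node
            total_activity += spent         # each hand-off is activity
            new[(i + 1) % n_nodes] += spent
        holdings = new

    print(f"total flow from a 1.0 injection: {total_activity:.2f}")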

b. Diversity of complex systems

The diversity of complex systems is important as well. All agents fill niches that are defined by their environment and the interactions centering around them. If an agent is removed from the system, a hole is created. "[T]he system typically responds with a cascade of adaptations resulting in a new agent that 'fills the hole.' The new agent typically occupies the same niche as the deleted agent and provides most of the missing interactions." Diversity results from the spread of agents as well. These new agents open new niches and provide opportunities for new interactions that can be exploited by modifications of other agents.

The diversity of complex systems is dynamic; that is, its persistence and coherence are like those of a standing wave. If the wave is disturbed, say, by a rock or a stick, the wave quickly repairs itself once the disturbance is removed. Similarly in complex adaptive systems, a pattern of interactions disturbed by the addition or extinction of agents often repairs itself, although the new agents need not be identical to the old ones. Unlike standing waves, however, the "pattern" of a complex system evolves. Each new adaptation of the system opens the possibility for further interaction and new niches. The diversity of a complex system is in part the result of component agents' adaptation to the addition or extinction of agents, and of the evolution of the pattern of the system itself.

The elements of a complex system combine and interact in such a way as to make the aggregate much more than merely the sum of the parts. The nature of the interaction and adaptation of the diverse component agents means that the aggregate can have capabilities that would be difficult to attain in a single agent. Such complex capabilities are more easily approached step by step, using a distributed system. This is due to the co-evolutionary process of agents within an aggregate, coupled with the fact that their current state is highly dependent upon interactions at other places and historical states of the system. It should be evident that complex systems will not settle to a few highly adapted types that exploit all opportunities. Perpetual novelty is the hallmark of complex adaptive systems.

Adaptive agents co-evolve in an almost limitless space of possibilities; thus there is no practical way of "optimizing" their fitness. The most they can do is change and improve themselves relative to what other agents are doing. This is another reason why complex adaptive systems are characterized by perpetual novelty. Because agents are always changing and co-evolving, it is not possible to assign a single, fixed number designating an agent's fitness. The kinds of patterns that agents can actually perceive and work on are very limited compared with what would be optimal.

4. Only short-term predictions

From the prior discussions of interactions among agents it is clear that the nature of complex systems allows only for short-term predictions - not unlike predicting the weather. Any piece of code that is complex enough to be interesting will always surprise its programmers. First, the complication of interactions and the diversity of agents make simply tracing the impact of any change difficult. If the impact is difficult to trace, let alone foresee in advance, the system will be extremely difficult to control. Second, small changes in complex systems yield big results. In nonlinear systems, uncertainty in one's knowledge of the system's initial conditions can often grow substantially. Further, chance events can lock you in to any of a number of possible outcomes by being magnified through the feedback present in system flows. The substantial potential impact of small changes makes initial predictions eventually seem like nonsense. Nonlinear interactions among agents often make the behavior of the aggregate more complicated than would be predicted by summing or averaging.
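The growth of uncertainty in a nonlinear system can be seen in the logistic map, a standard textbook example (not one drawn from the regulatory context): two trajectories that begin one part in a billion apart become completely uncorrelated within a few dozen steps.

    # Sensitivity to initial conditions in the logistic map. The map
    # and the parameter r = 4.0 are a standard chaotic example.

    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.4, 0.4 + 1e-9     # initial uncertainty of one billionth
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: separation = {abs(a - b):.3e}")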

The theory of computation teaches that such a system might be behaving in a way that is its own shortest description. The shortest way to predict what this real physical system will do is simply to watch it. In a complex world, the pretense of long-term prediction must be rejected. The true consequences of our own best actions cannot be known. All that can be done is to be locally wise, not globally wise.

II. Application of Complexity Theory

The four regulatory forces discussed in Part I will next be considered in light of the discussion of complexity theory to determine whether, and to what extent, these systems may fall within the purview of complexity theory.

A. Market

To describe the dynamics of an economy, it is necessary to have evolution equations for many economic quantities from perhaps thousands of sectors and millions of agents. Since everything depends on everything else, the equations will be coupled and nonlinear, in order to model economic complexity. In particular, the economic behavior of modern high-tech industries and the effects of technological innovations seem to be better modeled by the nonlinear dynamics of complex systems. The crucial point of the complex system approach is that from a macroscopic point of view the development of political, social, or cultural order is not only the sum of single intentions, but the collective result of nonlinear interactions.

The subject area of economic markets has been addressed particularly thoroughly by complexity researchers. Thus, it will be possible to discuss the application of complexity theory to market regulation at a general level.

1. Agents

Agents in the economy include consumers, producers, governments, businesses and economists, among others. Agents are faced with a limited number of options that they exploit in an attempt to increase their "utility function," just as biological species improve their fitness by reproducing or mutating. "[E]conomic agents form expectations -- they build up models of the economy and act on the basis of predictions generated by these models. These anticipative models need neither be explicit, nor coherent, nor even mutually consistent." This activity by agents affects the environment in which they and other agents operate. Then all agents adjust their behavior to the new situation. The weakest agents in the economy are weeded out and other agents arise to take their place. Agents look to the most successful agents, and adopt the strategies they use, either in whole, or by taking parts from different agents to fit a given agent's circumstances.

2. Interactions and the formation of aggregates

Economic agents are complex entities, "and the economy (or any subsystem of the economy, such as a firm or an industry or a market) is a system made up of agents with this complexity level." The agents at any given level typically serve as "building blocks," constructing agents of the next higher level of activity. "The overall organization is more than hierarchical, with many sorts of tangled interactions (associations, channels of communication) across levels."

Flows are evident in the economy as well. Under traditional analysis, in economies "goods and services flow easily from agent to agent in amounts such that no further flow or trade can be advantageous to any trading partner. A small change in the economy, such as a change in the interest rate, causes small flows that adjust the imbalance."

3. Interactions yield substantial changes

"What happens in the economy is determined by the interaction of many dispersed, possibly heterogeneous, agents acting in parallel. The action of any given agent depends upon the anticipated actions of a limited number of other agents and on the aggregate state these agents cocreate." "[C]ontrols are provided by mechanisms of competition and coordination among agents. Economic actions are mediated by legal institutions, assigned roles, and shifting associations. Nor is there a universal competitor -- a single agent that can exploit all opportunities in the economy."

a. Flows

A well-known feature of technological learning curves is that there is rapid initial improvement which then slows exponentially. This implies that after a major innovation there can be an early period of increasing returns. Markets for high technology are clearly governed by increasing returns -- the cost to the company of the first copy of Windows is $50 million, while the second copy costs $10; similarly, the first B2 bomber costs $21 billion, the second $500 million. A given initial investment in the technology increases productivity greatly as a result of the increasing returns. "New goods and services create niches that call forth the innovations of further new goods and services. Each may unleash growth both because of increasing returns in the early phase of improvement on learning curves or in open markets." Later, as improvement slows exponentially, further investment is governed by the traditional economic principle of diminishing returns.

Further, these markets have a "winner-take-all" quality. Network feedback and path dependence occur when a number of not uncommon conditions are met: first, that multiple outcomes or equilibria are possible initially; second, that there are substantial incentives for each economic agent to conform its actions to be compatible with the actions of others; third, that to be successful an agent must anticipate the behavior of others, even as those agents themselves estimate what others will do; and finally, that there is a steep learning curve and there are high initial fixed costs, which give a sizable advantage to any technology that gets a head start. These factors tend to lead to a single dominant technology, with all the benefits accruing to the developer of that standard.
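The lock-in dynamic can be sketched with a positive-feedback adoption model in the spirit of W. Brian Arthur's urn schemes; the adoption rule and the exponent modeling increasing returns are assumptions.

    import random

    # A sketch of winner-take-all lock-in: each new adopter chooses
    # technology A with probability proportional to share_A ** k, so a
    # larger installed base begets further adoption. The exponent k > 1
    # (increasing returns) is an assumed parameter.

    def run_market(steps=5_000, k=2.0):
        a, b = 1, 1                      # both technologies start viable
        for _ in range(steps):
            share_a = a / (a + b)
            weight_a = share_a ** k
            weight_b = (1.0 - share_a) ** k
            if random.random() < weight_a / (weight_a + weight_b):
                a += 1
            else:
                b += 1
        return a / (a + b)

    # chance events early on decide which technology takes the market:
    print([round(run_market(), 3) for _ in range(5)])  # values near 0 or 1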

b. Diversity

The market has constantly increasing diversity through the creation, and filling, of niches. As economic activities of agents adapt, the economic landscape itself deforms. Thus, as niches are filled, new niches may be created, resulting in perpetual novelty. As a result of the interaction of agents and the creation and filling of niches the market becomes an "economic web." Many of the goods and services in the economy are "intermediate," in that they are themselves used in the creation of still other goods and services ultimately utilized by final consumers. "If the economy is a web, as it surely is, does the structure of that web itself determine and drive how the web transforms? If so, then we should seek a theory of the self-transformation of an economic web over time creating an ever-changing web of production technologies." "The economic web is precisely defined by just these production and consumption complements and substitutes."

4. Only short-term predictions

In the last section there was a discussion of the creation and filling of niches in the economy. Because this process is constantly occurring, improvements to a given system are always possible as the economic landscape co-evolves with the economic agents themselves. This means that the economy operates far from any optimum equilibrium.

[T]he large fluctuations observed in economics indicate an economy operating at the self-organized critical state, in which minor shocks can lead to avalanches of all sizes, just like earthquakes. The fluctuations are unavoidable. There is no way that one can stabilize the economy and get rid of the fluctuations through regulations of interest rates or other measures. Eventually something different and quite unexpected will upset any carefully architectured balance, and there will be a major avalanche somewhere else in the system.
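Bak's image can be made concrete with his sandpile model of self-organized criticality, sketched minimally below: grains added one at a time usually do nothing, but occasionally trigger chain-reaction avalanches of widely varying sizes.

    import random

    # A minimal sandpile: drop grains at random sites; any site holding
    # four or more grains topples, sending one grain to each neighbor
    # (grains fall off the edges). Avalanche sizes vary enormously.

    SIZE, THRESHOLD = 11, 4
    grid = [[0] * SIZE for _ in range(SIZE)]

    def drop_grain():
        r, c = random.randrange(SIZE), random.randrange(SIZE)
        grid[r][c] += 1
        topples, unstable = 0, [(r, c)]
        while unstable:
            i, j = unstable.pop()
            while grid[i][j] >= THRESHOLD:
                grid[i][j] -= THRESHOLD
                topples += 1
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= ni < SIZE and 0 <= nj < SIZE:
                        grid[ni][nj] += 1
                        unstable.append((ni, nj))
        return topples

    sizes = [drop_grain() for _ in range(20_000)]
    print(f"most drops topple nothing; the largest avalanche "
          f"toppled {max(sizes)} sites")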

A small change in initial conditions can be worked out only by predicting the complex reactions of the other agents in the system. However, there is no way to do this except by a complete simulation of these other complex systems. There is also no shortcut to simulating the behavior of the complex agents. Indeed, any attempt to predict behavior in such a system would force the economist to face the problem of trying to calculate the value of an uncomputable function. The function is uncomputable not because of any deficiency of current computer or information system technologies. Rather, it relates to the self-referential character of the problem, where all the elements change in response to changes elsewhere in the system. This is a problem that raw computing power cannot overcome.

A consequence is that if small chance events can lock you in to any of several possible outcomes, then the outcome selected may not be the best in an ultimate sense. Thus, the maximum individual freedom - and the free market - may not produce the best of all possible worlds. Per Bak's discussion of research that was done on traffic jams may provide some useful insight into economics. "Maybe Greenspan and Marx are wrong. The most robust state for an economy could be the decentralized self-organized critical state of capitalistic economics, with fluctuations of all sizes and durations. The fluctuations of prices and economic activity may be a nuisance (in particular if it hits you), but that is the best we can do."

B. Architecture

It is difficult to provide a single, uniform discussion of the complexity of architecture. Thus, three specific examples - two in nature and one in code - will be discussed to illuminate the area, with more in-depth consideration of these and other areas left to the reader.

1. Genetics

Biological adaptation can occur through changes in an organism's genetic makeup. The characteristics of every organism are determined by the genes in that organism's chromosomes. These genes have several forms, known as alleles, which are responsible for the different sets of characteristics that are associated with a given gene. For example, certain types of garden peas have a single gene that determines the color of their blossom - one allele causes a white blossom, the other a pink blossom. Because of the large number of genes in the chromosomes of a typical vertebrate (tens of thousands), each of which has several alleles, even a population of 10 billion would contain only a tiny fraction of the attainable sets of chromosomes possible from allele combinations. These are the agents in a genetic environment. The alleles can be considered the basic agents, and the genes, chromosomes and, ultimately, observed characteristics are meta-agents created through aggregation.

The large number of possible genetic structures attainable through allele and gene combinations is an indicator of the complexity of a biological system. However, the true complexity of these systems comes from the interactions. Different alleles of the same gene produce related proteins, which themselves produce variations in the characteristics associated with that gene. Commonly the proteins these alleles produce (or combinations of them) are catalysts, called enzymes, that can cause reaction rates to increase by factors of 10,000 or more. As a consequence of their production of catalysts, genes exercise extensive control over the reactions in a cell. The enzyme flows control ongoing reactions so strongly that they are the major determinants of the cell's form. This leads to a kind of feedback effect - the products of the enzyme-controlled reaction themselves enter into further reactions, leading to widespread changes in cell form and function from the activities of a single enzyme.

In many cases sequences of reactions require the prior existence of several enzymes for them to proceed. The subtraction of a single enzyme could stop a reaction completely. Complicated reactions involving both positive and negative feedback are common, particularly when the product of the reaction is a "stage-setting" catalyst or inhibitor for an intermediate state in the reaction. Because the effect of an individual allele depends upon what other alleles are already present, the stage-setting catalysts must be encouraged despite the fact that their only role is intermediate. Small changes, the removal of a single enzyme, for example, can produce large effects down the line. Thus, the organism's observed characteristics - its phenotypes - depend strongly on these complex interactions, and small changes early in the process can lead to vastly different phenotypes.

Although the problem of attaining fit characteristics would seem to be one of choosing the appropriate set of alleles, problems in coordination among alleles make the situation more complicated. What may be a good allele in combination with certain sets of alleles for other genes can be disastrous in a different genetic context. As a result, adaptation is not a simple matter of independently selecting particular alleles for a given gene. Because a fit characteristic results from a co-evolved set of alleles, there must be a consideration of what alleles will appear along with a given allele.

As a result of the complex interactions and adaptations of individual alleles, adaptation in genetic makeup involves the search for co-adapted sets of alleles. The co-adapted sets together augment the performance of the corresponding phenotype. The co-adaptation depends strongly upon the environment of the phenotype, which plays a role in determining which sets produce successful phenotypes. For example, the set of alleles which produces gills in fish is useful only in aquatic environments. The dependence of co-adaptation upon the environment gives rise to environmental niches, meaning sets of features of the environment which can be exploited by an appropriate organization of the phenotype. Quite distinct co-adapted sets can exploit the same environmental niche. For example, the eye of aquatic mammals and the eye of the octopus perform the same function in exploiting the same environmental niche, but are caused by sets of alleles of entirely unrelated sets of genes. Thus, through the co-adaptation of agents and their environment, it is reasonable to expect the substantial diversity in gene-allele combinations that is observed in nature.

Fitness, viewed as a measure of a genotype's influence upon the future state of the organism, introduces a concept useful through the whole spectrum of adaptation. The fitness of a phenotype is defined as the number of its offspring which survive to reproduce. Each phenotype exists within a population of similar phenotypes that is constantly changing due to reproduction and death. The fitness of an individual phenotype is thus clearly related to its influence upon the future development of the population in which it exists. A phenotype that has a high fitness will have its alleles substantially represented in the next generation of the population. A population can consequently be considered a reservoir of co-adapted sets, preserving the history of past advances, including the environmental niches encountered.

With fitness related to the ability to pass on alleles to future generations, it is important to consider how this might occur. With simple copying, successful phenotypes are passed on, but there is no capability to improve on their characteristics. With mutation alone, on the other hand, there is no way to ensure that the fittest phenotypes will survive, or that the population will have any likelihood of success in the future. Thus, the actual process includes two steps. First, individuals have a rate of reproduction proportional to their performance, ensuring that the most successful individuals are represented most heavily. Second, genetic operators are applied, interchanging and modifying sets of alleles in the chromosomes of different individuals.
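This two-step process is, in essence, a genetic algorithm, and can be sketched in miniature; the bit-string "chromosomes" and the toy fitness function (a count of favorable alleles) are illustrative assumptions.

    import random

    # A miniature genetic algorithm: (1) parents reproduce in proportion
    # to performance; (2) crossover interchanges sets of alleles and
    # mutation occasionally alters individual alleles. The bit-string
    # encoding and the fitness function are toy assumptions.

    def fitness(chrom):
        return sum(chrom) + 1    # +1 keeps every individual reproducible

    def next_generation(pop, mutation_rate=0.01):
        weights = [fitness(c) for c in pop]
        children = []
        for _ in range(len(pop)):
            mom, dad = random.choices(pop, weights=weights, k=2)  # step 1
            cut = random.randrange(1, len(mom))                   # step 2:
            child = mom[:cut] + dad[cut:]                         # crossover
            child = [g ^ 1 if random.random() < mutation_rate else g
                     for g in child]                              # mutation
            children.append(child)
        return children

    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
    for _ in range(40):
        pop = next_generation(pop)
    print("mean fitness:", sum(map(fitness, pop)) / len(pop))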

2. Ecologies

From the perspective of complexity theory, ecological systems of coexisting, interacting species have a number of features that make them interesting. First, ecological systems have substantial diversity. Estimates of the number of species range from 5 million to 30 million. Each species is itself composed of hundreds to billions of individuals. The ways in which these species compare to one another have profound implications for the organization of ecological systems. The individual organisms that make up a species are more similar to each other than to members of other species in genetic composition, physiology and behavior. The gaps between species enable the recognition of species as discrete units of organization. These unique combinations of traits provide the "tags" or "labels" of the general model of agents in complex systems -- or, more accurately in the case of species, of meta-agents.

The distinctions that allow the classification of individuals arise from classic co-evolution of agents to deal with their environment and the other agents therein. The differences among species reflect different requirements for resources, different tolerances for environmental conditions and different types of interactions with other species. A consequence of the inhabitation of a particular niche by a species is that the species undergoes a pattern of variations in the abundance of its members over time and geography. The coexisting species living together in one place (an "ecological community") are interconnected in networks that involve the flow of energy, materials and services between them.

These ecological systems are themselves adaptive, responding to change by at least three means. First, ecosystems adapt through changes in the individual organisms. The second mode of adaptation involves the movement of species between ecosystems. Both of these mechanisms involve the activities of individual organisms attempting to increase their ability to survive and reproduce. These also generally produce rapid negative feedback in ecosystems, maintaining flows of energy and materials and preventing the accumulation of unused resources. The final mechanism for adaptation is natural selection, which occurs at both the individual and species level. In contrast to the first two methods of adaptation, natural selection requires time scales of generations to millennia, because it involves the death or differential reproduction of individuals or species.

3. The Internet

Finally, within the category of architecture is the important subcategory of "code." For purposes of this paper this means, specifically, the Internet. The Internet clearly has the traits of a complex system. The agents in this system are many and varied. They include individuals and their computers. TCP/IP is, at least to some extent, the performance system for these agents. The individuals form aggregate meta-agents: customers of on-line service providers such as AOL, standards organizations like the Internet Engineering Task Force (IETF), business groups and the government, among others. The computers may be aggregated into intranets that are themselves on the Internet, into geographic units, etc.

The development of TCP/IP provides a good example of niche creation and the diversity resulting therefrom. DARPA, the sponsor of ARPANET, the forerunner to the Internet, experimented with a number of networking schemes over a period of 15 years before settling on TCP/IP. Following the transition to a uniform networking protocol, TCP/IP, vendors, researchers and newly-created DARPA working groups began playing increasingly important roles in Internet technology development. The involvement of researchers and universities in Internet development led to lobbying efforts for wider access for their colleagues, ultimately leading to the National Science Foundation's NSFNet providing Internet access for researchers. The creation of the NSFNet itself resulted in substantial growth. The increased usage by researchers and developers led to the creation of the domain name system. The research environment also fostered the creation of the World Wide Web. Each stage of technological development of the Internet led to the creation of new niches to be filled by new technologies and organizations, which in turn created more niches, and so on.

The current view of the Internet is much different than that anticipated by those involved in its creation.

The ARPANET was not intended as a message system. In the minds of its inventors, the network was intended for resource-sharing, period. That very little of its capacity was actually ever used for resource-sharing was a fact soon submersed in the tide of electronic mail. Between 1972 and the early 1980s, e-mail, or network mail as it was referred to, was discovered by thousands of users. The decade gave rise to many of the enduring features of modern digital culture: flames, emoticons, the @ sign, debates on free speech and privacy, and a sleepless search for technical improvements and agreements about the technical underpinnings of it all. At first, e-mail was difficult to use, but by the end of the 1970s the big problems had been licked. The big rise in message traffic was to become the largest early force in the network's growth and development. E-mail was to the ARPANET what the Louisiana Purchase was to the young United States. Things only got better as the network grew and technology converged with the torrential human tendency to talk.

Clearly, the development of the Internet led to consequences that were not anticipated at the early stages of technological development.

C. Norms

There are numerous theories about how norms originate. It is beyond the scope of this article to provide an in-depth consideration of each. Thus, the "esteem theory" of norm origin will be used as an example for several reasons. First, its framework provides a clearer example of how complexity theory could apply to norm development. Second, it is expressly not all-encompassing -- it is consistent with, and exists to supplement, other theories of norm origin.

Consider the possibility that people seek esteem: the good opinion or respect of others. This means that an individual's utility depends in part on the opinion that she perceives others to hold of her. The appropriate conditions can cause the desire for others' esteem to produce a norm. An esteem norm will arise in a population regarding some particular behavior if three factors are present: (1) there is some amount of consensus regarding whether engaging in the particular behavior is worthy of positive or negative esteem; (2) there is the possibility that others will be able to determine when someone engages in the behavior; and (3) the consensus and risk of detection are understood within the relevant population. These factors create either a benefit or a cost for engaging in the activity, depending upon whether the activity generates positive or negative esteem. If the consensus in a population is that engaging in a particular behavior is worthy of esteem, a norm will arise if the esteem benefits exceed, for most people, the costs of engaging in the behavior. Conversely, if the consensus condemns the behavior, the norm will arise if most people find the esteem costs to be greater than the benefits from engaging in the activity. This could be considered a description of the internal characteristics of agents -- specifically, their performance systems and credit-assignment mechanisms -- as well as of the interactions between the agents (people) in this system.

The first condition for an esteem norm to arise is a consensus within the population about the esteem worthiness of certain behavior. The existence of this consensus indicates only that individuals have some evaluative opinions about others, rather than being indifferent. In addition to having these opinions, it is necessary that people direct some of these opinions at other people. It is not strictly necessary that the majority of people in the relevant population share the consensus opinion. It could be enough for a large minority to bear strong feelings about behavior, where the numerical majority is indifferent. Without a group of people that hold the opposite opinion regarding the behavior, there would be a net cost to any individual for violating this consensus. Thus, all that is required is that the majority of those who hold an opinion have the same opinion.

It seems likely that, even if beliefs about behavior were initially randomly distributed, there would be some occurrences of consensus regarding a behavior in a population. Because granting esteem is costless to the grantor, it makes sense for the individual to grant esteem in ways that reinforce beneficial behaviors and punish harmful behaviors. When a substantial majority of those with an opinion perceive that a behavior imposes positive or negative externalities upon them, selfish behavior regarding esteem allocation can produce a consensus. For example, if all homeowners consider being surrounded by well-kept houses beneficial to them, the neighborhood consensus will favor yard maintenance. Group discussion can also help produce a consensus. This may simply be selfish esteem allocation at the level of the meta-agent (the larger group of individuals), where the group becomes convinced a behavior hurts or helps it, or it could involve the argument, in a community that shares certain values or morals, that certain activity must be condemned or praised based upon that pre-existing, common belief. Exit is another means of creating consensus. Once some less-than-complete consensus is created in a population, members who wish to act in a contrary manner may leave the group in search of other like-minded individuals. Thus, the consensus in both the group they leave and the group they ultimately join is reinforced by this activity.

The second condition is an inherent risk that anyone who engages in the behavior at issue will be detected by the individuals in the population. This detection can come through various means. First, while pursuing various interests, information about others can be acquired accidentally, especially when the behavior occurs in public. Further, independent of an intent to enforce a norm, an individual will sometimes invest in detecting the behavior of those who harm her interests. Thus, because the esteem norm may initially arise due to selfish esteem allocation, individuals may invest in determining whether the related norm is being violated out of the same selfish motives. In general, when an individual suffers from the conduct of another, she has a reason to invest in detecting it. Once the information is acquired, an individual can then costlessly withhold esteem in addition to pursuing any personal remedies to the situation. This information can then be passed on to the larger group in the form of gossip. As long as there is a sufficient supply of gossips in the relevant population, the risk that an individual will discover deviant behavior creates a related, substantial risk that a large number of individuals will learn about the behavior as well.

The third condition necessary for the development of an esteem norm is that the consensus regarding a behavior and the risk of detection of deviant behavior are known within the relevant population. If people were ignorant of the consensus, or of the risk that their behavior would be detected, they would freely act contrary to the consensus because they would not recognize the potential costs.

Assuming, as discussed above, that individuals desire esteem and the three necessary conditions exist, then anyone who violates a consensus incurs a cost. This cost will be the probability of detection multiplied by the value of the esteem that is lost upon detection. If most people prefer to follow the consensus rather than bear the cost of esteem loss, this situation describes an esteem-based norm obligating individuals of the relevant population to follow the consensus on this activity. Because esteem is valued by individuals, but costs nothing for an individual to grant or withhold, there is no barrier to norm formation due to the costs of supporting or deterring relevant behavior. This also means there is no incentive to free ride: because allocating esteem is costless, one may as well allocate it selfishly -- to discourage behavior from which one suffers, like littering, or to encourage beneficial behavior, like recycling.

There are effects associated with esteem norms that magnify the power of esteem sanctions, due to the fact that esteem is a relative good. First, a feedback effect among people will lead to increasing compliance with the norm. As individuals compete for esteem, the cost of deviance from the norm increases as compliance increases. Thus, as the norm becomes more entrenched, the cost of deviation becomes higher, leading to even further entrenchment of the norm. Similarly, the status gained from compliance with the norm decreases as compliance increases. This may lead individuals seeking to achieve or maintain high status to push toward higher levels of norm compliance than were previously expected. Thus, competition for relative esteem can lead an initially weak norm to become very demanding.
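The ratchet can be sketched numerically: if the esteem cost of deviance grows with the share of the population already complying, an initially weak norm can climb to near-universal compliance. The agents' private thresholds and the linear cost function are assumptions of the sketch.

    import random

    # A sketch of the esteem feedback effect: each agent complies when
    # the esteem cost of deviance exceeds her private threshold, and that
    # cost rises with the compliance rate itself. The 1.5 multiplier,
    # standing in for competition over relative esteem, is an assumption.

    random.seed(1)
    thresholds = [random.random() for _ in range(1_000)]

    compliance = 0.30                   # an initially weak norm
    for _ in range(50):
        esteem_cost = 1.5 * compliance  # deviance costlier as norm spreads
        new_compliance = (sum(t < esteem_cost for t in thresholds)
                          / len(thresholds))
        if abs(new_compliance - compliance) < 1e-9:
            break
        compliance = new_compliance

    print(f"compliance ratcheted up to {compliance:.2f}")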

The second effect leads esteem sanctions to establish material sanctioning of conduct. When people disapprove of those who approve of (or fail to condemn) norm violators, secondary norms can be produced obligating enforcement of primary norms through disapproval of primary norm violators. Thus, although esteem norms initially arise because of the ability to costlessly sanction behavior contrary to the consensus, esteem may also explain why parties sometimes bear costs to enforce norms. The pursuit of "hero" status for exceptional norm compliance, coupled with the feedback effect generally, can cause individuals to incur costs inflicting material sanctions on norm violators because of the esteem they receive from enforcing primary norms. Material sanctions are merely the logical culmination of the prior two stages of norm development: just as competition for relative esteem may increase the material cost that members are willing to bear to comply with a primary norm, esteem competition may increase the cost that members are willing to assume to comply with the secondary obligation to enforce the primary norm. Even though it is relatively easy to conceal the fact that an individual is a violator of a secondary norm merely by publicly pretending to comply, this fact is irrelevant: even if the individual only fakes disapproval, she still conveys disapproval publicly and helps create the secondary enforcement norm. In the end, competition for relative esteem can transform a weak behavioral standard into a very demanding one.

D. Law

Fitness of laws is measured in terms of how successful the law is in meeting its goals. The goals of laws are those expressed as the motivation for legislative enactment or judicial decision -- what we might call the law's policy. A law is fit if it achieves its policy. Given a "landscape" of fitness peaks and valleys, it is only possible to observe the "terrain" near the law's current fitness position. Thus, law is in the same boat as biological species -- the only option is to grope around on the fitness landscape searching for higher peaks and trying to avoid the valleys. This problem is heightened when a law is being created in the first place. Choices about the form and substance of laws often are limited by significant, often unanticipated, constraints posed by other types of regulatory realities. "Laws, like biological species restricted to walking their way from peaks to higher peaks, must be pretty lucky to start out on Mount Everest."
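The groping metaphor can be made concrete with local search on a rugged fitness landscape. In the sketch below the landscape is random -- a crude assumption standing in for how well alternative formulations of a rule achieve its policy -- and the point is only that single-step improvement usually stalls on a local peak.

    import random

    # Hill-climbing on a rugged landscape: a "law" is a bundle of N
    # binary design choices; each bundle gets a fixed random fitness.
    # Taking only single-choice improvements typically ends on a local
    # peak, not the global one.

    random.seed(7)
    N = 12
    landscape = {}

    def fitness(law):
        if law not in landscape:
            landscape[law] = random.random()
        return landscape[law]

    def climb(law):
        while True:
            best = max((law ^ (1 << i) for i in range(N)), key=fitness)
            if fitness(best) <= fitness(law):
                return law    # no neighbor is better: a peak
            law = best

    peak = climb(random.getrandbits(N))
    print(f"stuck on a local peak of fitness {fitness(peak):.3f}")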

1. Environmental Law

There have been several attempts to apply complexity theory to law. J.B. Ruhl's discussion of the specific area of environmental law is a particularly good example, and will be used as one demonstration that there are aspects of law that are describable by complexity theory.

In the environmental law context, there is the initial success and eventual failure of tort law as a means of environmental protection. Nuisance law was one of the earliest legal remedies available to control environmental polluters. Before long, however, it became apparent that nuisance law imposed evidentiary impediments -- proof of causation, fault, injury and damages -- that prevented many nuisance suits, particularly when pollution came from multiple sources. Courts were also reluctant to enjoin businesses important to the local economy. As industrialization, and thus the problem of pollution, increased, the shortcomings of nuisance law became further exaggerated. Nuisance law did evolve, to some extent, to respond to these problems. Mass tort litigation, coupled with the rise in strict liability, allowed some of the evidentiary problems to be overcome and made injunctions more likely. Federal common law developed remedies for pollution between states. This evolution took the form of "peak-to-peak" movement that provided only limited gains in the "fitness" of legal remedies for environmental problems as compared to an ideal.

It is difficult to move in "jumps" to higher peaks, rather than continuing to navigate within your immediate area in the co-evolutionary environment. However, the federal command-and-control legislation that arose in the 1970s provided a jump to a new area of the fitness terrain. In the early to mid 1970s Congress enacted or substantially amended numerous environmental regulation and natural resource protection statutes. It is important to consider the "stage-setting" events that made this change possible. The first factor was the rise of environmentalism in the 1960s, when environmental amenities gained status as highly desirable property rights. Second, the judicial expansion of Congress' commerce clause power to facilitate civil rights legislation also allowed for regulation in other areas formerly the sole purview of the states. A further factor may have been Nixon's executive order creating the Environmental Protection Agency, an attempt to take the lead on environmental regulation away from the Democratic Congress. These new laws resulted in preventive regulation, removed the need for private injury as a prerequisite for enforcement and established agencies with power to create further regulations. This new legal regime seemed extremely successful, as if a new, higher fitness peak had been reached. While nuisance law was not eliminated, its niche was substantially smaller after the law and environment evolved to the command-and-control system.

As one would expect for a complex system, the legal and other regulatory environments adapted to deal with the new environmental regulations. "Through competition, cooperation, and co-evolution, some components retreated, some were extinguished, but some adapted to new fitness peaks. As that process unfolded, the fitness landscape of the federal regulatory program itself deformed." There are new demands being placed on the regulatory system to be accountable for concerns such as economic rationality and private property rights.

In the legal context it is particularly important to remember that "[w]hile the complex interactions in a system mean that some of the consequences will be unintended and undesired, it is hard to measure their frequency." In the legal context, "straightforward effects are common and often dominate perverse ones. If this were not the case, it would be hard to see how society, progress, or any stable human interaction could develop."

2. Criminal Law

A second area that evidences clear aspects of complexity is criminal law. William Stuntz has provided an insightful analysis of the complex interactions between criminal procedure and the rest of the legal and political apparatus of law enforcement. Professor Stuntz argues that the rules of criminal procedure must be considered for their role in the larger system of substantive law and politics involved in law enforcement. Specifically, this environment includes three main forces: "crime rates, the definition of crime (which of course partly determines crime rates), and funding decisions -- how much money to spend on police, prosecutors, defense attorneys, judges, and prisons." "Decisions about resources have important feedback effects on what the system looks like. And what the system looks like -- the size and scope of constitutional criminal procedure -- may in turn shape decisions about resources."

High crime rates give prosecutors substantial choice in which cases they pursue. Thus, cases with procedural errors can be replaced with cases that have no such problems. A prosecutor might have ten winning cases on his desk, with enough time to pursue only seven. A legal development that ruined one of the ten cases would simply mean that the prosecutor would have to choose seven cases out of the remaining nine. The result would be a shift in the distribution of cases prosecuted away from the types of cases where the defense claim would be likely to arise.

Further, underfunding of criminal defense counsel imposes such a substantial workload on public defenders that only a limited number of defenses, procedural or otherwise, can be raised. There are many rules in the criminal justice system that could be contested in any given prosecution. These judicially-produced rules can trump the preferences of the majority in Congress or elsewhere. However, by underfunding criminal defense, Congress can create a system in which there will be less enforcement of constitutional criminal procedure. This underfunding leads to at least three effects. First, defense counsel may increasingly encourage their clients to plead guilty, since trying a large number of cases is unaffordable. Second, where a case does go to trial, resource limitations prevent the public defender from spending as much time on jury selection, calling witnesses and pursuing factual arguments. Finally, resource constraints will limit the number and vigor of any objections or pursuit of other constitutional claims. Thus, the appropriate question "is not whether greater regulation of jury selection or police searches and seizures is a good thing in itself, but whether it is worth some loss of enforcement of, say, Miranda or double jeopardy law. Given the enforcement regime, defendants' rights are in competition with each other."

Within this system, the judges and Justices who create the law that determines the rights of defendants and the rules of criminal procedure have little information about crime rates and funding decisions. The expansion of the law of criminal procedure by the Supreme Court and lower appellate courts beginning in the 1960s created new defense claims and arguments. However, this expansion was accompanied by a substantial rise in crime. This meant that prosecutors had many winning cases to choose from, constrained only by their budgets. Prosecutorial discretion, coupled with practical limitations on the number of procedural claims that could be raised, made rules of criminal procedure appear to impose little burden on the system, leading to more regulation of criminal procedure than might otherwise have been the case.

This mistaken view of the effects of the rules they created has led to several actions on the part of courts. The first effect has been for courts to spend little time worrying about the merits of criminal cases. Rigorous attention to the merits is costly and time-consuming. Further, the appellate courts see comparatively few strong claims on the merits, and observe many guilty pleas, leading them to conclude that the system is functioning well. Notably, the courts have developed much more favorable standards of review for non-guilt-related claims than for those on the merits of guilt or innocence. The second effect of the courts' misperceptions about the consequences of their rules is that constitutional regulation of the criminal process increases.

In the areas where the costs are evident to judges, the Supreme Court has moved to constrain these costs. In the area of large-scale drug prosecutions, where the defendants likely have the funds to hire expensive attorneys, the Court has cooperated in making forfeiture remedies effective, limiting the funds available for the defense. Further, Fourth Amendment jurisprudence has allowed for informants and undercover agents in drug prosecutions, despite the increased restrictions on other kinds of investigation. Similarly, although capital murder defendants often have lower-quality, underpaid defense counsel at trial, they may also have high-quality pro bono representation on collateral appeal. The resulting high volume of capital habeas litigation led the Court to cut off habeas relief through procedural default and retroactivity decisions.

Legislators have, in turn, responded to the increased pro-defendant constitutional rules of criminal procedure. As courts have raised the cost of criminal investigation and prosecution, legislators have attempted to reduce these costs through limits on funding for criminal defense. Legislators view criminal procedure doctrines as having reduced the benefit of the marginal dollar of funding for criminal defense, because that dollar will be spent in ways the legislators disagree with. Thus, in a world with more "technicalities" that allow defendants to go free, there is less money for defense attorneys than there otherwise would be. "The law of criminal procedure thus may give defense counsel more arguments to raise -- more arrows in the quiver -- but at the cost of also giving them less time and money to work with -- fewer shots at the target."

Overcriminalization and high mandatory sentences have been another legislative response to increased constitutional criminal procedure. Coupled with underfunding of criminal defense, these factors encourage guilty pleas, which avoid most of the requirements imposed by criminal procedure. The ability to search incident to arrest for traffic "crimes" reduces the cost of the probable cause requirement, and sodomy laws reduce the cost of the beyond-a-reasonable-doubt standard of proof in many rape cases. Interestingly, it is in the areas of funding and substantive criminal law that the courts have most deferred to legislatures.

Thus, the response of prosecutors and legislators to rules of criminal procedure, given the number of criminal cases available, serves primarily to circumvent these constitutional protections. Because many of the problems are not observed by the courts, they proceed to further develop rules of criminal procedure, leading to a repeated response by legislators and prosecutors, in a self-reinforcing cycle.

III. Interactions of Forces

Examining the complex aspects of each regulatory force individually still does not allow for many conclusions about the effect of the kind of indirect regulation being proposed in the context of the Internet. The nature of the interactions among the regulatory forces thus becomes important to the ultimate outcome of any regulation. This includes not only the expected adaptation of code to any legal regulation, but also any secondary effects occurring in the context of other regulatory forces.

A. Primary Effects

The primary effects of law's regulation of code will be the impact of this regulation on the evolution of code, due to the element inserted by law, and the impact on the evolution of law, due to the existing legal regulation.

1. Effects on Code

First, it is important to recognize that, because the code implemented by law did not arise naturally from the system, there is necessarily no relation between the code and the system's prior location in its fitness landscape. Thus, the new technology may place the system in a superior landscape (a higher set of peaks), an inferior landscape (lower peaks, or even valleys), or a landscape that is substantially the same. Ultimately, it is difficult to predict ex ante where the system will end up -- particularly if there is no consideration of this problem prior to the implementation of the new code.

Second, as in any complex system, the agents, meta-agents and the environment itself adapt to changes in any of the other elements. The intervention of law to change or specify certain elements of code will result in the adaptation of other elements in the system. Indeed, substantial adaptation is likely, given that the element inserted into the system was not anticipated by the internal models and prediction mechanisms used by agents to govern their behavior. Further, changes or additions of new elements in complex systems often lead to the creation, and filling, of niches. Thus, the fellow agents that existing agents find themselves surrounded by, and competing with, could be vastly different from those present prior to the law-mandated code change.

Third, the changes caused by law will propagate and expand in such a way that their ultimate impact could reasonably be expected to be substantially greater than their initial impact. As mentioned previously, the niche-creation aspects of new elements and agents in a complex system will lead to effects that propagate through time, as the niches created by the law-mandated code are filled, causing the creation of further niches, and so on. The building-block nature of agents means that this new building block could be built into meta-agents, influencing how those agents function. If the code does not rise to the level of being an agent, but is merely a performance rule, it may similarly serve a building-block function if later rules are based in part on the reproduction of other rules with the new rule, or on mutations of the rule. All of these factors indicate that even a small change in code can, through time, impact substantial portions of any complex system. This will make it nearly impossible to undo such a rule, were a change to be desired at some point in the future.

These three points highlight the effect that legal regulation of code can be expected to have. Obviously, it can be expected to result in a change in code -- both the immediate, intended change, and the later changes to code that propagate through the system as a result of co-evolution in complex systems. Thus, the code regulation may have the effect of regulating as desired, but also yield numerous unanticipated types of regulation that are due to the adaptation of the complex system.
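These dynamics can be made concrete with a toy model. The sketch below is purely illustrative -- the landscape, its size, and the hill-climbing rule are assumptions chosen for brevity, not a model of any actual technology. It places a system on a random fitness landscape, lets it adapt to a local peak, and then simulates a law that mandates and freezes one element of the configuration, after which the system re-adapts around the mandate.

```python
import random

N = 12  # bits in a configuration (illustrative size)
random.seed(42)

# A random fitness landscape: each configuration gets an arbitrary score.
_cache = {}
def fitness(config):
    if config not in _cache:
        _cache[config] = random.random()
    return _cache[config]

def neighbors(config, frozen=()):
    # Configurations reachable by local adaptation: one bit flip,
    # except for elements frozen by regulation.
    return [config[:i] + (1 - config[i],) + config[i + 1:]
            for i in range(N) if i not in frozen]

def hill_climb(config, frozen=()):
    # Adapt locally until no single change improves fitness (a local peak).
    while True:
        best = max(neighbors(config, frozen), key=fitness)
        if fitness(best) <= fitness(config):
            return config
        config = best

start = tuple(random.randint(0, 1) for _ in range(N))
peak_before = hill_climb(start)

# A law mandates one element of the configuration, regardless of fitness:
# flip bit 0 and freeze it so adaptation cannot undo the mandate.
mandated = (1 - peak_before[0],) + peak_before[1:]
peak_after = hill_climb(mandated, frozen=(0,))

print("fitness at the old peak:    ", round(fitness(peak_before), 3))
print("fitness after re-adaptation:", round(fitness(peak_after), 3))
```

Across runs, the re-adapted system may settle higher or lower than the original peak; the point is only that the outcome bears no systematic relation to the regulator's intent, which is the unpredictability described above.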

2. Effects on Law

There will also be an impact on the law itself as a result of the regulation. Because one regulatory approach was chosen, other approaches may be abandoned or never considered. Further, the use of a technology to solve one problem, or the particular specifications used to align the policy objective with the result of the law, may foreclose other options for indirect regulation through code, whether for this policy objective or for others. Thus, a law regulating through code may foreclose or limit other options available to the law.

B. Secondary Effects

It seems clear that the regulatory forces do not act independently of one another, as if in a vacuum. Consider the example of cigarette smoking. There are certainly strong norms in much of American society today in opposition to smoking. These norms arose in part because of the harmful effects smoking has on everyone's health (architecture), as well as concern about the cost to society of treating all the health problems that smoking causes (market). Law responds by sponsoring ads against smoking (norms), passing laws keeping smokers away from places where they could cause harm through second-hand smoke (architecture), and levying substantial taxes on cigarettes (market). Cigarette manufacturers may respond to the decrease in sales (market) through increased advertising (norms), efforts to lobby the government (law), and research and development to make cigarette production cheaper (market) or to make cigarettes more addictive (architecture). Private individuals may respond with lawsuits (law). These lawsuits may reveal the cigarette companies' tactics, giving the government the opportunity to attack cigarette companies with lawsuits of its own and restrictions on advertising (law), as well as providing useful messages for its anti-smoking ads (norms). Even from this brief discussion of one example, it is clear that there are substantial interactions among these regulatory forces.

It is reasonable to expect changes in code to produce some effect in the other regulatory forces as well. Just as a change in code alters the fitness landscape of code itself -- leading the rest of the system to adapt, and building the change into future generations of the system in non-obvious ways -- similar changes can be expected in all areas of regulation. While the extent to which any of these effects will occur in a given regulatory force for a given code change is certainly unclear, it is likely that some changes will occur.

Thus, in evaluating the success or failure of a given code regulation, it is at least necessary to consider the resulting regulatory regime in the aggregate. It is not enough to consider whether the code change appears to be having the intended result. The entire effect on the regulation of the individual through law, norms, code, market and architecture must be considered. For example, a law mandating a government-accessible back door in all encryption products may successfully give law enforcement access to substantial amounts of data, but do nothing to further the ultimate goal of the regulation -- law enforcement access to evidence of a crime -- if the U.S. share of the market for strong cryptography is small enough that criminals can readily obtain products with no back door, or if criminals use some other technology to communicate criminal plans. Similarly, looking only at the criminal deterrence effect of Head Start programs might lead to limiting access to the program to boys, or to eliminating the program. However, there are other, substantially beneficial social results of the program that may more than justify its existence, regardless of the original motivation for its inception.

C. Results of these Problems

These complex interactions of regulatory forces lead to several problems facing any regulation of an individual through direct regulation of code. First, the intended result of the regulation may not be achieved, because the code and the other regulatory forces may adapt in ways that undermine the goal. The alterations in law, code, market and norms may result in a regulatory system whose net regulatory effect is in direct opposition to the intended goals of the regulation.

Second, even if the primary goal is achieved through the regulation, the adaptations of the systems of regulation may lead to other problems that are worse than the problems that led to the regulation. This is simply a permutation of the first problem. The complex adaptations of the modes of regulation due to code changes need not be limited to aspects that relate only to the policy goal. It would be entirely consistent for the negative regulatory changes discussed in the first problem to affect a variety of other areas as well.

Third, the nature of the U.S. judicial system poses problems in the context of code regulation. A minor problem could result from different outcomes of challenges to the code regulation in different jurisdictions. The technology of the Internet is largely international in nature; requiring the code to reflect the varying standards of various jurisdictions would be difficult given the current technological framework. A more serious problem results from the fact that it may be years before the Supreme Court invalidates an unconstitutional statute or practice. By then, substantial time will have passed, and the code may have become inextricably integrated into the technology of the Internet. Thus, even if the statute mandating the inclusion of a particular portion of code is held unconstitutional, this may be irrelevant if it would be impossible, or at least burdensome, to remove the code fully from the system. Further, adaptations will have occurred due to the inclusion of the code that leave the system in a different state than it would have been in had the code never been included.

Another effect of the current technological difficulty of implementing a code requirement selectively by jurisdiction is the difficulty of identifying the true source of code regulations. Users of the Internet in foreign countries will likely be subjected to the regulation. There is nothing surprising about this fact -- it occurs regardless of the role of law in controlling code. However, when certain elements of code are adopted for purely political reasons, it might be reasonable for these regulated parties to desire a say in the process. Yet the source of these regulations may not be readily apparent. These parties may then attempt to represent their interests to organizations that have a role in Internet regulation but are not, in fact, the party responsible for the code regulation. This could be particularly problematic for the technological development of the Internet. No entity currently involved in the development of Internet technology has the ability to enforce the technical standards it develops. Thus, there could easily arise competition among standards organizations and among governments to have various technologies implemented, leading to confusion and the inability to adopt a standard.

Although the problems associated with complexity theory can be present in any type of regulation involving complex systems, there may be reasons to believe that these problems will be greater given the current nature of the Internet. Although the Internet has undergone substantial growth and advancement in relatively few years, it is still a new medium with the potential for much more development ahead. This means that any code regulation currently implemented will have an impact that is greater in both breadth and depth than indirect regulation utilizing more developed forces such as the market or norms. The fact that the Internet is young may also mean that there are few or no aspects that have "settled down" into a linear, ordered system. Unlike the market, for example, which has substantial linear elements that will not be susceptible to the complex effects of adaptation or co-evolution, the entire Internet will react in a complex way to the inclusion of the new code.

IV. Hypothetical Examples

It is clearly not possible to fully predict, or even observe, the full effects of any code regulation, particularly one that has not yet been implemented or has only recently been implemented. However, attempting to do so even in a cursory way for several examples can be helpful in demonstrating the most obvious potential problems with regulation in complex systems.

A. Digital Telephony Act

One example of code regulation raised by Lessig is the Digital Telephony Act. This act was an attempt to ensure that new digital telephone networks would be wire-tap accessible. The law required the adoption of a type of digital network technology that facilitated law enforcement access.

One apparent benefit of the law is the limitation of competition between digital telephone network standards. After the Digital Telephony Act, the types of digital network technologies realistically on the table became much more limited than they were before. To the extent that there might otherwise have been a standards war between non-interoperable standards, the law was helpful in avoiding these costs.

However, as with any law making a specific choice regarding a technology or some element thereof, it is susceptible to the potential for lock-in. New technologies may be developed that would be technologically superior to the current type of digital network, and other technological advances may render "traditional" wiretapping of digital networks unnecessary or irrelevant. Nevertheless, because of the uncertainty of compliance with the law, the mandated technology is more likely to remain in place than it would be in a free market.

Further, although the government currently claims that the act will not result in a substantial increase in wiretapping, there is no basis for assuming that this will continue to be true in the future. As criminals use new technologies to evade law enforcement techniques, law enforcement may be tempted to push the current law to its limits to make up for the loss in evidence, rather than pursue new options. This is an undesirable result because the current policy discussion about the costs and benefits of this regulatory approach is based on the law enforcement claim of limited use. In the future, however, the law could come to authorize activity of a quantity that might not have been approved initially, and this expansion would occur without further analysis by the public or the legislature.

B. Communications Decency Act

Another example of attempted code regulation is the Communications Decency Act ("CDA"). The act banned indecent speech on the Internet generally, while granting the right to engage in indecent speech if a reasonably effective screening technology is used. Lessig argues that these two parts should be read together as an indirect regulation of code; effectively mandating a technology that facilitates discrimination based on age. This was, of course, struck down in Reno v. ACLU, but if it had not been, one could imagine a number of negative effects the law might have had.

First, those wishing to engage in indecent speech who invested in technologies with the required screening capability would have experienced a path-dependence effect. That effect could have stifled the development or acceptance of new, more efficient technologies that would minimize the burdens of the age-verification process on users. Having already invested in the old technology, speakers would have been unlikely to expend the resources to switch to the new technology as readily as might be considered optimal from a social welfare standpoint.

Second, companies may have chosen to avoid application of the law by exploiting the ambiguity in jurisdiction on the Internet and moving the location of their web site to a non-US server, or having the site run by non-US businesses. Not only would these businesses have evaded the law, but the United States would have lost the opportunity to exert control over them through other means, such as direct legal regulation.

Third, the Internet does not currently have a good architecture for ascertaining identity generally, and the CDA would have done little to address this general problem. Thus, it might have been relatively easy to violate the law without suffering any consequences, simply by exploiting these weaknesses in the identity infrastructure. Because the target of the legislation was the operator of a web page, rather than a "bottleneck" point such as an ISP, it would have been difficult to determine the identity of anyone wishing to violate the law. In fact, the good Samaritan provision, which survived the Court's decision in Reno v. ACLU, would potentially have removed the ISP as an option for enforcing the requirements of the CDA.

Fourth, by focusing on screening of children, and thus targeting for prosecution those sites that do not engage in screening, the opportunity to prosecute on the basis of content might have been lost. By simply complying with the CDA's screening requirement, sites distributing, for example, child pornography, might have gone substantially undetected because law enforcement would be focusing its efforts elsewhere.

Finally, the perception of the Internet as a "child-safe" area after the CDA could have led to much less parental supervision of children's activities on the Internet. Yet this supervision remains important to ensure, for example, that children are not engaged in inappropriate chat room conversations. Further, there is a substantial amount of material on the Internet that may not be indecent, yet may be inappropriate for children, depending on their ages. Various medical databases, historical accounts of wars or databases of horror stories may be inappropriate for certain children, yet the CDA did nothing to address access to this content.

C. The V-Chip

The third example of code regulation is the requirement of V-chip development to allow blocking of television programming consistent with some rating system. This is a regulation of code that will allow content discrimination by viewers based on a number of factors.

One potential negative effect of the V-chip and the rating system is that programs with certain rating types might come to be considered preferable. This could lead to the intentional inclusion of certain elements in a program to ensure it falls within a desired category, even when, in a category-free world, those elements would not have been included. The effect would be either to foreclose more programs from the range available to viewers at certain discrimination levels, or to increase viewing of this type of material by those who wish to see programming but would otherwise have a preference against such content.

Another obvious consequence is similar to one noted in the CDA discussion -- over-reliance by parents on technology, leading to inadequate monitoring of their children's viewing. The same pitfalls discussed in the CDA example are present here.

Further, based on the assumption that users will be able to screen undesirable content, a wider range of content may become available during an increased percentage of the day. This means that, if a child's parents have not enabled the screening mechanism, both the child and all of his or her friends will have full access, not only to the types of content currently available, but also to new content that is even less appropriate for children. This is clearly no better than the current system, but, when coupled with the potentially lessened monitoring by parents, may in fact be worse.

Finally, by ceding control in this area to technology and individual viewers, the government may be yielding some of its own power to regulate content legitimately. Whether this is intentional is unclear, but some justifications for broadcast regulation may be lost through the ceding of power to code, despite the fact that, for public policy reasons, it might be good for the government to retain some authority in this regard.

V. Relative Effects of Complexity on Changes in the Regulation

Although earlier discussions highlighted the problems complexity theory poses generally to all modes of regulation, certain modes may in fact present comparatively greater problems than others. Specifically, the flexibility or adaptability of a regulation imposed in a particular mode can affect the magnitude or scope of the potential harm to the system. Both norms and the market seem quite flexible. If a regulator intentionally attempts to regulate by creating or supporting a norm, by affecting the natural world, or by subsidizing a particular element in a market, the systems can deal with these regulations in a way that minimizes any potential harms. Through the ordinary process of the death of unfit elements, an unfit regulation can be killed. Similarly, a less-than-ideal regulation can, through mutation and reproduction, lead to future permutations of the regulation that may be fit for the system. Thus, a bad regulation can be eliminated and an average regulation can be improved.

In the case of law and code this ability is, in many cases, absent. There is certainly some ability to improve a law through judicial interpretation; however, such "judicial activism" may be unlikely in many cases, and may not go far enough to improve the law. "Death" of an unconstitutional law is also possible, but that death bears no relationship to the effectiveness or fitness of the law for a given purpose. Future legislative activity may also remedy the problem, but politics may prevent this from occurring as often as is necessary. Similarly, once code is implemented, it may not be a trivial matter to remove or alter it given a sufficient installed base. Thus, although the systems of law and code will likely adapt and evolve around, and as a result of, the new regulation, it is relatively unlikely that much can be done to adapt or evolve the regulation itself. This can mean the persistence of inadequate or entirely unfit regulations in a system into the future.
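The asymmetry can be illustrated with a minimal selection-and-mutation sketch. Everything in it is an assumption for illustration -- a regulation is reduced to a single number, and its "fitness" to its distance from an arbitrary ideal point -- but it shows how a population of flexible regulations improves over generations while a rule frozen at enactment cannot.

```python
import random

random.seed(7)

def environment_fit(rule):
    # Hypothetical fitness: how well a rule (a number in [0, 1]) suits
    # an environment whose ideal point happens to be 0.8.
    return 1.0 - abs(rule - 0.8)

# Norms/market: a population of rules subject to selection and mutation.
population = [random.random() for _ in range(20)]
for generation in range(50):
    population.sort(key=environment_fit, reverse=True)
    survivors = population[:10]                       # unfit rules die off
    offspring = [min(1.0, max(0.0, r + random.gauss(0, 0.05)))
                 for r in survivors]                  # mutated copies
    population = survivors + offspring

# Law/code: a single rule fixed at enactment, unable to mutate or die.
frozen_rule = 0.3

best = max(population, key=environment_fit)
print("best evolved rule fitness:", round(environment_fit(best), 3))
print("frozen rule fitness:      ", round(environment_fit(frozen_rule), 3))
```

If the environment's ideal point itself drifted over time, the evolving population would track it while the frozen rule fell further behind, which is the persistence problem just described.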

It is worth noting that this need not be the case, however. Greater flexibility could be given to law in ways that would allow for the needed adaptations and improvements. Such flexibility could come from leaving substantial aspects of a regulation to be determined and implemented on an ongoing basis, as can be done through an administrative agency, or from leaving much of the law to be decided by the judiciary, as was done with the Sherman Antitrust Act. Increased flexibility could also come from a doctrine of desuetude, eliminating unenforced regulations as "dead."

Similarly, alternative methods of network design could put code in an improved position to allow the adaptation of an inadequate regulation. The "end-to-end" methodology for network design encouraged by Jerome Saltzer, among others, if strictly followed, could go some distance toward increasing the flexibility of code regulations. The "end-to-end" theory of network design provides for an allocation of technological capabilities between the communication subsystem and the rest of the system. The argument against the implementation of particular functions at the basic system level is that:

The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the end points of the communication system. Therefore, providing that questioned function as a feature of the communication system itself is not possible. (Sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement.)

Under a system designed according to this theory, much of the functionality would occur close to the end user. Thus, code regulation consistent with this theory would similarly occur near the user level. In such circumstances there may be enough turnover in the applications and hardware used by end users that there will be greater opportunity for revision or elimination of inadequate code.
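As a concrete illustration of the end-to-end argument, consider data integrity. The sketch below is a simplification under assumed names and interfaces -- it is not drawn from any particular protocol -- but it shows the core point: the sending and receiving applications perform their own integrity check, so nothing in the network core needs to change when the check is revised or removed.

```python
import hashlib

def send(payload: bytes) -> tuple[bytes, str]:
    # Endpoint responsibility: the sending application computes its own
    # integrity check rather than trusting the network to deliver correctly.
    return payload, hashlib.sha256(payload).hexdigest()

def unreliable_transport(payload: bytes) -> bytes:
    # The communication subsystem may drop or corrupt data. An in-network
    # checksum could catch some errors, but per the end-to-end argument it
    # can only ever be a performance enhancement, never a guarantee.
    return payload

def receive(payload: bytes, digest: str) -> bytes:
    # Endpoint responsibility: only the receiving application can verify
    # that the data it got is the data the sender meant to send.
    if hashlib.sha256(payload).hexdigest() != digest:
        raise IOError("integrity check failed; request retransmission")
    return payload

data, checksum = send(b"regulate with care")
print(receive(unreliable_transport(data), checksum))
```

Because the check lives entirely in endpoint software, which turns over far faster than the network's core infrastructure, a mandated or misguided version of it could be revised or abandoned without re-engineering the network -- the flexibility argued for above.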

VI. Suggestions for Governance

The implications of complexity theory have now been identified in the regulatory context. The problems posed by regulation in a complex system lead to conclusions for the actors in any system of governance. First, complexity theory is important in the historical evaluation or analysis of events in a complex system. But a recognition of the complex nature of regulated systems also has more direct lessons for those involved in the creation and development of regulations, including determining when it is appropriate for a governing body to pursue action or inaction.

A. Be Aware of the Historical Functioning of Complexity

The most obvious use of complexity theory for the would-be regulator is in the historical context. While complexity theory itself admits that it can provide no good predictions of the future, it can be useful in analyzing past events. Thus, it is important to consider the complex interactions spawned by a regulation when evaluating whether a regulatory effort is successful, or when considering a future course of action. This can be important in the judicial context as well. For example, it would be inconsistent with the purpose of a law for it to be enforced in situations where the complex nature of a system has left a party in a state much different from the one anticipated at the time of the law's creation. Thus, the flexibility currently lacking in the legal mode of regulation may be able to come, at least in part, from some increase in the flexibility of statutory application.

B. Be Prepared to Act

Despite the pitfalls of regulating in a complex system identified above, it should not be concluded that action is never appropriate, or that action is irrelevant because of the uncertainty of outcomes. Regulation still has an important role to play. However, the active implementation of regulations must be done with an awareness of the complexity facing the regulator. The steps of this process include weighing the importance of a policy objective, understanding the true source of the problem and finding the appropriate regulatory means to implement the policy.

1. Weigh the Policy

This means, first, weighing the value of implementing the particular policy goal, given the inherent uncertainty in the relationship between a policy goal and the ends actually achieved. This may mean that, in general, only the more pressing policy needs are implemented as actual regulations, while policy goals of only moderate importance are rarely translated into regulations. "Food stamps have greatly diminished hunger, public-health measures have decreased disease, and the federal highways have vastly increased commerce and travel, although at the expense of a number of undesired and at least initially unexpected side effects." While the value of these policy goals may continue to justify the associated regulations, other regulatory activities may needlessly impose regulations where there is reason to believe that the system may achieve a reasonable result on its own. For example, there has been recent legislation to remove any authority or jurisdiction of the FCC or the States to regulate the prices paid by subscribers for Internet access or online services. While this policy is undoubtedly important to many people, it is not clear that it rises to the level of importance that would justify stripping the FCC or the States of their ability to regulate in this regard. This may be an area where inaction was the proper choice.

2. Identify the Source of the Problem

The second element of this consideration is identifying the correct source of the problem, or the correct pressure point for the exertion of regulatory force. While this may be difficult to do, it is of extreme importance in complex systems. Even small interventions in a system can expand through time to have a substantial impact on it. Thus, it is critical that this potentially substantial force be focused on the correct problem. A good illustration of this type of failure occurred during U.S. involvement in the Vietnam War. The United States had provided helicopters to the South Vietnamese army. When this support failed to result in a continued advantage for the South Vietnamese, no one considered how the Viet Cong had countered the use of helicopters. Instead, the American military increased helicopter usage to conduct the same operations with increased intensity. This also affected the South Vietnamese army by leading it to rely on helicopters as a crutch rather than engaging in sustained patrolling. Thus, the failure to identify the true problem to be solved not only resulted in a failure to solve the problem, but led to other unintended, negative consequences as well.

3. Determine the Correct Regulatory Action

Finally, once a problem deserving of regulatory action has been identified, the appropriate regulatory methodology must be implemented. These methodologies may include: the utilization of experts in computer modeling, in addition to traditional sources of information; regulation in ways that take advantage of the "natural selection" characteristics of complex systems; a regulatory approach that provides a framework that both constrains actors and provides them with new opportunities to act; and the choice of a regulatory approach that takes advantage of the rich set of regulatory modes available.

The study of complexity theory has made substantial use of computer models and simulations. While computer models are necessarily limited in their predictive capabilities, they have proven to be valuable tools for complexity theorists. Thus, policymakers should turn to these experts, in addition to traditional sources of information, for insight into the expected outcomes of particular regulatory methodologies.
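To make the suggestion concrete, the following is a minimal sketch of the kind of agent-based simulation a modeler might run before a rule is enacted. All of the parameters -- the detection probability, the penalty, the imitation rule -- are assumptions invented for the example, not estimates of any real enforcement regime.

```python
import random

random.seed(1)

PENALTY = 2.0    # assumed sanction if an evader is caught
DETECTION = 0.3  # assumed probability an evader is caught
BENEFIT = 1.0    # assumed private gain from evading the rule
COST = 0.4       # assumed cost of complying with the rule

def payoff(strategy):
    if strategy == "comply":
        return -COST
    caught = random.random() < DETECTION
    return BENEFIT - (PENALTY if caught else 0.0)

# 100 agents; each generation, a fraction of agents imitates whichever
# strategy earned more on average -- a crude model of social adaptation.
agents = ["comply"] * 50 + ["evade"] * 50
for generation in range(30):
    earnings = {"comply": [], "evade": []}
    for a in agents:
        earnings[a].append(payoff(a))
    avg = {s: sum(v) / len(v) if v else float("-inf")
           for s, v in earnings.items()}
    better = max(avg, key=avg.get)
    agents = [better if random.random() < 0.1 else a for a in agents]

print("compliance rate after adaptation:",
      agents.count("comply") / len(agents))
```

With these assumed numbers the expected payoff of evasion (0.4) exceeds that of compliance (-0.4), so compliance erodes over the generations; varying DETECTION or PENALTY shows where the rule becomes self-sustaining, which is precisely the kind of ex ante insight policymakers might seek from modelers.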

Further, in the discussion of the complex nature of the regulatory forces it was seen that the likely negative impact of regulation may be heightened when the substance of the regulation is less flexible. Thus, in general it may be preferable to regulate through the market or norms, since bad regulations in those areas could be changed to create superior regulations, or allowed to die off through natural processes. To the extent that law and architecture are utilized, every effort should be made to allow for flexibility in the systems of law or architecture so that the positive effects of adaptation will be present.

International institutions "specify 'rules that constrain activity, shape expectations, and prescribe roles,'" and may offer insights into specific approaches to regulation in complex systems. In such an institution the participants influence the characteristics of the system based on their individual interests, while the institution exerts a reciprocal influence on its members. "Thus institutions take on lives of their own, evolving and expanding, not necessarily because actors consciously modify them but because, . . . 'prior institutions create incentives and constraints that affect the emergence or evolution of later ones.'" A solution that initially appears "imperfect or incomplete [] may gain influence and effectiveness as it is used. Institutions can constrain conflict and foster cooperation in ways that are not immediately apparent." The Concert of Europe was such an institution; it can be viewed as an attempt to establish an international regulatory system to further the peace and stability of Europe. The Concert not only had the general characteristics of a good international institution, but chose a methodology of internal regulation that combined multiple regulatory techniques. In this setting the rules of the regulatory system served both as resources and as constraints. The Concert's regulations not only restricted the activities of the members, but facilitated cooperation and communication. The Concert thus provided a focus for member interactions and a limitation on the scope of their harms.

The discussion of the Concert of Europe highlights several important points for the regulator of a complex system. An attempt should be made to structure the regulatory system so that it primarily establishes a framework in which the regulated entities are able to act. The framework should both constrain the actions of the regulated entities and encourage beneficial interactions among the members themselves and between the members and the system. Further, multiple regulatory forces should be used in concert to achieve a regulatory end. This could mean that any regulatory action by a given regulatory mode (law, norms, architecture or market) need not be large at all, both because small effects in complex systems can produce large outcomes, and because the regulation is spread out among the different modes. The adaptability of the substantive regulations should allow a regulation to flourish when it is well suited to the "fitness terrain" in which it is placed for a given regulatory mode, and to die off when it is poorly suited to that environment. This is not to say that a "shotgun" approach is always the appropriate method of regulation, but only that no regulatory mode need be neglected when structuring a regulatory system.

C. Be Prepared to Pursue Inaction

The choice of inaction by a governing body in an area where it has authority to regulate is appropriately viewed as a regulatory choice as well. It is certainly not the case that, when there is no active regulation by a governing body, there is no regulation at all. Rather, regulation occurs through the natural development of the market, architecture, norms and law that already exist. Thus, a body governing a complex system is responsible for the regulations its citizens face in an area where the government has authority, regardless of its course of conduct. Once inaction is seen as a regulatory course of conduct, several results follow. The appropriate setting for inaction must be identified just as the appropriate setting for action was identified, and a governing body may legitimately choose inaction in pursuit of its regulatory authority.

In some sense, the evaluation of whether inaction is the appropriate choice could be seen as the same inquiry as whether action is warranted. However, this characterization of the question can be useful when considering further intervention in a system where a governing authority has already taken regulatory action. The monitoring of the system that must occur to determine whether a particular regulatory activity is fulfilling its intended goals may lead to the conclusion that nothing has yet occurred. However, this may not be cause to attempt additional or corrective regulatory action. First, with sufficient time, the "seeds" of regulation that were planted in the various regulatory modes may grow into adequate regulation. To provide additional regulation could be to overcompensate, or to damage the system in some other way. Further, the actual results of the regulation should be considered. If they happen to be beneficial, no corrective action may be warranted, provided the desirability of the achieved outcome is sufficient to defer pursuit of the original goal until another means can be found. Thus, inaction in a complex system is much different from an argument in favor of the status quo. Inaction may instead mean allowing for the continued development of the regulatory system through complex processes.

VII. Conclusion

The recognition of multiple regulatory modes is a necessary first step in any consideration of regulation. However, the complex nature of these forces and their interactions can have considerable impacts on regulation. Thus, the tendency to pursue regulation through a particular regulatory mode that has superficial appeal -- in this case, code -- can lead to unexpected negative outcomes without the further analysis required by complexity theory. In fact, complexity theory shows the value of greater flexibility in substantive regulations than currently seems feasible with code or even law. Finally, the consideration of a number of new aspects of a regulatory approach may well be necessary to fully effectuate a policy objective through regulation in a complex system.