Self-organizing systems

Post date: 05-Mar-2012 00:59:27

Herbert Simon noted that a large proportion of the complex systems we observe in nature (systems with many parts and non-simple interactions) exhibit hierarchical structure (Simon 1962). These hierarchical structures are composed of interrelated subsystems which, in turn, have their own internal hierarchic structure. The components of these structures are distinguished in terms of the intensity of interaction: the interactions between components are much weaker than the interactions within each component. Simon referred to such systems as “nearly decomposable”, in recognition that a rough description of these systems need only refer to the interactions between components, without describing the inner workings of each component. It is also notable that only a very small set of building blocks is typically involved in the construction of a vast array of natural systems.

Hierarchical structures are to be expected in natural systems which have evolved through random processes. The random jostling of atoms will frequently produce small molecules, but the probability of instantaneously producing a large complex structure, such as an ant, is infinitesimal. However, if the small, randomly produced molecules are stable, then further random jostling will regularly combine a few of these small stable molecules into a larger structure. The time required for the evolution of a complex form from simple elements depends critically on the number and distribution of potential intermediate stable forms. Simon drew these conclusions from a broad range of complex natural systems, spanning cosmology, chemistry, biology, sociology, physics and genetics.
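The advantage of stable intermediates can be seen in a minimal Monte Carlo sketch (all parameters here are illustrative, not from Simon) comparing two ways of building a 16-part assembly when each elementary step risks an interruption that scraps the unfinished unit:

```python
import random

def steps_for_unit(n_parts, p_interrupt, rng):
    """Elementary steps needed to attach n_parts pieces in a row,
    when an interruption (probability p per step) scraps the unfinished unit."""
    total, done = 0, 0
    while done < n_parts:
        total += 1
        if rng.random() < p_interrupt:
            done = 0          # the unfinished unit falls apart
        else:
            done += 1
    return total

def monolithic(p, rng):
    # all 16 parts must be attached without ever losing the work in progress
    return steps_for_unit(16, p, rng)

def hierarchical(p, rng):
    # four stable 4-part modules, then four steps to join the modules;
    # an interruption only destroys the unit currently under construction
    steps = sum(steps_for_unit(4, p, rng) for _ in range(4))
    return steps + steps_for_unit(4, p, rng)

rng = random.Random(0)
p, trials = 0.25, 2000
mono = sum(monolithic(p, rng) for _ in range(trials)) / trials
hier = sum(hierarchical(p, rng) for _ in range(trials)) / trials
print(f"monolithic: ~{mono:.0f} steps, hierarchical: ~{hier:.0f} steps")
```

With these numbers the monolithic build takes roughly an order of magnitude more steps on average, and the gap widens rapidly as the interruption rate or the assembly size grows.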

The stability of sub-assemblies can be expressed in terms of the point of equilibrium between the concentrations of components and of sub-assemblies. Unstable sub-assemblies rapidly decompose back into components, while stable sub-assemblies will tend to capture the majority of the available components. When the point of equilibrium is reached, the relative concentrations of components are maintained by continuous formation and dissolution of individual sub-assemblies. In chemical reactions, the presence of a catalyst hastens the progress towards equilibrium between reagents and products without actually altering the point of equilibrium. If the sub-assemblies are in some way removed or consumed by a subsequent reaction, then more of the original components will be consumed to reattain the equilibrium.
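Both equilibrium claims can be checked numerically. The sketch below uses a simple Euler integration of mass-action kinetics for a hypothetical dimerization 2A ⇌ D (all rate constants are invented for illustration): scaling both the forward and reverse rates, as a catalyst does, leaves the equilibrium point unchanged, while an irreversible downstream reaction that consumes D steadily drains the component pool:

```python
def equilibrate(kf, kr, k_out=0.0, total=1.0, dt=0.01, steps=20000):
    """Euler integration of mass-action kinetics for 2A <-> D.
    k_out models a subsequent reaction that consumes D irreversibly."""
    A, D = total, 0.0
    for _ in range(steps):
        net = kf * A * A - kr * D          # net rate of dimer formation
        A -= 2 * net * dt                  # two components per dimer
        D += (net - k_out * D) * dt
    return A, D

A1, D1 = equilibrate(kf=1.0, kr=0.5)             # baseline reaction
A2, D2 = equilibrate(kf=10.0, kr=5.0)            # "catalyst": both rates x10
A3, D3 = equilibrate(kf=1.0, kr=0.5, k_out=0.2)  # D consumed downstream
print(A1, A2, A3)
```

The first two runs settle at the same component concentration (the catalyst only changes how fast equilibrium is reached), while the third run ends with far fewer free components, since the drain on D keeps pulling more of them into sub-assemblies.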

Kauffman further points out that each new component type enables the formation of yet more new component types (Kauffman 2002). Kauffman coined the term “adjacent possible” to refer to those components that are just one step away from the currently existing component set. There is a constant “pressure” for the adjacent possible components to be formed, because their concentration is zero and equilibrium has not yet been reached. Kauffman proposed this rapidly expanding adjacent possible as an explanation for the rapidly expanding diversity of a wide range of systems, from biology to economic markets.
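A toy model makes the expansion concrete (the primitives, the pairwise combination rule, and the 50% stability probability are all invented for illustration): each round, every pair of existing component types defines a possible new type, and the few that happen to be stable join the set, enlarging the next round's adjacent possible:

```python
import itertools
import random

rng = random.Random(1)
types = {("a",), ("b",), ("c",)}      # primitive component types
history = [len(types)]
for _ in range(5):
    # the adjacent possible: every one-step combination of existing types
    adjacent = {x + y for x, y in itertools.combinations(sorted(types), 2)}
    # only some of those combinations are stable enough to persist
    stable = {t for t in sorted(adjacent) if rng.random() < 0.5}
    types |= stable
    history.append(len(types))
print(history)   # the count of realized types grows round by round
```

Each realized type multiplies the number of candidate combinations in the following round, which is why the growth accelerates rather than merely continuing.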

Simon points out that a component which somehow increases the probability that another of its type is formed will greatly alter the equilibrium in favor of that component. These are Dawkins' replicators (Dawkins 1987).
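Extending the earlier toy kinetics, a replicator can be modelled as a species whose formation rate grows with its own concentration (the autocatalytic boost term and all constants are illustrative assumptions). Two dimer species competing for the same component pool end up at very different concentrations:

```python
def compete(a_boost, total=1.0, kf=1.0, kr=0.5, dt=0.01, steps=20000):
    """Two dimer species, D1 and D2, compete for one component pool A.
    D2's formation rate rises with its own concentration: a replicator."""
    A, D1, D2 = total, 0.0, 0.0
    for _ in range(steps):
        r1 = kf * A * A - kr * D1                       # plain dimer
        r2 = kf * A * A * (1 + a_boost * D2) - kr * D2  # self-promoting dimer
        D1 += r1 * dt
        D2 += r2 * dt
        A -= 2 * (r1 + r2) * dt
    return D1, D2

D1, D2 = compete(a_boost=5.0)
print(D1, D2)
```

Although both species start from zero and draw on identical raw materials, the self-promoting species captures a disproportionate share of the pool, shifting the equilibrium in its own favor exactly as the paragraph above describes.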

In sum, the capacity to form stable sub-assemblies is critical to enable large complex systems to form from smaller components in a randomized environment. Each new type of stable sub-assembly introduces the potential for many more new types.

Dawkins, R. (1987), The Selfish Gene. Oxford University Press.

Kauffman, S. (2002), Investigations. Oxford University Press.

Simon, H. A. (1962), The Architecture of Complexity. Proceedings of the American Philosophical Society, Vol. 106, No. 6, pp. 467-482.