Light at the End of the Tunnel!
The systems science book, “Introduction to the Principles of Systems Science,” is approaching the final stages of the submission draft. My co-author, Michael Kalton, and I are wrapping up the final two chapters that are concerned with the theories of systems science. The first of these two is titled “Auto-organization and Emergence”[1], which deals with the underlying mechanisms of building complexity and levels of organization. The second, “Evolution”, picks up the thread of how the Universe becomes more complex over time. Our approach is to cast evolution as a universal process rather than just a biological process. Auto-organization and emergence are framed as sub-processes that provide the opportunities for environmental selection to sort the structures and functions that emerge on the basis of their stability and potential for contributing to even higher organizations.
After that I will turn to the final three chapters dedicated to various tools and methodologies used in systems science and engineering, namely systems modeling, systems analysis & design, and systems engineering. Since these three topics are deeply embedded in my day job I think they will come together rather quickly. At least that is the plan!
In the course of bringing the prior chapters' themes into the perspective of a universal theory of evolution, several insights emerged in my thinking. I thought I might share them with you, since many readers here at QE are intensely interested in evolution and might offer additional insights or questions that will help test whether these ideas of mine are "fit."
Evolution
Theodosius Dobzhansky (1900 - 1975), one of the fathers of evolutionary biology, said, "Nothing in biology makes sense except in the light of evolution." Indeed, the fields of biological sciences, psychology, and medicine are increasingly seen in the light of evolution. Many of the puzzling aspects of these disciplines are beginning to make sense once viewed through the lens of evolutionary thinking.
I would actually extend this notion not just to biological phenomena but to all phenomena. Put somewhat differently, "No system makes sense except in the light of evolution." Simple systems, since they are always open to interactions with other simple systems, are subject to participation in auto-organization and the subsequent emergence of new organizations and functions. More complex systems, the ones that emerge from the lower levels of organization, demonstrate attributes that lead to evolvability.
At a sufficient level of complexity, as we certainly see in living systems but may see in non-living systems as well, a new capacity enters into the nature of systems — they can adapt to changes in the environment. A whole class of systems, called "complex adaptive systems" (CAS), has this capacity. Such systems are widely studied by both complexity theorists and dynamics theorists.
But even higher order systems, themselves composed of many CASs, become evolvable. That is, these systems can evolve over very long time scales. Here I will provide a basic explanation of what this means.
In the book we define a hierarchy of systems and the attributes that place them where they are in that hierarchy. We first define a basic simple system as one that has a minimum set of attributes (Figure 1); all other systems derive from this model by the addition of properties not found in systems lower in the hierarchy. All systems share the attributes of the simple system shown in Figure 1. Additionally, a system has to have a longevity, or normal lifetime, in its form. Entropy is always working on systems to degrade components and/or links, so a system has to be stable within its environment to last long enough to be observed as a system.
Figure 1. A basic simple system is any organized structure that has the following properties: a boundary (even if fuzzy), a set of component types with distinct interaction personalities, multiples of each component type, a set of interaction links between the components, and sets of inputs from the environment and outputs to it. Inputs can be of three types: energy (including forces), material, and messages (which may convey information). Outputs are of the same types but are usually transformed versions of the inputs.
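To make this minimal attribute set concrete, here is a rough sketch in Python of how such a basic simple system might be represented as a data structure. The class and field names are my own illustrative choices, not notation from the book.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    """A component instance of some type; its 'personality' is the set of
    interactions it can participate in."""
    kind: str                # the component type
    personality: List[str]   # kinds of interactions this type supports

@dataclass
class Link:
    """An interaction link between two components within the boundary."""
    source: int              # index of the source component
    target: int              # index of the target component
    interaction: str         # which interaction the link realizes

@dataclass
class SimpleSystem:
    """Minimal attribute set for a basic simple system (cf. Figure 1)."""
    boundary: str                                      # possibly fuzzy
    components: List[Component] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)
    inputs: List[str] = field(default_factory=list)    # energy, material, messages
    outputs: List[str] = field(default_factory=list)   # transformed versions of inputs

# A toy instance: two component types, multiple instances of each, a few links.
sys1 = SimpleSystem(
    boundary="semi-permeable membrane (fuzzy)",
    components=[Component("A", ["binds-B"]), Component("A", ["binds-B"]),
                Component("B", ["binds-A"]), Component("B", ["binds-A"])],
    links=[Link(0, 2, "binds-B"), Link(1, 3, "binds-B")],
    inputs=["energy", "material", "messages"],
    outputs=["waste heat", "transformed material", "messages"],
)
```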
It is beyond the scope of this piece to explain this completely, but in the book we show how even so-called "conceptual" systems meet these criteria. We claim there that ALL systems are physically manifested in some medium and that all of the laws of nature pertain equally to all. The proof of this is left as an exercise for the reader (pending publication of the book!).
Even simple systems are dynamic; that is, they have behaviors or responses to changes in their inputs. Since there are no solipsistic systems, all systems of every kind respond in some way to changes in the environment. We categorize environments by the kinds of changes we find in them. In one sense, however, there is really only one kind of environment in terms of the changes that take place in it: all environments are effectively systems within much larger systems, i.e. environments, and thus all environments are subject to unpredictable forces in the very long term. We use three types of environments to explain how systems are coupled with them so as to be considered "fit." Those environments are: 1) stable, 2) stationary stochastic, and 3) non-stationary stochastic.
Again, the subject is much more than I can do justice to in this piece. A stable environment should be intuitively obvious: the forces on the system of interest can be dynamic, but they vary within bounded intervals. For example, the diurnal cycle of light and dark is a semi-complex oscillation of light intensity (taking latitude and seasons into account). Up until recently, Homo sapiens experienced the daily fluctuations in weather as essentially stationary stochastic. That is, while weather-related variables such as temperature and rainfall have been difficult to predict precisely, the range of possible variations has been somewhat restricted. The weather/climate environment is stochastic but essentially stationary over the several thousands of years that we have been paying attention. Chaotic stationary stochastic processes are marked by variables that are episodic, intermittent, and variable in amplitude and duration, but that always remain within tolerable ranges so far as existing systems are concerned.
Non-stationary stochastic processes are those in which the statistical properties of the variables (e.g. mean temperature and its variance) are themselves undergoing changes over very long periods. The most immediate example of such an environment is the shifting in climate (and especially storm) statistics due to heat forcings from the human use of fossil fuels for industrial energy. In truth, however, the Earth as a whole has always been a non-stationary stochastic process for many reasons. Most of the forces that have affected the environment have acted over very long time scales compared with a human lifetime. Notable exceptions, such as the meteorite that crashed into the Yucatan peninsula 65 mya, have brought on rapid stressing of existing systems. It is this stressing that affects selection.
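A quick way to see the distinction between stationary and non-stationary stochastic environments is to simulate them. The sketch below is a minimal illustration rather than any kind of climate model: it generates a variable whose mean is fixed in the stationary case and slowly drifts in the non-stationary case. All the numbers are made up.

```python
import random

random.seed(42)

def stationary(n, mean=15.0, spread=5.0):
    """Stationary stochastic: fluctuations, but a fixed mean and variance."""
    return [random.gauss(mean, spread) for _ in range(n)]

def non_stationary(n, mean=15.0, spread=5.0, drift=0.01):
    """Non-stationary stochastic: the mean itself changes over time
    (a crude stand-in for, e.g., a slow climate forcing)."""
    return [random.gauss(mean + drift * t, spread) for t in range(n)]

a = stationary(1000)
b = non_stationary(1000)
# The first series wanders within a fixed band; the second's 'normal range'
# slowly moves, which is what stresses systems tuned to the old range.
print(sum(a[:100]) / 100, sum(a[-100:]) / 100)   # early and late means roughly equal
print(sum(b[:100]) / 100, sum(b[-100:]) / 100)   # early and late means differ
```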
Simple systems require relatively stable environments with respect to the kinds of external forces that operate on them. A simple example is a molecule that remains stable as long as temperature, pH, etc. are maintained.
Even in stable environments simple systems will still interact with other simple systems, and the opportunity for further auto-organization exists; so long as there is a source of free energy available, additional interesting things can happen. The interactions among multiple simple systems lead to complexification.
Complexity and Complex Systems
We use Herbert Simon's definition of complexity as the depth of a hierarchy of partially decomposable subsystems[2]. Figure 2 attempts to show this schematically. In the figure the term “atom” is used generically to designate the level where no further decomposition is needed in order to understand the dynamics of the system.
Figure 2. Complexity is measured in the kinds and numbers of subsystems that can auto-organize starting from basic atomic types. Subsystems in level 1 (L1) develop new “personalities” and can have more complex interactions as a result, leading to level 2 (L2) organization. The hierarchy can continue to build upward toward entities of even greater complexity in structures and functions. There is no necessary upper limit on this combinatory process so long as free energy is available in the environment. It also requires that the environment does not introduce destabilizing forces, e.g. a higher temperature would tend to degrade more complex structures.
So long as free energy flows through the environment (energy not yet participating in the work of new construction), the hierarchy will deepen, as at each level new properties and behaviors (personalities) emerge and enable a new round of auto-organization. This process is hypothesized as what happened on the ancient Earth, leading from pre-biotic chemistry to proto-life, where biological evolution (in the form we know from neo-Darwinism) could begin to operate[3].
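Simon's depth-of-hierarchy notion can be stated very simply: represent a system as nested subsystems and count how many levels of decomposition you need before reaching the "atoms." The following is my own illustrative sketch of that counting, not the book's formalism.

```python
def depth(system):
    """Depth of a partially decomposable hierarchy.
    A system is either an 'atom' (here, a string) or a list of subsystems."""
    if isinstance(system, str):          # atomic: no further decomposition needed
        return 0
    return 1 + max(depth(sub) for sub in system)

# L0 atoms combine into L1 subsystems, which combine into an L2 system, and so on.
l1_a = ["atom1", "atom2"]
l1_b = ["atom3", "atom4", "atom5"]
l2 = [l1_a, l1_b]
print(depth(l2))   # -> 2: two levels of organization above the atoms
```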
As system complexity develops, the behaviors of the emerging systems become increasingly complex as well. Also, more complex systems tend to have fewer strongly coupled connections between subsystems. As a result their behaviors become increasingly malleable or pliable. Even so, the basic structures that make them systems must be maintained. There is a hypothetical level of complexity at which systems have to gain a specific capacity to undergo controlled internal changes in couplings in order to exist in more stochastic environments. That is, at some point in the complexity hierarchy, systems must be capable of adaptivity.
Adaptivity
We define adaptivity as a system's ability to shift internal resources such that response mechanisms are strengthened when needed or reduced when not needed. The system cannot change the fundamental arrangement of its subsystems and linkages (e.g. flows of resources like energy), but it has a capacity to control the routing of flows, or the relative coupling strengths of linkages, based on shifting demands from the environment. Living systems are the quintessence of adaptive systems. They are, by derivation, complex systems, with complex behaviors, but possessing wider ranges of responses to environmental fluctuations than a merely complex system.
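One way to picture this definition is a fixed budget of internal resources that gets re-routed toward whichever response mechanisms the environment is currently stressing. The sketch below is my own illustration under that assumption; the mechanism names and the proportional rule are invented for the example, not a claim about how any particular organism does it.

```python
def reallocate(budget, demands):
    """Shift a fixed resource budget among response mechanisms in proportion
    to current environmental demand. The structure (which mechanisms exist)
    stays fixed; only the coupling strengths / flow routing change."""
    total = sum(demands.values())
    if total == 0:
        # no stress: spread resources evenly (an arbitrary illustrative choice)
        return {m: budget / len(demands) for m in demands}
    return {m: budget * d / total for m, d in demands.items()}

# Cold snap: thermoregulation demand spikes, so resources shift toward it.
print(reallocate(100.0, {"thermoregulation": 8.0, "foraging": 2.0, "repair": 2.0}))
```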
An example we use for adaptivity in the book is homeostasis, something I have written about before in these blogs. The word means "same staying": the system is capable of adjusting its response mechanisms as needed to adapt to an environmental change that puts increased stress on the system. For example, when a warm-blooded animal finds itself in a colder than normal environment, it begins to shiver in order to generate more heat to protect its core temperature. Homeostasis is at the core of regulation in biological systems. It is depicted in Figure 3.
Figure 3. At a sufficient level of complexity, cybernetic (closed loop control) subsystems emerge and achieve a higher degree of stability/resilience in stationary stochastic environments. Homeostasis provides a good example of such a system. Pre-life versions of autocatalytic cycles have been proposed as pathways leading to life.
The part of the system in Figure 3 that is within the black dashed ellipse is a message processing subsystem that monitors the state of the critical factor[4]. It computes an error signal when the state of the critical factor differs from the ideal, or falls outside an acceptable range around the ideal. The error signal is a message that conveys information (news of difference) to the response mechanism. The latter then takes the action necessary to compensate for the environmental stress. As long as the stressor does not exceed a normal range value (established by the fact that the environment had been stationary stochastic over the period in which the mechanism emerged and established itself), the system will respond and keep the critical factor within acceptable boundaries.
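The logic in Figure 3 is essentially a negative feedback loop. Below is a minimal sketch of that loop with the shivering example in mind; the set point, tolerance, gain, and stress values are made-up numbers for illustration only.

```python
def homeostat(ideal, tolerance, gain, stress_per_step, steps=20):
    """Closed-loop control of a single critical factor (e.g. core temperature).
    A sensor reads the factor, an error signal is computed against the ideal,
    and the response mechanism (e.g. shivering) compensates proportionally."""
    factor = ideal
    for t in range(steps):
        factor -= stress_per_step       # environmental stress pushes the factor down
        error = ideal - factor          # 'news of difference' from the comparator
        if abs(error) > tolerance:      # outside the acceptable range
            factor += gain * error      # response mechanism counteracts the stress
        print(f"step {t:2d}: critical factor = {factor:.2f}")

# Core temperature held near 37 despite a steady cold stress, as long as the
# stress stays within the range the mechanism can handle.
homeostat(ideal=37.0, tolerance=0.5, gain=0.8, stress_per_step=0.4)
```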
Homeostasis, as effective as it is for operational control of critical factors, is not sufficient to maintain life. Humberto Maturana and Francisco Varela developed a concept, autopoiesis (self-constructing), for the purpose of "defining" life. As it happens, the concept actually describes higher-order cybernetic systems that are highly resilient (adaptive). Figure 4 depicts a very general kind of system that includes subsystems for recycling materials from components that have been degraded (second law effects) and for building subsystems to replace them. An autopoietic system is more effective at adaptation because it can repair and rebuild itself, unless it suffers too much damage.
Figure 4. Living systems have developed a hierarchy of control mechanisms that go beyond simple homeostasis. An autopoietic system includes subsystems that coordinate the work processes of a much more complex system, as well as subsystems that repair and construct new components within the system itself. Such a system is not only adaptive; it is capable of maintaining its function over very long time scales. See text for explanations.
All individual organisms are autopoietic adaptive systems. The evolution of proto-biological systems probably went through phases in which individual proto-cells resembled essentially naked homeostatic systems. One theory of how the boundary from proto-life to life was crossed involves cooperation between merely homeostatic systems in which products of one were needed by others and vice versa.
Self-maintenance and construction depend very much on the system including some kind of blueprint for the structures of its subsystems. Genes embodied in chromosomes fill this role in cells. We show, in the book, how such a set of blueprints can come into existence in the emergence of the hierarchical cybernetic subsystem when systems evolve models of themselves. This too is, unfortunately, beyond the scope of this piece. Suffice it to say that one of the stocks (not shown in the figure) associated with the controller is that of "data." The data needs to be encoded in a suitable medium that is compact, easily accessed, and, most important, long-lived (or easily repairable). DNA appears to be a universal molecule that fulfills these requirements.
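As a rough sketch of what distinguishes an autopoietic system from a merely homeostatic one, the toy loop below lets components degrade and then rebuilds them by consulting a stored blueprint (standing in for the "data" stock just mentioned). The component names, decay rate, and repair budget are invented purely for illustration.

```python
import random

random.seed(0)

BLUEPRINT = {"pump": 4, "sensor": 2, "membrane_patch": 8}   # illustrative 'data' stock

def maintain(inventory, decay_rate=0.1, repair_budget=2):
    """One maintenance cycle of a toy autopoietic-style system:
    components degrade (second-law effects), degraded ones are lost,
    and replacements are rebuilt by consulting the stored blueprint."""
    # degradation: each component may fail this cycle
    for kind in inventory:
        failures = sum(1 for _ in range(inventory[kind]) if random.random() < decay_rate)
        inventory[kind] -= failures
    # repair/construction: rebuild toward the blueprint, limited by the budget
    for kind, target in BLUEPRINT.items():
        while repair_budget > 0 and inventory[kind] < target:
            inventory[kind] += 1
            repair_budget -= 1
    return inventory

inventory = dict(BLUEPRINT)
for cycle in range(20):
    inventory = maintain(inventory)
print(inventory)   # stays near the blueprint counts despite continual degradation
```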
Evolvable Systems
Individual organisms are not, strictly speaking, evolvable. They can only adapt within the range of their response mechanisms' abilities to change their capacities. If the environment is non-stationary on the time scale of the individual and the stressors go beyond that range, then the organism can only succumb. But just a bit higher up in the complexity hierarchy a new phenomenon emerges that permits higher-level systems to become evolvable. That is, these systems can not only adapt, but can change their own structures and functions, given enough time.
Evolvability involves the following requirements (a toy simulation sketch follows the list):
- Many functionally redundant components that are being continually generated from a base pattern. The components age due to second-law effects, so there is continuous replacement. Reproduction in biological systems is the paradigm example, but so is employee replacement in organizations.
- The generation process should be of fairly high fidelity, but occasional heritable errors should occur to generate variation in functionality. Genetic mutations fulfill this requirement in biological reproduction.
- The environment of the components (in essence the rest of the embedding system) should be effectively stationary over the time scale of component life cycles, preferably over several generations. But it must be non-stationary on longer time scales. In other words, it needs to change. The geophysical/climatological environments on Earth fulfill this requirement nicely (usually).
- When changes do occur they push the critical factors toward one or the other extreme, or shift the range of stressors acting on those factors. Thus, over the life cycle of an individual component, it lives under stressful conditions relative to what the previously normal environment had been. As an example, when a company adopts some new technology to process its work, some employees may find the change uncomfortable, possibly to the point of quitting their jobs.
- Among the variations in the response capabilities of individual components there will be some that are pre-adapted to the new normal stressor, i.e., the environmental factor, while changed, is less stressful for some individuals. Those individuals are then able to function better in the new environment relative to the components that are being stressed.
- There needs to be a mechanism that ensures that the prototypes (blueprints) of the more successful components are used differentially in the generation process. In Darwinian evolution this is embedded in the notion of survival of the fittest: those members of a species better able to compete for limited resources will generate more offspring over time, resulting in the domination of the favorable variation (mutated allele) in the system. In a company, management may note that as the nature of the work changed, a certain set of skills possessed by some employees led to better productivity. Management will then make sure human resources screens new replacement employees for those skills.
- Over a longer time scale the new capacity becomes the norm.
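To see how these requirements fit together, here is a very small toy simulation, my own illustration rather than a model from the book: a population of redundant components is copied with occasional heritable errors, the environment's "normal" shifts partway through, and components whose trait happens to suit the new normal replicate preferentially. The trait values, mutation rate, and weighting rule are all invented for the example.

```python
import random

random.seed(1)

def generation(pop, optimum, mutation_rate=0.05):
    """High-fidelity replication with occasional heritable errors; replication
    probability falls off with distance from the environment's current optimum
    (differential use of the better blueprints)."""
    weights = [1.0 / (1.0 + abs(trait - optimum)) for trait in pop]
    parents = random.choices(pop, weights=weights, k=len(pop))
    return [p + random.gauss(0, 1) if random.random() < mutation_rate else p
            for p in parents]

pop = [10.0] * 200          # many functionally redundant components
optimum = 10.0              # environment stationary on short time scales
for gen in range(400):
    if gen == 200:
        optimum = 14.0      # non-stationary on longer time scales: the normal shifts
    pop = generation(pop, optimum)

print(sum(pop) / len(pop))  # the population mean has shifted toward the new optimum
```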
As hinted in the requirements list, examples of evolvable systems are biological species/populations, societies of learning agents, organizations (also composed of learning agents), and ecosystems. The Earth as a whole system is clearly evolving.
The term evolvability refers to the ability of a system to evolve according to the above list of requirements. It turns out that there are different levels of evolvability. For many species, the ability to promote mutations in specific genes (or control DNA segments) in response to an environmental stress can accelerate evolution. This only works where there are substantial numbers of redundant components, such as in bacterial colonies, because a very large number is needed to increase the odds that some of the mutations will actually turn out to be beneficial. Most mutations tend to be either neutral or detrimental.
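The reason large numbers of redundant components matter can be put in back-of-the-envelope form: if each replication event has only a tiny probability p of producing a beneficial variant, the chance that at least one appears among N events is 1 - (1 - p)^N, which only becomes appreciable when N is very large. The numbers below are purely illustrative.

```python
def p_at_least_one_beneficial(p, n):
    """Probability that at least one of n replication events yields a
    beneficial mutation, given per-event probability p."""
    return 1.0 - (1.0 - p) ** n

# Illustrative numbers only: a rare beneficial mutation (1 in 10 million events)
# is a long shot in a small population but near-certain at bacterial-colony scale.
for n in (1_000, 1_000_000, 1_000_000_000):
    print(n, round(p_at_least_one_beneficial(1e-7, n), 4))
```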
But when we open up the concept of evolution (and adaptability, etc.) we see that the notion of evolution can be applied more universally. The Universe is constantly changing. According to ergodic theory, if the Universe ages long enough we will see that it is truly stationary. There are, however, people like Stuart Kauffman[5] who have asserted that the Universe is more likely non-ergodic (i.e. ultimately non-stationary). The recent discovery of dark energy (if it proves out) may lend some weight to this notion. However, whether ultimately ergodic or non-ergodic, in the meantime, on the time scale that is of interest to the evolution of life and culture on this planet, it certainly seems to be non-stationary! As far as we are concerned, another asteroid could plow into us almost any time.
Learning Agents
It turns out that the brains of birds and mammals have cortical structures that allow for on-the-fly encoding of percepts and concepts. The human neocortex seems to have a vast capacity to rewire itself as concepts change with experience. This is a system that is intermediary between merely adaptive and evolvable. Clearly neurons are able to adapt to changes in synaptic inputs, the basis for encoding memory traces or engrams. Neural circuits in many areas of the brain are able to rewire to accommodate external stresses. For example, the loss of a digit on the hand can result in the cortical areas surrounding that digit's map in the sensorimotor areas taking over the neural nets therein.
I think the brain is an example of a limited evolvable system. Most of the requirements for evolvability given above are met in the neocortex to some extent. But it is also not an open-ended system. There are constraints on just how much variation is possible in the population of synapses that form the redundant components. This will be an interesting area for further investigation one day. I hope I can get back to it in light of this hierarchy of complex, adaptive, evolvable systems.
Footnotes
[1] Note that our term "auto-organization" is what most people call "self-organization." We prefer the former in that it implies an automatic process of forming structures. The latter term, while popular, is also a source of mystery for much of the public: the word "self" implies that there is, somehow, some kind of intentionality involved in the process. We are concerned that this implication carries baggage that can get in the way of a proper cognitive understanding of what the process really is.
[2] Simon, Herbert A. (1996). The Sciences of the Artificial, Third Edition, The MIT Press, Cambridge, MA. Especially chapter 8, "The Architecture of Complexity: Hierarchic Systems," p. 183.
[3] Morowitz, Harold J. (2004). Beginnings of Cellular Life: Metabolism Recapitulates Biogenesis, Yale University Press, New Haven.
[4] For example the nervous system of a warm blooded animal has the ability to monitor skin temperature and when it falls below a threshold level the brainstem initiates the shiver response in the muscles.
[5] Kauffman, Stuart (1995). At Home in the Universe, Oxford University Press, New York.