Systems Science 7 — Cybernetics: The Science of Control
In the prior post I asserted that what exists are systems: unities, entities, networks composed of aggregates of components. Over time, and under the influence of energy flow, these components tend to self-organize into structures and processes in which energy flows in and dissipates out while doing work to maintain the internal organization. Living systems are the epitome of this system-as-process ontology.
One might well ask how a process, a dynamic structure, can maintain its integrity over time, especially when energy flows might vacillate (such as in the diurnal cycle) and material flows, for resources, are uneven. A system becomes long-term steady-state or stable when part of the internal organization acts to control the overall function of the process, to keep it doing what it should be doing in order to remain stable and adapted to the external conditions.
Cybernetics is the science of control of systems, processes, such that they continue to function in consonance with the environment in which they evolved. Once a network of functional relationships has been evolved, and under conditions of reasonably stable (if slowly varying) energy flow, a control subsystem can act to maintain functional stability over the long haul.
Within-process control is achieved by feeding messages back to a control actuator that has causal influence over some key elements of the internal system network, usually through amplification, such that the system as a whole changes its internal flow rates and/or configurations in order to maintain stability. Once evolved, a process's internal network represents the expected state of affairs under normal operating conditions. When, however, conditions change enough to make a difference in the overall performance, relative to that standard, the messages being fed back to the system are informational. That means the system has to change something in order to get back into stable operation.
Feedback and Closed-loop Control
Remember that one way to define information is "...news of difference that makes a difference." That is exactly the way it is used to manage processes. In a simple cybernetic system a transducer measures some quality parameter of the process output (see Fig. 1). The resulting message signal is compared with a reference standard or 'set point' value for the parameter. Whatever difference exists, positive or negative, between the set point and the actual real-time value constitutes a signal of error. In other words, the resulting message tells the system news of difference from the expected, zero error. To complete the loop and provide control, a controller must act, as noted above through amplification, to counter the error.
Figure 1. In this view an entity is comprised of a work process that converts inputs into outputs, one of which is considered 'valuable' (possibly by another entity) or represents the long-term health of the process. The stability of the process is accomplished by measuring the output value and comparing it to the value of the desired or target value (also called a set point value). Any error in the actual measured value is fed back through a controller, a subsystem that decides on a counter action that is initiated through some kind of amplifier.
The core adaptivity of a living system is actually its ability to control the internal structure of its basic work processes (e.g. physiology) through homeostasis. This is achieved by feedback control as shown above.
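The closed loop just described can be sketched in a few lines of code. This is a minimal illustration, not a model of any particular physiology; the set point, gain, and starting value are all made-up numbers chosen only to show the error-correction cycle converging.

```python
# Minimal closed-loop feedback sketch (all constants are illustrative).
# Each cycle: measure the output, compute the error against the set
# point, and feed back an amplified correction.

SET_POINT = 100.0   # the reference or 'target' value
GAIN = 0.5          # amplification applied to the error signal

def control_step(measured_value):
    """Return the corrective action for one feedback cycle."""
    error = SET_POINT - measured_value   # 'news of difference'
    return GAIN * error                  # amplified counter-action

# Simulate a process disturbed away from the set point.
value = 80.0
for _ in range(20):
    value += control_step(value)         # apply the correction
# value converges toward the set point; zero error means no action
```

Note that when the measured value equals the set point, the error is zero and the controller does nothing, which is exactly the "no news of difference" condition described above.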
Figure 2, below, shows a more complex situation where three entities are organized such that the product outputs from two of them are resource inputs to the third. The latter uses these inputs to produce a 'final' product output. All three entities need to cooperate in order that the final product is of optimal quality. Internally, cooperation is aided by a new layer of control using essentially the same principle of feedback control but with a twist. A new kind of controller, a coordinator, is introduced. This controller uses error information from all three entities along with a general model of how to coordinate the behaviors of the three to achieve the minimum error from the final product.
I am not introducing a mysterious object here. The coordinator controller actually developed from another entity that simply specialized in the task of processing messages — a kind of computer, in essence (see below for discussion of specialization). Messages, remember, are just very low power flows of energy or energy and small bits of matter. They are modulated into a communications channel between two entities and convey information when the receiver has a low expectation of the precise message form received. The signals (thin arrows) in Fig. 1 represent transduced messages. The error signal provides the controller with information when there IS an error.
Figure 2. Coordination between multiple entities by a higher level controller, a coordinator. This controller uses a model of the optimal performance of all three entities to generate control signals that (as shown) can modify the target values of each entity in such a way as to force the individual controllers to adjust their entity's process.
The trick, as shown in the figure, is to adjust the target values slightly in order to take advantage of the existing individual controllers working on their own process. This works by changing the error signals and prompting the controllers to adjust. Of course this scheme only works if the target value can fall within a range of acceptable values (see reference to homeostasis above). It also depends on a reasonable amount of stability (predictability) in the initial input resources. There must exist a 'feasible' model solution to variations in these inputs that will maintain an acceptable range of output values.
The model used by the coordinator fills a similar role as the targets did for the individual controllers. And, just as the targets were 'given', the model is pre-established and essentially fixed. As the complexity of the network (number of cooperating entities) grows, the given model must grow more complex as well (sometimes even more complex than imagined). Simple networks such as depicted in Fig. 2 are generally quite brittle. That is, there are few feasible solutions to optimize the output over wider ranges of distortions in the inputs. Such systems would be completely at the mercy of the environment with minimal capacity to adapt to these variations.
Note that the system is portrayed as inputs coming in from the left and products and wastes exiting on the right. While this might seem stylistic, in fact all systems can be organized as networks with this general arrangement. We can see, in this arrangement, that opportunities for increasing complexity, adding more entities, exist in terms of increasing breadth (appending new entities in parallel with existing ones) or increasing length (appending new entities at the input or output ends). Several advantages accrue with both of these possibilities. For one thing, it is possible to introduce redundancy. Suppose we simply replicate one of the input entities and add it below one of the existing input entities routing its product to the output entity as a backup input. Now if its companion input entity should have difficulties obtaining acceptable resources, perhaps the new one can make up the difference. See Fig. 3 below.
Figure 3. A system can become more complex by adding entities to the breadth. In this example an input entity has been simply 'copied' and used as a redundant or additional input processor. The coordinator must be extended to handle the new addition, which will change and complexify the model.
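The backup-input arrangement in Fig. 3 amounts to a simple shortfall-covering rule, which can be sketched as follows. The function name and all quantities here are hypothetical, used only to illustrate the redundancy logic.

```python
# Redundancy sketch (hypothetical quantities): a backup input entity
# makes up any shortfall left by the primary input entity.

def combined_supply(demand, primary_available, backup_capacity):
    """Primary supplies what it can; the backup covers the remainder,
    up to its own capacity."""
    from_primary = min(demand, primary_available)
    shortfall = demand - from_primary
    from_backup = min(shortfall, backup_capacity)
    return from_primary + from_backup

# Primary falls 4 units short of a 10-unit demand; backup covers it.
supplied = combined_supply(demand=10, primary_available=6, backup_capacity=5)
```

If the backup's capacity were smaller than the shortfall, the output entity would still see a deficit, which is why the coordinator's model must be extended to track both input entities.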
The system can also extend or lengthen, either by copying something (input or output) as in the broadening example, or by capturing a previously independent entity that produces one of the inputs, say, that the system needs (Figure 4). Of course, capturing a new entity on the output side can effectively change the nature of the whole system.
In both broadening and lengthening a system it is necessary to extend the coordinator and its model to cover the new functionality.
Figure 4. In this example the system is lengthened by 'capturing' a previously independent entity that produces one of the critical inputs to the system, thus helping to ensure stability of that input. As above, the coordinator has to be modified to incorporate this new configuration into the model and control processing.
The living systems version of lengthening is exemplified by the endosymbiotic theory, which, for example, provides an explanation of how eukaryotic cells emerged by symbiosis among bacteria-like simple cells.
Imagine that each entity here represents some relatively simple bit of metabolic activity, say, for example, early bits of what we now know as the Krebs cycle. Various bits might have evolved chemically semi-independently. They would have been loosely coupled in the primordial soup, perhaps each with its own unique primitive membrane acting as a structural substrate. At some point the now-ubiquitous phospholipid bilayer membrane may have allowed the uniting of these bits and pieces, producing a tighter coupling and allowing the cycle to form and evolve.
Coordination control involves two basic types of control processes. One is the coordination of the entity with its environment, that is, its sources of inputs and output sinks. In the example above of lengthening the input chain by 'capturing' a source, by establishing a tighter coupling between the source entity and the coordination control of the main entity, the entity is changing the ways in which it acquires resources. This is a tactical move to ensure resource availability. Tactical control involves maneuvering with respect to the environmental entities so as to ensure resource availability (in a timely manner) and to ensure products and wastes are appropriately transferred to the environment. This latter is every bit as important as ensuring resources since the buildup, even of products but especially wastes, can be as detrimental to the entity as loss of resources might be.
The other form of coordination control that works in tandem with tactical control is logistics. Logistic control involves internal routing of materials and energies to ensure that each operating unit is adequately supplied at the right time and wastes are eliminated. In a sense, logistical control takes over the management of what had been (in earlier versions of the entity) the effective tactical control employed by each operational unit when they were not as tightly coupled. The logistical controller is the mechanism that promotes the tighter coupling and relieves the operational units from needing to supply as much energy to independent tactical control.
Both tactical and logistical controls operate over somewhat longer time spans than operational or real-time control. That is they tend to aggregate error data over time and use time-averaged values for computation of their control outputs to the operational units. This is primarily because it takes time for trends to evolve that would mandate control decisions. Acting too quickly to short timescale errors could have a destabilizing effect on the operational units.
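The time-averaging behavior described here can be sketched directly: the controller accumulates error readings in a rolling window and acts only when the averaged trend, not a single spike, crosses a threshold. The class name, window size, and threshold are all assumptions for illustration.

```python
# Sketch of a logistical controller acting on time-averaged error
# rather than instantaneous readings (names and values illustrative).
from collections import deque

class AveragingController:
    def __init__(self, window=10, threshold=1.0):
        self.errors = deque(maxlen=window)   # rolling window of errors
        self.threshold = threshold
    def observe(self, error):
        self.errors.append(error)
    def should_act(self):
        # Act only when the trend (time-averaged error) is significant,
        # so short-lived spikes don't destabilize the operational units.
        avg = sum(self.errors) / len(self.errors)
        return abs(avg) > self.threshold

ctrl = AveragingController()
for e in [5.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]:
    ctrl.observe(e)
# a single spike of 5.0 averages to 0.5 over the window: no action
```

A sustained error, by contrast, would push the average over the threshold and trigger a control decision, which is the trend-following behavior the paragraph describes.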
In living systems, especially cells, very complex networks of sub-processes require intricate coordination controls for both getting food and other resources (tactics) and maintaining internal flows. Some operational-level units are tasked with repairing other units that tend to degrade due to thermal effects (entropy). They require energy and raw materials to maintain the integrity of the entire entity. There are also operational-level processes that construct other processes (another example of autocatalysis when one process constructs another process that, in turn, constructs new copies of the first!). The logistical control of such complexity is substantial, involving distributed, numerous feedback loops. The entire workings have been captured in the concept of autopoiesis conceived by biologists Humberto Maturana and Francisco Varela.
The highest level of control in the hierarchical scheme is planning. This kind of control requires that entities have the capacity to construct internal models that are changeable to reflect changing conditions. Derived evolutionarily from the model structures used in coordination control, planning models involve not only a model of internal process, but also a model (or models) of the external world. That is, the entity must be able to represent the relevant parts (other entities and physical conditions) of the environment and have a way to represent changes in those as a function of time. Indeed, in order to make plans for the future, an entity must be able to anticipate how the environment might change. The planning then involves taking anticipatory actions that will increase the likelihood of the entity thriving or avoiding damage.
Of course we are now fully in the realm of living beings. And I will restrict the discussion to animals since whatever 'planning' might be done by plants is rudimentary at best. Animals have achieved a high form of agency. That is, they are able to produce effects on the environment as much as be affected by it. They have varying degrees of autonomy in making decisions to act. Some decisions are purely reactive, based on evolutionarily hardwired responses. Others are more flexible in terms of taking more situational conditions into account before taking an action decision. Still others, in animals with more evolved nervous systems, take far more conditions into account, including historically learned propensities/exigencies, and internally generated motivations to deliberate prior to taking a decision. Humans, of course, seem to have reached the epitome of such decision processing.
When there is a dedicated, distinct third level of control in the hierarchy specifically for planning the long-term behavior of the entity, we call it strategic. Strategic means that the plans take into account likely scenarios of what will happen in the environment (how other entities will behave and what conditions will prevail at some temporal horizon), scenarios of what will happen internally (based on adaptive competencies, resources stored, possible weaknesses, etc.), and the panoply of motivations, desires, requirements, and so on. The strategic planning process takes all of this into account and develops a program of goals to be used by all of the various tactical and logistical controllers.
Complex organization (like a living entity) involves elaborate, intricate networks of many kinds of components interacting in multiple ways to produce a whole, temporally stable, system. And as I have pointed out in previous installments, systems are really systems of subsystems or networks of networks (of networks ...). The three-level hierarchical control model described above can be applied to many subsystem levels within a very complex entity. That is, for example, all cells in the body have fairly elaborate coordination controls, as discussed above, but also some very primitive planning control! Every cell devotes some of its internal processing to anticipating changes in its environment in the form of adaptive responses with memory. Consider muscle cells, for example. They will adapt to frequent and persistent demand for twitching by 'bulking up'. When a person works out routinely, their muscle strength and endurance increase over what they would be for a couch potato. Moreover, the muscles retain a readiness to react to demands based on this training for some time into the future. In fact, even if one stops working out, the muscles will remain at the ready for quite a while. Only after several weeks or months of failing to work out will they start to lose their readiness.
Living tissues have a built-in ability to retain a 'trace' of capacity to respond to environmental conditions. When put under demand, the tissues (cells) build up the capacity under a schedule (essentially a logistic curve). This is a primitive form of 'learning' and acts to anticipate future demand. So, in that sense, it is a primitive version of planning for the future.
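The logistic buildup schedule mentioned here has a standard closed form, sketched below. The parameter names and values (maximum capacity, baseline, growth rate) are hypothetical stand-ins, not measured tissue properties.

```python
# Sketch of logistic capacity buildup under sustained demand
# (all parameters are illustrative, not physiological data).
import math

def capacity(t, cap_max=2.0, cap0=1.0, rate=0.5):
    """Logistic curve: capacity starts at cap0 and grows toward
    cap_max as demand persists over time t."""
    return cap_max / (1.0 + ((cap_max - cap0) / cap0) * math.exp(-rate * t))
```

Early on, growth accelerates; as capacity nears its ceiling it levels off, which matches the "build up under a schedule" behavior described above.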
Thus it can be seen that every complex subsystem of a larger, more complex system can have its own hierarchical control structure. Perhaps a much easier example to visualize is a typical large business organization. Such organizations have different departments, like production, marketing, accounting, etc. Take accounting as an example subsystem. Within the accounting department, which functions as one of the main tactical (financial accounting) and logistical (managerial accounting) control systems for the entire organization, there will be operational sub-units (e.g. bookkeepers), logistical sub-units (middle managers over bookkeepers), tactical sub-units (supplies requisitioners), and strategic planners (controllers, CFOs). And every department in the organization has a similar (self-similar) internal structure. Even the production department has its chief operating officer who does strategic planning for the production facilities.
With this kind of devilish complexity, is it any wonder that individuals, who frequently are called upon to make strategic, tactical, logistical, and operational decisions not only for their organizations but also for themselves, are too often confused about what kind of decision they should be making at what time? You see this confusion (and its aftermath) all the time in typical organizations. In general, most people don't even explicitly understand what differentiates the various kinds of decisions, the timeframe over which they should be made, or what information goes into formulating a decision. This is one of those areas that I strongly feel systems science could positively affect. If people understood the differences between the types of control decisions, they would be better prepared to make appropriate ones. I have personally been involved in strategic planning committees where the participants too often strayed away from thinking about strategic issues and got into logistics or tactics without realizing that they were doing so. It is hard to make distinctions, especially when all subsystems involve some amount of all three kinds of control.
And, of course, there is the problem with governance in general. Governments are hierarchical control structures (see my blogs on Sapience, esp. page 2, where I ran a series on Sapient Governance discussing this). Most of the world's democracies have achieved some amount of differentiation into the various controls. But too often confusion reigns. I would argue that even in the US, for example, the notion of truly strategic planning (foreign policy, environment, energy) is sorely lacking or very weak at best. In part this is because the political system itself gets in the way of doing meaningful strategic planning (politicians need to be more mindful of the short-term issues that will get them re-elected). But it is also due to both politicos and citizens failing to adequately distinguish the kinds of decisions and time scales involved.
Perhaps a better educated citizenry, with respect to at least this part of systems science, would help move us toward better articulated decisions that address all of the relevant control aspects of modern civilization. It certainly couldn't hurt.