Systems Science 10 — Mech-Systems Examples
The Boundary Problem
Before we get started on some real-world examples, there is a small matter to warn you about. In the third installment, 3. Organization and Its Conceptualization, I noted that systems have boundaries, even if they are hard to distinguish. For the purposes of that subject I left it at that. But as we start to study real-world systems we will find very quickly that there is a problem associated with identifying a boundary, one that arises not from the notion of "systemness" but from the very act of studying or observing a system from the perspective of a conscious, thinking being. The problem arises because systems are always subsystems of some higher-order system, hence embedded within an environment containing other systems. Indeed, some of these other systems might be so closely associated with the system under study that they cannot be easily distinguished from it. Hence, we might have some difficulty isolating 'our' system for study. A lot of work is involved in resolving this issue.
It gets even more complicated when a system is highly interactive with many of its environmental components. If the system's boundary is indistinct it may not be possible to say definitively what should be included in the system and what should not. We run the risk of choosing a boundary that includes components that appear to be members of our system of interest but are not actually essential functional members. They could just as easily have been left out, with the boundary drawn correctly between the components of our system and those of the other one. But how do we know? The boundaries we "choose" consciously are just that: choices, not necessarily physical dividers. Or at least it seems that way.
The real problem is that the whole idea of systemness starts to look a bit arbitrary when we run into these kinds of situations and, in the end, choose a boundary based on our preferences or biases. If systemhood is an arbitrary category, decided one way by one investigator and another way by a different investigator, our whole basis of systems science could start to look useless! Indeed, the main criticism of systems science is that boundaries are too often chosen arbitrarily, thus negating the whole validity of the systems concept.
Our intuitions, not always a great guide in science unfortunately, tell us that there is a reality to the notion of a system, and fortunately, in this case, they serve us well enough. Aside from the many clear examples of systems whose boundaries are easily discernible and give a unique individuality to a system under study, there are methods of analysis that allow us to resolve the boundary problem in those cases where we are having difficulty. As we explore various kinds of systems in these next few installments, we will come across examples of ill-defined boundaries that will give us pause. But we will also see that some of these methods can be used to help out. In one of the last installments I will focus on some formal methods used in systems analysis and provide a treatment of how this boundary problem can be solved there. For now, when we come up against the problem in these examples, we'll use a less formal approach that anticipates the formal methods later on.
It's best to start out with some simple examples that capture the main principles of systems science and then work our way toward greater complexity. We'll start with Mech-Systems, both human-designed and naturally occurring, since these are among the simplest of all 'interesting' systems. It is fitting to start with human-built systems, since some of our earliest recognitions of systems principles came from studying our own machines, usually with the intent of improving designs.
Below I will start with a very simple object that has 'behavior' and delineate the basic principles that I've written about previously. Additionally, we will see how this machine has been 'modularized' and incorporated into larger, more complicated machines, demonstrating the principle of emergence even in human-built systems.
Deterministic Black Boxes
Suppose we have a box sitting on a table with three wires sticking out of it [in truth there will be two more wires sticking out to supply operating current to the device; these are the energy flow input and output, while the three wires are signals: more on this later]. Two wires are inputs, in that they receive a voltage from some other source (two sources, generally), and the third wire is an output. Figure 1 shows such a box in four different states of behavior given the indicated inputs. A zero indicates no voltage input while a one indicates some voltage input. Similarly a zero output means the box emits no voltage while a one means it is emitting a voltage.
Figure 1. A black box with two inputs and one output. This device is performing a logical AND operation. a) two zero inputs produce zero output; b, c) a single input of one on either input line still produces a zero output; d) two inputs of one produce a one output.
In a real sense it doesn't matter what is inside this box, or even that the inputs and outputs are necessarily voltages. This could just as easily be done with water pipes or mechanical levers, as long as the inner workings of the box are compatible with the nature of the inputs. This black box performs a simple Boolean logical operation called AND, and the device is called an AND gate. The AND behavior rule is this: the only time the output will be something rather than nothing (e.g. 1 instead of 0, or, couched in terms of 'true' rather than 'false', simply a convention of semantics) is when both inputs are something (not nothing) at the same time; at all other times the output is nothing. Or in terms of voltages on wires: if both inputs carry positive voltages then the output will be a positive voltage; otherwise the output will be no voltage.
The four situations cover the entire range of input conditions and output results. If we were to observe such a box for a very long time, testing all of the four input choices many times, we would come to the conclusion that this device, however it operated internally, was an AND gate.
We might have a similar looking box and run the same set of input experiments but get the following results: 0,0 -> 0; 0,1 -> 1; 1,0 -> 1; 1,1 -> 1. Note that the first and last output behaviors are the same as for the AND gate, but the two middle behaviors are different. What is this box doing? It is a logical OR gate, producing a 1 output if either one or both of the inputs is set to 1. There is also a gate that performs an EXCLUSIVE OR, which simply means that one or the other of the inputs is a 1, but if both are set to 1 the output is 0.
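The behaviors of these three gates are easy to sketch in code. Here is a minimal simulation in Python (my choice of notation here, not anything built into the boxes themselves), with the same exhaustive testing of input conditions an observer of the black box would perform:

```python
# Each gate maps a pair of input bits (0 or 1) to a single output bit.
def and_gate(a, b):
    return 1 if (a == 1 and b == 1) else 0

def or_gate(a, b):
    return 1 if (a == 1 or b == 1) else 0

def xor_gate(a, b):
    # EXCLUSIVE OR: output 1 if one input or the other is 1, but not both
    return 1 if a != b else 0

# Run all four input conditions, as the long-term observer would.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b), or_gate(a, b), xor_gate(a, b))
```

Running this reproduces exactly the truth tables described above, e.g. inputs 0,1 give 0 from the AND gate and 1 from both the OR and EXCLUSIVE OR gates.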
Now let's say we have yet another black box with only one input wire and one output wire. This device has an odd behavior. When the input is a 1 the output goes to 0, and vice versa. This device changes the sense of the input and is called a NOT gate.
In all three of these boxes we've assumed more or less continuous operation, with inputs switching from 1 to 0 or 0 to 1. The boxes themselves retain no memory of their prior states; it isn't in their job description. But another kind of box has a peculiar behavior: it is capable of retaining, on its output, a memory of the last state it was put into. This box is actually a combination of two of the above box types wired together to produce a nifty effect. If we take an OR gate and wire its output to a NOT gate we get what is called a NOR gate (see the Wikipedia link for the symbol for this device)! Now we take two of these NOR gates and wire them in a most peculiar way. Please take a look at the diagram in the Wikipedia article on the SR latch. The wiring is a bit difficult to explain but the behavior is not. This device uses internal feedback between the two NOR gates to capture and hold a state that was set into it by the values on the two input wires. The state behavior is given in a table in the article. The two output lines are labeled Q and Q with a bar over it (meaning NOT Q: it will always have the opposite value from Q). Generally speaking we will only use the Q output. The way this device works is that if both input lines, labeled S and R, are set to zero, the output at Q remains whatever value it had previously been set to. In other words, it represents the memory of the last transition (from 1 to 0 or from 0 to 1). The operation table shows you what you have to do to S and R to get the Q line to be either a 1 or a 0. The fourth situation, where both S and R are 1s, is not a viable condition; it is ambiguous, so this input combination is unlike anything in the AND or OR gates.
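The latch's feedback loop can also be simulated in a few lines of Python. This is a sketch of the standard cross-coupled NOR construction, not the Wikipedia article's exact drawing: the output of each NOR gate is fed back as an input to the other, and we simply re-evaluate the pair until the outputs settle.

```python
def nor_gate(a, b):
    # An OR gate wired into a NOT gate.
    return 0 if (a == 1 or b == 1) else 1

class SRLatch:
    """Two cross-coupled NOR gates; Q and Q-bar feed back into each other."""
    def __init__(self):
        self.q, self.qbar = 0, 1   # assume the latch starts out reset

    def step(self, s, r):
        # Propagate the feedback a few times until the outputs settle.
        for _ in range(4):
            q = nor_gate(r, self.qbar)
            qbar = nor_gate(s, q)
            if (q, qbar) == (self.q, self.qbar):
                break
            self.q, self.qbar = q, qbar
        return self.q

latch = SRLatch()
latch.step(1, 0)   # set:   Q becomes 1
latch.step(0, 0)   # hold:  Q stays 1 -- this is the memory
latch.step(0, 1)   # reset: Q becomes 0
latch.step(0, 0)   # hold:  Q stays 0
```

Notice that with S = R = 0 the code simply re-derives whatever state the feedback loop last held, which is exactly the memory behavior described above.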
What is important to note in this example is how we built up a slightly more complicated box by combining several less complicated boxes, wiring their outputs to inputs. Also note that we have done this by design, so we can't call this device a black box because we know how it works. This is, instead, a white box: its workings are fully disclosed (we might have called it a transparent box, but that is too many syllables!). One can transform a black box into a white box if one can dissect the black box without destroying the workings. For machines this is usually straightforward. Note that certain dissections of other kinds of systems may destroy their capacity to work, and we will lose information about them by the act of taking them apart.
Putting things together in cool ways
At this point we can start to do some very interesting things. Boolean logic, while interesting to a point, is very simple stuff. But being able to do arithmetic with an automated machine is starting to get really interesting (OK, so today we take calculators and computers for granted, but that doesn't mean people know much about how they work inside!) We have all the elements we need to do this with the above logic gates and the memory device, the latch. We can put them together to create a device that can add two single bit binary numbers together. Binary arithmetic is based on the ability to represent binary numbers (where the digits are only 0 and 1). A small example may suffice if you already understand positional value notation, as in the decimal system where the digit to the left is worth ten times the one to the right. It works the same in binary except that the digit to the left is double the one to the right. So in binary we have the following numbers: 0 (decimal zero), 1 (decimal one), 10 (decimal two), 11 (decimal three), 100 (decimal four), and so on.
Zero and one are pretty obvious. The decimal number 2 is formed in the same way you form the number 10 in decimal: by adding 1 to 9 (which gives a 0 in that column) and carrying the 1 to the next column to the left. For a binary representation of 2 we simply add 1 and 1, leaving a 0 in that column and carrying a 1 to the next column. Here is an example of adding decimal 3 and decimal 4 together in binary format:

  0011
+ 0100
------
  0111
Starting from the right-most column, 1 plus 0 = 1, no carry; same in next column; and 0 plus 1 also = 1. Hence the answer, 0111, is decimal 7 as expected. By the way the leading zero has a purpose I'll get to next.
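The positional rule (each digit to the left worth double the one to its right) is easy to apply mechanically, which is a handy way to check the worked examples. A quick Python sketch (the function name is mine, just for illustration):

```python
def binary_to_decimal(bits):
    # Scan left to right: each step doubles the running total before
    # adding the next digit, because each position to the left is
    # worth double the one to its right.
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

binary_to_decimal("0111")   # decimal 7, as in the example above
```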
Now let's do something a bit more interesting. Add decimal five and decimal four, that is:

  0101
+ 0100
------
  1001

which is decimal nine.
The difference here is that when you add a binary 1 to another 1 the answer is 10, or you might say 0 carry a 1 to the next column.
So what does all this mean? We can put the component boxes that we've developed so far together in a way that allows us to add two four-bit binary numbers using just logic gates, along with sets of memory cells (latches) wired together to form what we call registers (four-bit registers). We need three of these: two to hold the operands and one to hold the result. The description of how this is accomplished is more than I want to get into here, so I will let the interested reader refer to the Wikipedia article on adder circuits. The main point I want to get across is that starting with some reasonably simple components we have built a device that can add two numbers together. The number of bits in a register is rather arbitrary. We showed a four-bit value, but in real computers the registers are generally 16, 32, or 64 (sometimes 128) bits wide. As homework you should try to find out how big a decimal integer can be represented in a 64-bit register. There is also an interesting fact about how negative integers can be represented, but this isn't really a course in computer architecture so we will leave it at that.
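Though I'm leaving the full circuit description to the Wikipedia article, the gist is easy to sketch. The textbook full adder (a standard construction, not necessarily the article's exact diagram) combines XOR, AND, and OR gates to add two bits plus a carry; chaining four of them, each passing its carry to the next, adds two four-bit numbers:

```python
def full_adder(a, b, carry_in):
    # One-bit full adder built from XOR (^), AND (&), and OR (|) gates.
    s = (a ^ b) ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_4bit(a_bits, b_bits):
    # a_bits and b_bits are lists of four bits, least significant first.
    carry, result = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry   # final carry is the overflow out of the top bit

# Decimal 5 + decimal 4, written least significant bit first:
add_4bit([1, 0, 1, 0], [0, 0, 1, 0])   # -> ([1, 0, 0, 1], 0), i.e. binary 1001
```

Note how the carry "ripples" from column to column, exactly as in the pencil-and-paper addition above; this arrangement is called a ripple-carry adder.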
But with the assumption that you can represent and add negative integers you suddenly get subtraction from the simple ability to add. And once you have subtraction you have all of arithmetic. Multiplication is just repeated addition: 2 x 3 is just 3 added to itself (3 + 3 = 6). Division is repeated subtraction, with the caveat that you need to make arrangements to handle a remainder value.
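This reduction of all of arithmetic to addition and subtraction is worth seeing spelled out. A minimal sketch (deliberately naive, for non-negative integers only):

```python
def multiply(a, b):
    # Repeated addition: accumulate a, b times over.
    total = 0
    for _ in range(b):
        total = total + a
    return total

def divide(a, b):
    # Repeated subtraction: count how many times b fits into a;
    # whatever is left over at the end is the remainder.
    quotient = 0
    while a >= b:
        a = a - b
        quotient = quotient + 1
    return quotient, a

multiply(2, 3)   # -> 6
divide(7, 2)     # -> (3, 1): quotient 3, remainder 1
```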
By now, though, I think you get the picture that more complex machines are built up from simpler machines, combined and recombined in ways that increase the kinds of functions that they can perform. An adder circuit, plus some additional logic gates, can produce a device that we call an 'arithmetic-logic unit', which is at the heart of all computational devices. Along with a bunch more registers to store intermediary results and keep track of operating instructions and where we are in the program, and a set of control logic circuits to control sequencing through the program, you have a 'Central Processing Unit', the brain of a computer.
Add huge banks of registers, called memory, wires to get the bits in the CPU to and from the memory, and some specialized logic/register devices for input and output and you have the modern computer. Simple, no?
OK, maybe it isn't so simple when you look at the big picture (or inside a modern computer). At least it doesn't look simple. But in fact it is fundamentally very simple when you realize how it is built up by combining these few simple circuits into an ever-deeper hierarchy. Note that the complexity of a computer comes not from having a vast number of different component types (unlike chemistry, which has 92 naturally occurring elements to play with) but from the way a few components are recombined into a hierarchical structure. Each new level provides seemingly new functionality.
The computer is a wonderful example of both hierarchical complexity and evolution! Most of you can remember when the home computer was a largish box with small memories and primitive input/output options. Remember the days before the Internet? But engineers are always tinkering and thinking up new ways to recombine components to get more speed and function. Some of those designs work well and are picked up by the marketplace. Others wither and die; even though they worked in a technical sense, they weren't fit to survive the rigors of the market.
The computer and its related technologies illustrate even more interesting aspects of systems. The computer is a 'perfectly' deterministic machine (as long as the parts don't fail). If even one tiny bit, especially in the CPU, goes out, the whole machine crashes, even though that bit only talks to a few other bits in the machine (remember, mech-systems have much sparser interconnections among parts). Computers are examples of brittle machines. Engineers are beginning to learn from nature, biology specifically, and starting to look for ways to reduce this brittleness. For example, having multiple 'cores' or more than one CPU working in concert helps. If one CPU fails, the other can keep going. It may slow things down but there won't be a catastrophic failure of the whole computer. This is the result of redundancy. Each component is still brittle, but by having several available, if one goes down another can pick up the work load. In the future, engineers hope to develop less brittle, more 'adaptive' components. Redundancy can be quite costly, so having a component that may get sick but still perform acceptably by adapting to its hurt (like limping) could be much better.
Finally, while a computer might be a very deterministic machine, it turns out that we can now build elaborate, complex systems out of myriad computers, like the Internet, where overall behavior becomes stochastic. That is, it becomes subject to the laws of probability. This is especially true for computer systems that interface with the real world through sensors and effectors, like robots. The real world is highly stochastic, so the states of inputs can never be sufficiently determined for the computer. This is how Mech-Systems can actually have more life-like behavior, or at least probabilistic behaviors. Even man-made machines are getting so complex that we cannot always be sure of their performance. It is an example of how perfectly deterministic subsystems can, when interacting through a noisy medium or with external stochastic processes, become non-deterministic in their overall behavior. It is not at all unlike the way non-living atoms, when combined in particular configurations and interacting with one another in a warm watery bath, can produce the phenomenon we call life.
If you've ever been trout fishing or even just hiking by a mountain stream you may have noticed that despite the swirling and splashing, first running fast, then slowing in tranquil pools, the stream has a very interesting organization. The energy flow that propels the water downward is the gravitational gradient. Water condenses high in the mountain, in the watershed, and collects in a low lying crevice in the topography. It then builds momentum flowing down the gullies and valley ways. Gravity and surface tension of water, along with the banks, help form the boundaries. They are not impermeable as water evaporates or splashes up on the rocks. Animals drink from the stream and carry the water away.
Energy also enters the stream in the form of nutrient runoff from the watershed: falling tree leaves and other debris serve as food for insects, and the insects in turn for fish. Clearly the stream's energy is dissipated as it flows eventually to a lake or river or to the ocean.
With the exception of the life that the stream may harbor, it is basically a Mech-System in terms of the mechanical and chemical interactions it has with the immediate environment. Over time the stream's course will change as it carves new channels or reshapes the banks. During heavy spring snow melts the banks may be overrun, accelerating this process, but there is a limit to the extent of overflow, and the integrity of the stream system is maintained over time as long as precipitation continues annually. The stream overflowing is an example of the difficulty in choosing boundaries mentioned at the very beginning of this installment. If you were to see the stream only during one of these events, you might draw the boundary high above the 'normal' bank. Contrariwise, if you visited in late summer or early autumn you might choose a narrower boundary, as the flow rate has shrunk to a trickle by comparison. This is why systems require careful observation over long stretches of time and under many different environmental conditions, so that you can take all of these phenomena into account when choosing a boundary.
While it is easy to think of a stream as 'only' a Mech-System, it is an important example of one that forms the environment of living systems, Bio- and Eco-Systems. Indeed the interactions between the biome and the stream can have tremendous impacts on the evolution of the stream itself. For example, salmon streams are heavily influenced by the annual migration of spawning salmon up to the higher elevations, where they die after spawning. This has an incredible impact on the chemistry of the stream as their bodies decay. So, while the running water itself is a Mech-System, taking a wider boundary view can readily convert the system of interest into a full-blown Eco-System. This is an example of why there must be an on-going effort to observe all interactions at your chosen boundary, to make sure you have captured all of the dynamics that are (for you) relevant. This is not particularly easy to do for very complex systems, but it is the only way you will know. In a sense, we adopt the black box philosophy. We watch the chosen boundary very carefully and measure everything that goes in and comes out (Fig. 2). Careful metering of the inflows and outflows will tell us a lot about the choice of boundary we've made. But then, to really know, we need to take the white box approach to make sure the system we are studying has the interesting internal dynamics as well.
Figure 2. The only way to be sure of the correctness of your choice for a boundary is to monitor it closely and instrument (if you can) the inflows and outflows. The little 'meters' set on the boundary to measure all of the relevant flows provide a quantitative basis for a black box analysis of the system. The arrows leading out of the meters are data to be collected and analyzed.
As a final example of a natural Mech-System, consider a hurricane or typhoon. These circular patterns of high winds are actually vortices in which there is a vertical temperature gradient from the warm ocean surface water to the cooler upper atmosphere. Once a tropical storm system forms, and there is also a horizontal pressure gradient, heat can be conducted upward through the storm system in rising concentrations of moisture. The laws of physics governing the formation of whirlpools, with vertical acceleration of the fluid, cause the system to form a vortex, producing the winds. At the surface this increased wind speed enhances the evaporation and heat conduction even more. The heat is now being removed from the ocean surface by convection, and convection is a much more rapid means of carrying heat along a gradient.
The process is self-reinforcing as long as a temperature gradient exists (this is why hurricanes fall apart once they are over cooler land). Hurricanes are short-lived phenomena, compared with streams, but during their time, they develop considerably complex structures, such as the eye where no wind blows.
Note that the boundaries of a hurricane are actually harder to define than for a stream. One way, commonly used, is to mark the periphery where the wind velocity falls below a certain threshold. That is somewhat arbitrary, but it is based on years of observation and determination of just at what speed the winds are considered destructive. The air/water boundary seems easier to declare but with the wave action and evaporation pumping water into the air, it isn't all that easy to instrument!
Energy Flows and Dissipative Structures
These Mech-Systems, and all active systems, demonstrate a key fundamental principle of how systems organize and work. In every case the system exists within an energy gradient that "drives" the dynamics of the system. The components interact in constrained, that is lawful, ways, but it is the flow of energy through the system that makes things happen. Energy flows involve temporary residence, as it were, in internal micro-structures within the system's component interactions (the subsystems), and this energy, in potential form, can do very specific work before dissipating as waste heat. This is how the internal structure of systems is formed, how they become organized. In the case of the computer, or any man-made machine, new organizations come about through invention and product development rather than in situ. But in the case of a hurricane, organization develops as a result of energy flowing in real time.
As long as energy flows from a high-potential source, through the system, to a low-potential sink (the gradient), the system will tend toward greater organization of microstructures that are best (most fit) at dissipating heat by doing internal work. This will continue from the time the initially disorganized system first finds itself within the gradient until a time when no better dissipative structures can develop: all the options have been tried by trial and error and no new structures emerge. This is the steady state, which will be maintained so long as the same energy gradient applies. If the gradient intensifies or diminishes the system will respond accordingly. There will be much more to say about this energy flow phenomenon in the next two installments, on Bio-Systems and Eco-Systems, where the effects are truly astounding. For now, just recognize that dynamic Mech-Systems are just as subject to the organizing principle of energy flow as all other systems.
* Thanks to Charlie Hall, at SUNY-ESF for the inspiration for this example. He is teaching a Systems Ecology course (I'm sitting in on it) in which the students took a field trip to a local stream where they took samples of all sorts of parameters. They brought their samples and data back to the lab where they are now in the process of building a systems dynamics model of the stream.