Perhaps fools do rush in where wise men fear to tread. The territory we call consciousness studies is fraught with dangers, intellectual as well as professional (for a scientist). Philosophers have never felt any danger (sometimes quite the opposite) because their job is simply to raise interesting questions about the phenomenon. They don't have to explain how it comes about. René Descartes was content to just declare, “Cogito ergo sum,” and call it a done deal.
Nevertheless, the subject cannot but intrigue the scientist who contemplates how the brain works. After all, the brain, working, produces mind, and minds experience consciousness (at least I, like Descartes, think I do; the rest of you may be zombies for all I really know!).
Having touched on consciousness in my explorations of sapience (Part 1 with links to subsequent parts), and feeling that I have a good working theory of how wisdom comes about from that brain basis, I think this is a natural turn to take. This is the first part of a more in-depth exploration of that devilishly hard problem of what consciousness is. I guess you could say I'm feeling foolhardy.
The So-called “Hard Problem”
It seems we need our “mysteries.”
According to Chalmers there are “easy problems” associated with consciousness. For example, the mere processing of external stimuli, recognizing what they are and where they come from, is easy enough to explain from brain theory alone. For Chalmers and many other philosophers of mind the real problem is subjective experience. That is, how do the stimuli evoke subjective experiences such as “redness”, what are called qualia or “phenomenal experiences”?
This is where we run into significant rhetorical problems. As soon as we say an experience is subjective we are making a claim about our own experience, not a claim about another's experience. It is impossible to say that Carl experiences redness when looking at an object that I experience as red. At best Carl and I can agree that whenever looking at an object that I experience as red, he reports that he also experiences something he calls redness. We agree that some kind of visual experiences are consistent across objects. We both use the same name for it. And when I tell Carl that the object I just saw (which he did not see) is red, he understands what I mean in terms of his own experience. It is because of this property of consistency across shared experiences that we might readily conclude that redness is not actually only a subjective experience. There is some physical regularity in the way a human brain interacts with reflected light waves that leads it to register the same basic quality that almost all other human brains do. I submit to you that while the issue of qualia may keep philosophers up at night, it is not a real problem when considering the nature of and brain mechanisms for producing the phenomenon we call consciousness.
There are, however, significant semantic issues involved in grappling with the idea of consciousness. When I write, “I saw a red object,” what exactly is the I (in both instances in this sentence)? There is a symbolic referent, I or me, that is used linguistically to identify the agency of a biological system. But more than that, and what is for me the truly hard problem, is that there is a locus of experience and thought that feels an identity and ownership of those experiences and thoughts as well as of the body in which it seems to reside. I can talk about “my body” as if it is a thing that does my bidding and is used to interact with the world. The I inside seems to be unique and, in a sense, somewhat isolated from the body. You will recognize this as the ancient mind-body problem so often argued by philosophers.
Famed neurologist and author Antonio Damasio (2000) tackled this problem head-on in his work, The Feeling of What Happens. Rather than ponder what consciousness must be from an armchair, Damasio has been examining the brain, its functions, and their correspondence with reported subjective experiences as well as behaviors. I have found his arguments (paraphrased below) quite convincing as far as they go. They do provide a more solid ground to start from than introspection alone. My own approach is, in a sense, similar to Damasio's but working from a kind of reverse engineering process. My work on autonomous agents starts by attempting to emulate the brains of very primitive creatures such as a snail, paying particular attention to the critical role of memory trace encoding in neuronal synapses (Mobus, 1994). It is my contention that this is the first problem to be solved before attempting to emulate whole brains. It is absolutely essential to understand the dynamics of this encoding in order to solve certain critical problems in memory trace behaviors that we know affect long-term behaviors in all animals. My immediate goals are to build brains that are progressively closer to mammalian capabilities (not necessarily human, by the way). This will be demonstrated by their capacity to adapt to non-stationary environments and still succeed at a given mission objective.
I think the answer to consciousness lies in the evolution of brains from those primitive versions up through mammals and to humans. I have elected to try to emulate the stages of brain evolution by simulating biological-like neurons and their dynamic interactions in brain-like structures (e.g. the hippocampus and its analogues in reptiles). Essentially I seek to grasp how the brain works by recapitulating its evolution.
Jeff Hawkins (2004), of PalmPilot fame, is also attempting to reverse engineer the brain (especially at the human level) but is most interested in the neocortex of mammals and humans as a way to emulate human-level (or human-like) intelligence. His approach has been a more top-down one in which he has focused on what he believes (and I agree) is the role of the cortex: a memory-based prediction processor that can form invariant representations of things, causal relations, and interaction dynamics in the world, allowing its possessor to visualize the future based on experiences learned in the past. I feel he is closer to understanding real intelligence than all of the classical artificial intelligence and artificial neural network researchers combined! Real, natural intelligence will never be simulated by a program. It will only be emulated by a program that simulates the necessary details of brains. I think we can do this on a computer, but probably not for a brain as complex as the human's. I will be happy if we can get to something a little more advanced than a lizard, for example a mouse.
I must say I think Hawkins' approach, while having the advantage of providing a kind of top-down framework for generating hypotheses about intelligence, will run into difficulty because he has not spent time understanding the way in which neurons (all of them) encode synaptic efficacy as the basis for memory traces. Further, we now know that neurons are actively wiring and rewiring as a result of experiences. New synaptic junctions are formed, especially between distant clusters, and the mechanism for doing this involves the dynamic behavior of existing synapses and the epigenetic controls on genes that encode, for example, channel proteins. My adaptrode model provides the basis for this mechanism and this too is one of my goals — to show how distant neural clusters can come to represent causal associations in a developing brain simulation.
The approach of reverse engineering takes the work of neuroscientists like Daniel Alkon (1985), Eric Kandel, and Larry Squire (2008), who showed how synaptic efficacy dynamics work, and of Damasio and others like him, who have painted a picture of how the mind works (similar to Hawkins' framework approach), and attempts to simulate the interacting parts in such a way that the whole thing works just like brains do, but in software and silicon instead of meat. I contend that it is the causal relation encoding dynamic built into synapses that is the key. And that can be simulated reasonably well.
In any case Hawkins seems interested in consciousness as an afterthought, a consequence of neurology (see Chapter 7 in his book). He seems focused on the issue of intelligent decision-making and never considers the nature of judgement or wisdom. In the chapter on the topic, consciousness, creativity, and imagination are treated more like epiphenomena of neocortex operations. I, however, am interested in the nature of consciousness from the standpoint that it is an essential evolutionary consequence of selection for fitness, and in how it emerges from the workings of the brain. I do not think it is an epiphenomenon — a simple but unnecessary consequence of brain workings (in fairness to Hawkins he may not really think that these “extra” phenomena are truly epiphenomena, but his treatment of them seems cursory and almost dismissive, so that it seems as if he does).
For Hawkins the objective is new technology to be applied to building useful tools; tools that are truly intelligent, meaning they learn from experience and can make good decisions. He sees these applications as specifically not being humanoid-robot-like, but rather for things like autonomous vehicles that do not have emotions or internal drives as animals (and humans) do. But for me the motivation is quite different. I seek to reverse engineer the brain and demonstrate its functionality in a working autonomous agent as a way to better understand biological brains! I build agents not to develop commercial applications but to understand better the brain itself. Frankly I suspect Hawkins, in excluding limbic functions like emotional content, will run into a barrier in his quest. As Damasio (1994) pointed out in his first book, Descartes' Error, essentially all of our memory traces are tagged with emotions or feelings derived from the limbic centers and based on the emotional context of the moment in which they are formed (see Chapter 8 — The Somatic-Marker Hypothesis). Damasio has concluded that the whole brain and body “... form an indissociable organism... ” (page 88) that probably cannot have parts isolated and function properly. Hawkins seeks to isolate the neocortex (and perhaps part of the thalamus and hippocampus) for his ‘intelligent tools’. I am skeptical that the learning algorithms he might apply to the neurons (synaptic plasticity) will do what he expects without an underlying motivational response system. But I wish him luck.
As long-term readers well know my ultimate interest is in the nature of wisdom and its effects on intelligence, creativity, and affect (emotions) as a necessary and evolutionarily emergent capacity of our human brains. I think consciousness is for something that is deeply tied to the nature of sapience. Perhaps, as I have speculated, the two phenomena are coextensive, i.e., come from the same brain structures that evolved in humans but are almost absent in lower animals. I suppose you could say that my ultimate goal would be to show how that can emerge (evolve) in brains by building something as proof of concept. As I said earlier, that won't be possible with the current generation of computers, even the most powerful supercomputers, or even through massive parallel processing over the Internet. But it should be possible to make advances in that direction that demonstrate the potential of the trajectory.
Believe it or not there are a number of researchers, both in and around the field of artificial intelligence (AI), who are studying Artificial Consciousness (AC). The study of AI has, itself, helped shed light on what we really mean by intelligence even if it has not been very successful in producing the general kind of intelligence we now recognize as the basis for adaptive behavior in autonomous agents like animals. I would claim that my own modest efforts have gone a long way to show an alternative approach that does do so. Those who have considered AC do recognize that if it is possible to produce consciousness (whatever it is) artificially it will certainly include, and start with, the capacity for adaptive autonomy by an intentional agent.
The beginning of this approach is now to consider how an animal is “aware” of its world and its self as it moves about sensing that world and its own body states.
Awareness — Self vs. Non-Self
The nature of consciousness begins with the nature of an agent's awareness. Even the simplest living organisms keep track of stimuli that originate in themselves versus those that originate in their environments. All animals, certainly, from the lowliest worm to human beings have neural mechanisms that track their own bodily positions and self-stimulation versus stimulations that originate from elsewhere in their environments. In other words, they keep track of self versus non-self. The need to do so is really pretty simple. Organisms need to react with appropriate behaviors to the impacts of stimuli coming from other sources. They do not need to react to stimuli from themselves. For example, a nudibranch (marine snail) needs to withdraw its gills if they are touched by other agents (live or not). It does not need to do so if it touches its own gills. So it has neural mechanisms that keep track of its own movements. It knows where every part of its body is relative to all other parts at all times. If it detects a sensation on the surface of its body while noting that its foot, for example, is curled up and is the source of the stimulation (its foot will also feel the touch of the gill), it does not need to react. Any other stimulation not accounted for by its neural tracking of self should be reacted to for safety's sake.
The circuit in Figure 1 shows how this is accomplished. The circuit compares sensory inputs from any of its externally focused senses, visual, auditory, or touch. These are compared with proprioceptive sensory information for correlations. In the nudibranch case above it will have a proprioceptive map that indicates where its foot is because it keeps track of how it moved that foot to its current location (nudibranchs are quite capable of such contortions!). It also receives touch sensory data from the gills in the location corresponding to where the foot is. Thus it can determine that no response is needed since it is self-stimulating the gills. Contrariwise, if the foot has not been moved to that location then it will conclude that something not itself has touched its gills and it will retract them immediately.
Figure 1. The distinction between self and non-self is differentiated by whether proprioceptive sensing matches external somatosensory inputs. A. If the proprioceptive input does not indicate that the self has produced the sensory input, then the non-self cluster is activated, indicating the need to attend to the stimulus. B. If the proprioceptive input does indicate that the self has moved and this correlates to the sensory inputs, then it is recognized as a self action. This kind of circuit is what lets you know that it is you scratching your ear and not someone else trying to be friendly.
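For readers who like to see a mechanism in code, the comparison in Figure 1 can be sketched in a few lines of Python. This is strictly a toy illustration of the logic; the function name and the set-based body map are my own invented stand-ins for the neural circuitry, not anything from the nudibranch literature.

```python
# Toy sketch (hypothetical names) of the Figure 1 circuit: a stimulus is
# attributed to the self only when the proprioceptive map already predicts
# contact at that location; anything unaccounted for triggers a reaction.

def classify_touch(touch_location, predicted_self_contacts):
    """Return 'self' if the body map accounts for the stimulus, else 'non-self'."""
    if touch_location in predicted_self_contacts:
        return "self"       # own movement produced it: no reaction needed
    return "non-self"       # unaccounted-for stimulus: react for safety's sake

# The curled foot touching the left gill is predicted; a touch on the
# right gill is not, so the gills would be withdrawn.
foot_contacts = {"gill_left"}
print(classify_touch("gill_left", foot_contacts))   # -> self
print(classify_touch("gill_right", foot_contacts))  # -> non-self
```

The essential point the sketch makes is that "self" is not sensed directly; it is computed as a match between prediction and input.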
Awareness is essentially the maintenance of somatosensory maps that keep track of every sensory input that is active at any given moment. These come from the external world and from the internal body. Proprioception, as just described, provides a map of the body's movable parts so that the animal “knows” at all times where its parts are relative to all other parts. It also supplies information about how much force, for example, had to be used to get a part where the motor commands directed it. This is used as feedback to help regulate the motor commands themselves. If little force is still accomplishing the task, then more force is not needed. This information can be used to determine the agent's relations with objects and media in its world. A second internal sensory map is interoception, or the sensing of physiological body states such as blood sugar levels or nitrogenous waste buildup in muscles. It includes hunger, hormone-driven effects like sexual urges, and pain reception. Some of these states, such as sexual urges, can be triggered by external sensory stimuli (the presence of the opposite sex's pheromones) but are sensed by internal sensors and relayed to the brain as body state information.
Even the most primitive brain maintains these three dynamic mappings that keep it aware of the state and position of the self and the state of the environment around it. In reptiles and below these maps are mostly processed in the nuclei-like structures of the lower and middle brain areas. Many are nonmalleable in the sense that they cannot learn new images or new behaviors. They provide instinctual behaviors. In amphibians and reptiles newer, more flexible, structures appeared. They are more cortical-like in architecture and they are flexible in the sense that they allow for non-instinctual memories (at least in short-term) to be encoded as new images from the environment. Such structures help quadrupedal mobility in more difficult to navigate terrains. They also allow more flexibility in reorganizing instinctual behaviors to achieve a more complex goal. For example mating rituals can be more elaborate and follow slightly different patterns in each instance based on current circumstances. This helps improve mating success and thus seems to have obvious selective advantage.
Figure 2. There are three sources of sensory input to the central nervous system. The exterioceptive senses are the ones we normally think about as the five senses along with a few others. The proprioceptive system keeps track of body movements and positions of parts relative to each other. The interoceptive system monitors internal body states and keeps a map of activity levels in the relevant subsystems. All maps are integrated into a “global” map of the self and its relation to the things and forces operating in its environment. This is the origin of awareness and can be found in some of the most primitive brains.
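To make the three maps and their integration concrete, here is a minimal data-structure sketch in Python. The class and field names are my own hypothetical labels for the maps in Figure 2, nothing more.

```python
# A toy "global map" assembled from the three sensory maps of Figure 2.
from dataclasses import dataclass, field

@dataclass
class AwarenessState:
    exteroception: dict = field(default_factory=dict)   # external senses (sight, touch, ...)
    proprioception: dict = field(default_factory=dict)  # positions of body parts
    interoception: dict = field(default_factory=dict)   # internal body states

    def global_map(self):
        """Integrate the three maps into one snapshot of self-in-environment."""
        return {
            "world": self.exteroception,
            "body_position": self.proprioception,
            "body_state": self.interoception,
        }

state = AwarenessState(
    exteroception={"touch": "gill_left"},
    proprioception={"foot": "curled_on_left_gill"},
    interoception={"blood_sugar": "low"},
)
snapshot = state.global_map()  # the integrated map of self and environment
```

The point of the sketch is only that awareness, at this level, is a continuously refreshed composite of the three maps, not any one of them alone.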
Yet even more elaborate and flexible mapping processors emerged in the form of the paleocortex. This structure may have evolved in dinosaurs, or at least in the last common ancestor of dinosaurs and birds, since the latter have similar structures. A cortical structure, as Hawkins and others have elaborated, allows much greater flexibility in making more complex associations between sensory inputs, leading to more complex motor outputs (behaviors). The maps shown above were replicated in these cortical structures but in a much more elaborate form. The paleocortex could process much more, and did so by acting as Hawkins' memory-prediction system, which, as I have shown, provides the basis for anticipatory (preemptive) actions.
The final stage of evolutionary expansion of brain systems, and the gain of unparalleled adaptivity, came with the emergence of the neocortex in mammals. In some respects not unlike the paleocortex, this ‘new’ cortex provides a much more powerful capacity to encode memory traces and make anticipatory guesses about the near-future state of the world. But even more important, the size and complexity of this subsystem allow the brain to manipulate concepts experimentally, to imagine a possible future that can be tested for possibilities before action is committed. For example, a predator such as a lion or wild dog can consider that it has often found food resources at particular water holes. When game is more scarce, the predator can then experiment with the idea that there might be other water holes some distance away where more game might be found.
You may question my use of the word “idea” here. But I mean it literally. Most of your ideas actually start out in the subconscious processing taking place in various parts of your brain. Only a very few of these ideas make it to the light of conscious awareness. Yet we know they are there because psychologists/neuroscientists have devised clever ways to elicit subconscious thinking and visualize it using fMRI and other dynamic imaging methods. Thus, though a predator like a lion might or might not have a conscious thought about ‘trying’ to find a new watering hole, the thought is there nonetheless. This is evidenced by the actual behavior of such animals, which has every appearance of premeditation. For my part I have several reasons to believe that lions and dogs actually do experience such ideas consciously. I also suspect they have an inner language that includes complex concepts in the form of noun-like and verb-like (including tenses) abstractions of the things in their world. Recent work in animal communications indicates that their body languages convey much more of their inner thoughts than we had previously considered. I will have to write about this at a later time. For now please accept that mammals have mental capabilities, made possible by neocortex, that allow them to work with concepts in ways very similar to our own.
From Brain to Mind
The neocortex alone, as simply an expanded version of the paleocortex, would not have resulted in the explosion of complex behaviors that gave mammals tremendous survival advantages. The other concomitant development in brain structure was the development and expansion of the prefrontal cortex, the lobes of cortex just behind the eyebrows. The frontal lobes were always the seat of associating environmental situations with appropriate behavioral programs, planning muscle contraction sequences, and then sending commands for those sequences at the appropriate timing intervals. The addition of the prefrontal cortex added a new feature, the ability to plan alternative actions coordinated with possible future situations, extending the ability to anticipate and adding considerable flexibility to behaviors (along with increased complexity). With the addition of temporal categories, past, present, and near-future, animals with prefrontal cortex could process the present situation based on past experiences and plan future actions.
Figure 3 shows a complete set of mappings and the information flows from sensory to planning to motor coordination. The new layer of map effectively observes what the sensory integration is producing and uses memory of past experiences to decide what motor actions would be needed. In this sense it is planning for the future by anticipating future outcomes. But in the primitive animals in which this map came into being, the future is just a very few seconds.
The figure includes the feedback through the environment resulting from the animal's behavior altering its relation to objects in the environment — essentially changing the environment (red arrow) relative to the animal's perceptions. The loop is continuous in time. The animal continually senses the environmental configuration of percepts and tracks how they change in the planning map. Those changes then give rise to new motor plans. Not shown in the figure are the internal feedback loops from higher-order maps to lower-order ones. I'll have more to say about this aspect in future postings.
Figure 3. Adding motor outputs requires the integration of sensory inputs and the coordination of motor outputs. This requires a higher-level map to plan actions that will need to be done in order to better position the agent in the environment. The sensing, planning, motor output, and feedback as the environment changes relative to the agent's perceptions is continuous in time.
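The continuous loop of Figure 3 — sense, plan from past experience, act, and thereby change the environment relative to the agent — can be sketched as follows. All the names and the toy "experience" table are invented for illustration; this is the shape of the loop, not a model of any real nervous system.

```python
# Toy sense-plan-act loop (Figure 3). Each cycle: read the integrated
# percept, let the planning map choose an action from past experience,
# act, and let the action change the environment (the red feedback arrow).

def apply_action(percept, action):
    # Acting alters the agent's relation to the environment.
    transitions = {"explore": "novel", "approach": "near", "withdraw": "far"}
    return transitions.get(action, percept)

def run_agent(environment, experience, steps=3):
    history = []
    for _ in range(steps):
        percept = environment["state"]              # sensory integration map
        action = experience.get(percept, "explore") # planning map lookup
        environment["state"] = apply_action(percept, action)
        history.append((percept, action))
    return history

experience = {"novel": "approach", "near": "withdraw", "far": "explore"}
trace = run_agent({"state": "novel"}, experience)
# trace: [('novel', 'approach'), ('near', 'withdraw'), ('far', 'explore')]
```

Note that the loop never terminates in the animal's case; the three-step trace here is just a window onto a process that is continuous in time.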
Connecting complex environmental situations and body states with actions to take was a major leap in agency, the ability to flexibly choose alternatives, some of which might be learned through experience. But it was still only a slight improvement in anticipatory behavior in being limited to the immediate future. In many ways this capacity was limited to the level of amateur game playing: if the opponent moves here, I should move there. Considerations of what the opponent might do two minutes from the present, let alone two hours, were not a factor.
As the evolution of more complex environments proceeded, selection for more behavioral flexibility became stronger. The behavior planning map expanded to provide more memory capacity for more complex situations encountered. At some point (probably in early mammals, monotremes) a new map emerged above the short-term planning map in Figure 3. In all likelihood this map evolved as many new organs/facilities often do, as a duplicated structure (the planning map) that was initially redundant, but later was free to evolve additional capabilities.
That structure is depicted in Figure 4 as an “Observer Model” sitting atop the action planning map. At this juncture the latter is more a short-term default map wherein actions chosen would hold under ordinary circumstances. But the higher-order map is capable of storing more implicit (and perhaps the beginnings of explicit, episodic) memories than could be accommodated in the lower map. This larger memory also includes longer time scales for memory retention. But more intriguingly, the higher-order map is a dynamic map in that it is capable of reconfiguration (generating new wiring schemes between concept objects) and hence, as a modeling “platform”, capable of generating multiple possible scenarios for the future. The time horizon for planning actions, and hence the length of the sequencing, expanded as well. The animals could consider behaviors further into the future than before.
Figure 4. At some point of complexity (environment and behavior) a new map appeared as an adjunct to the Action Planning Map. This map introduces longer time scales of “what happened” as well as “what may happen in the future”. This map is probably better called a dynamic model, in that it takes current status information and constructs refinements to models of how things work. The output from these models affects the current behavior.
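The distinctive thing the Observer Model adds is the ability to run scenarios "offline" before committing to an action. Here is one possible toy rendering of that idea (all names and scores are invented for illustration; real scenario generation in a brain would of course be a reconfigurable dynamic model, not a table lookup):

```python
# Toy Observer Model (Figure 4): before the default short-term plan runs,
# generate candidate future scenarios for the current situation and score
# each against remembered outcomes, then suggest the best to the planning map.

def observer_suggest(situation, scenarios, remembered_outcomes):
    """Pick the candidate action whose imagined outcome scored best in the past."""
    best_action, best_score = None, float("-inf")
    for action in scenarios[situation]:
        imagined = (situation, action)               # run the scenario "offline"
        score = remembered_outcomes.get(imagined, 0) # experiential memory
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# The water-hole example from earlier: waiting at a dry hole has gone badly
# before; seeking a new hole has gone well.
scenarios = {"dry_water_hole": ["wait_here", "seek_new_hole"]}
outcomes = {("dry_water_hole", "wait_here"): -1,
            ("dry_water_hole", "seek_new_hole"): 2}
print(observer_suggest("dry_water_hole", scenarios, outcomes))  # -> seek_new_hole
```

The suggestion is provisional, as described above: the planning map still executes it only if the anticipated situation actually arises.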
Note that this new capacity opened up new possibilities for exploring fitness space in mammalian evolution. The larger spatio-temporal scope of an individual's experiential memory, coupled with mechanisms for experimenting with possible scenarios, gave animals the capability to increase their tactical advantages considerably. The carnivores and the primates evolved this capability to maximum effect.
The reason I call this an observer model is that unlike the planning map, which directly innervates the motor coordination map, this map takes in what the lower-level maps are doing and constructs what amount to higher-order models of both the self and the environment over long time scales. The sense of “I”, with continuity across time, is a consequence of this modeling. I am reasonably certain that dogs, cats, other carnivores, cetacean, and primate species have an inner sense of self and identity associated with their life experience memories. It may be true for ungulates too (horse owners would probably agree). Maybe even lagomorphs (rabbits) too! Indeed, as I think of various mammalian species I have watched behave (e.g. squirrels and raccoons) I would guess they all have some sense of I-ness.
There is another sense that results from there being a sense of I. That is the sense of agency and will; the sense that I caused that to happen. This has to be fairly obvious from the fact that the observer is watching the motor outputs (behaviors) that change the environment (relative to the observer) as well as observing the actions of the planning map and what it was in the sensory maps that gave rise to it.
With the emergence of this observer, model constructor, model user, and scenario generator, we have the emergence of the mind. We have the origin of the sense of self as different from the lower-level functions (maps) because it is different. Lots of things could be going on in real time in the lower-level maps. This new higher-level map (model) is working in a different time domain. It is collecting experiences and consequences of past behaviors in those circumstances, which it uses to build anticipatory models of what should be done in the long run (well, some long run). It then provides the action planning map with provisional suggestions as to what sequence of actions it should take if such-and-such a situation comes to fruition. This new map allows the animal to deal with some ambiguity and uncertainty.
The new map is in the prefrontal cortex. Its actual work is to map longer-term and broader-scale concepts to all of the regions in the neocortex where the details of lower-level concepts and percepts are actually stored (e.g. parietal and temporal lobes, etc.).
The capacity to be aware of the environment, and the sense of the body as a basis for short-term behavior planning, is what I have called First-order Consciousness. The sense of self is primordial, consisting of knowledge from the proprioceptive senses that distinguishes whether what is happening is due to some factor in the environment (awareness) or to the animal's own actions. All animals, from the most primitive (with what we would probably call nervous systems) to human beings, have this fundamental consciousness or they could not act effectively (be fit) in their worlds.
A sense of self that also produces a sense of separateness, the sense of I, is what I call Second-order Consciousness. There is an observer in the brain that literally tracks both what is happening in the environment and what the body does in response AND proposes longer time-scale action sequences that should better situate the animal in the future. Fitness is greatly enhanced. The animal possessing this capability is able to adapt to multiple environmental configurations within limits.
What do we see with humans? In my next posting I will tackle the next level phenomenal experience — observing the observing! Humans are conscious that they are conscious. What does it mean?
Damasio, Antonio (2000). The Feeling of What Happens: Body and Emotion in the Making of Consciousness, Mariner Books.
Damasio, Antonio (1994). Descartes' Error: Emotion, Reason, and the Human Brain, HarperCollins Publisher, New York.
Hawkins, Jeff (2004). On Intelligence, St. Martin's Griffin, New York.
Kandel, Eric & Squire, Larry (2008). Memory: From Mind to Molecules, Roberts and Company Publishers.
Koch, Christof (2004). The Quest for Consciousness: a Neurobiological Approach, Roberts and Co.
Koch, Christof (2012). Consciousness: Confessions of a Romantic Reductionist, The MIT Press, Cambridge MA.
Mobus, George E., (1994). “Toward a theory of learning and representing causal inferences in neural networks”, in Levine, D.S. and Aparicio, M (Eds.), Neural Networks for Knowledge Representation and Inference, Lawrence Erlbaum Associates. [Available on-line: http://faculty.washington.edu/gmobus/Adaptrode/causal_representation.html]
Simulations, however, are not easy. A simulation is always an approximation to the actual system. We can never simulate the lowest level details. For example, my Adaptrode does not simulate the molecular interactions that take place in a neuron from synapse to genes. Such a simulation would provide greater accuracy by capturing the sub-dynamics that contribute to the whole phenomenon, but at the cost of needing much more computing power. We are always stuck with a tradeoff between accuracy and computational overhead. What we do is try to analyze the phenomenon and identify what we think is the sufficient level of accuracy producing the desired effects (think of curve fitting approximating a non-linear time series). If there is a need to get the simulation to run in real time, then the constraints on level of detail are much more severe. A computer simulation of thousands of synapses with firing frequencies of 200-300 Hz requires a substantial amount of trimming of detail! Time will tell if the Adaptrode equations suffice.
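To illustrate the style of approximation being traded for speed, here is a deliberately lumped update rule: one cheap operation per synapse per tick standing in for the whole molecular cascade. To be clear, these are not the Adaptrode equations; the form and the rate constants are invented purely to show what "trimming of detail" looks like in practice.

```python
# Hypothetical lumped synapse update (NOT the Adaptrode model): replace the
# molecular cascade with one first-order change in efficacy w per tick,
# trading fidelity for a cost low enough to run thousands of synapses.

def update_efficacy(w, spike, rise=0.1, decay=0.01):
    """One cheap per-tick update of synaptic efficacy w, kept in [0, 1]."""
    if spike:
        w += rise * (1.0 - w)   # potentiate toward a ceiling on activity
    w -= decay * w              # slow passive decay otherwise
    return w

# Roughly periodic spiking input; w rises with activity and decays between spikes.
w = 0.0
for t in range(100):
    w = update_efficacy(w, spike=(t % 4 == 0))
```

At thousands of synapses and a few hundred Hz, one such update per synapse per tick stays tractable where a molecular-level model would not.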
The use of the term 'map' may be confusing, but the processing 'modules' responsible for handling sensory inputs literally map the array of inputs (think of the retina as a two-dimensional array of light-sensitive cells) to higher-order processing modules. Unlike static roadmaps, however, these neural modules are dynamic maps that track inputs across the sensory field, thus changing where activity is located based on what they are mapping. For example, think of the visual inputs from the retina as the eye moves. The objects in the field of view are moving relative to the map itself. Imagine a lattice made of rubber. An object in the field of view is like a distortion in the lattice, say pushing a finger down on it. As the eye moves and the object remains stationary, it is like moving your finger across the lattice so that the distortion affects different regions.
Here the term environment refers only to the affective environment of the animal; essentially only those forces it can detect and objects it can recognize. For worms and snails this is a pretty limited environment. For humans it is clearly much larger. Nevertheless, there are many aspects of one's immediate environment that one cannot sense directly yet they can have causal impacts on the individual.
A cortical structure is a sheet of micro-modular units (cortical columns) that are arrayed in a matrix arrangement. The sheet is divided into regions (and likely sub-regions) that are responsible for processing various representations. The sheet can be imagined as being laid out with regions near one edge (actually the back of the brain in the neocortex) devoted to low-level sensory inputs from all modalities. These are passed to the next regions, which extract meaningful conceptual images from the inputs from the “lower” regions. That is, the outputs from the sensory regions are passed to the integration regions. It is also notable that there is a tremendous amount of feedback from the integration regions to the primary sensory regions. The outputs from the integration regions pass further along the sheet to object recognition, and from that to whole-field (situation) recognition. From there the behavior selection processing is done in the planning or pre-motor regions. Finally, motor outputs are processed in regions at the far other edge of the sheet and the outputs are sent back down to the motor control nuclei in the central and lower brain areas for passing to muscles, etc. This is a, perhaps overly, simplified description. I plan on devoting some future writing to elaborate on this subject.
I say the environment evolved because an environment includes all other relevant species and environments, and the various interacting genera coevolve. Sometimes such coevolution involves an “arms race”, as between prey and predator, called the Red Queen race.