The Origin of Self
In the prior three postings on the subject, Exploration of Consciousness, I have been developing a concept of human consciousness as it relates to sapience. I am following ideas put forth by the neurologist and neuroscientist Antonio Damasio, as given in the bibliography, but I have some additional thoughts to add, drawn from the work of other scientists, especially research on wisdom, as well as from my own work on the hierarchical cybernetic model of the brain. I have been using some of the same terminology as Damasio so that readers who want to follow up and get more details can read his books and find consistency between what I am writing here and his work.
Damasio's insights regarding the origins or sources of consciousness are particularly useful and take a different tack from the usual philosophical and psychological approaches. Damasio begins with the fact that brains are about life management, starting with the oldest, most primitive forms. Evolution has favored increasing management efficacy because it gives its possessors increased fitness; i.e., they can exploit more complex environments if they have more complex information-processing ability and increased behavioral flexibility.
But with increasing brain complexity comes the need for the hierarchical organization of the control functions. The modern human brain represents the epitome of a hierarchical cybernetic system, complete with high-level planning (strategic management), very complex tactical and logistical coordination, and, of course, a panoply of highly adaptive operational level functions. And this latter level is where Damasio starts his story of the construction of consciousness in the human mind[1].
His main argument is that consciousness is built on three basic constructs: wakefulness (alertness, etc.), mind (the structured and unstructured processing of images), and the ‘self’ (an ongoing narrative of the state of the individual's biology) (Damasio, 2010). In particular, the first emergence of a self (as I described in the first post) is based on an ability to differentiate between sensory inputs that originate in the body, or are caused by the body, and those that originate from outside the body. The very first discrimination a brain has to make is between those sensations (stimuli) that are caused by non-self and those that are caused by self. In addition, however, the brain maintains what Damasio has called a ‘map’ of the internal milieu of the body, the conditions in the viscera and blood that are linked to homeostatic mechanisms. The brain monitors these conditions and manages the response mechanisms, such as hormone and neuromodulator secretion. It maintains a continual mapping of body states as they change and responds accordingly. This level (operations) is basic cybernetic feedback control.
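To make that operational level concrete, here is a minimal sketch in Python of what basic cybernetic feedback control amounts to, assuming a single scalar variable such as core temperature. The set point, gain, and function names are illustrative assumptions on my part, not claims about actual physiology.

```python
# Minimal sketch of operational-level feedback control (illustrative only).
# The brain compares a sensed internal-milieu value against a reference
# (ideal) state and issues a corrective response; names and numbers are
# assumptions for the sake of the example.

def homeostatic_step(sensed_value, set_point, gain=0.5):
    """One cycle of basic cybernetic feedback control."""
    error = set_point - sensed_value      # deviation from the ideal state
    return gain * error                   # corrective response, e.g. a secretion rate

# Example: core temperature drifting back toward its reference value.
temperature = 36.2
for _ in range(10):
    temperature += homeostatic_step(temperature, set_point=37.0)
print(round(temperature, 2))              # approaches 37.0 as the loop closes
```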
The most primitive brains maintain these active maps, or what I have also been calling models[2]. In this case the brain has a genetically endowed model of the body, with sensory inputs from all parts of the body feeding into that model, which is continually updated. The model contains reference states (ideals) that are compared with the current state to generate homeostatic responses as needed. What is left in the brain is an ‘image’[3] of the body state at each instant. In the most primitive of brains that is the end of it. New states generate new images. But in evolutionarily more advanced brains these images are made available to higher-order maps (see Figure 2 in Part 1 - Exploring Consciousness), where multiple sensory images are integrated in order to produce more complex behavioral actions. These higher-order maps may maintain something like working memory, or at least some kind of short-term memory traces. They are also subject to modulatory inputs (e.g. neuromodulators from the homeostatic mechanisms) that can affect further processing, acting as feedback to this level from the lower levels. The residual pattern images are part of a coordination level of cybernetic management. Their persistence over slightly longer time scales allows the spatial-temporal integration of patterns, which can then provide coordination feedback to the lower levels. Damasio has described this phenomenon as the basis for what he calls “feeling”. It may be hard to imagine a fish, for example, having feelings. But this is because the feelings that humans have are at a higher level and are interpreted by consciousness. There are multiple levels of a self, each with its respective level of feelings.
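The coordination level can be sketched the same way. The following toy Python class, again purely illustrative, holds recent lower-level ‘images’ as decaying short-term traces, integrates them, and returns a feedback value to the levels below; the decay constant, the averaging rule, and the representation of an image as a single number are all my own simplifications.

```python
# Toy sketch of a coordination-level map (illustrative assumptions throughout):
# lower-level body-state images are kept as decaying short-term traces,
# integrated over time, and modulated before feeding back to lower levels.

class CoordinationMap:
    def __init__(self, decay=0.8):
        self.decay = decay        # how quickly a trace fades (0..1)
        self.traces = []          # residual images persisting a bit longer

    def receive(self, image):
        """Age existing traces and add a new lower-level image."""
        self.traces = [t * self.decay for t in self.traces]
        self.traces.append(image)

    def integrate(self, modulation=1.0):
        """Spatial-temporal integration, scaled by neuromodulatory input."""
        if not self.traces:
            return 0.0
        return modulation * sum(self.traces) / len(self.traces)

coord = CoordinationMap()
for image in [0.2, 0.5, 0.9]:                  # successive body-state images
    coord.receive(image)
feedback = coord.integrate(modulation=1.2)     # coordination feedback to lower levels
print(round(feedback, 3))
```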
The primitive parts of brains, no matter how advanced the brain, generate what Damasio calls the protoself, the primitive self that worms and fish experience. In Who is I? I provided some aspects of Damasio's extension of the theory of self, showing how more complex brains produce a ‘core’ self, the first level at which a self is also aware of itself being affected by the environmental situation. From there, the most complex brains, with extensive neocortex, go on to produce an autobiographical self, having memories of both the biographical past and the projected future (plans and imagination).
The Strategic Self
Damasio gives a good account of how the biographical self emerges as a “protagonist” (his terminology) with a private sense of agency. His argument for the evolutionary fitness of biological management of the self in an increasingly complex world, likewise, sits well. The autobiographical self embedded in a mind that is extraordinarily competent at manipulating abstract mental models is clearly advantaged in survival and reproductive success in a world of so many ecological opportunities. Humans were obviously successful in invading and exploiting almost every ecosystem on the planet.
Human consciousness is not merely autobiographical. It includes a planning model above the super-observer model shown in the Who is I? post. This model answers the question, “What should I do?” But the question being asked is not simply what should I do next (short-term); that is a tactical question. Rather, the planning model extends the time, spatial, and social horizons greatly. The time horizon can extend over years. The spatial horizon can extend as far as the mind has been exposed to stories of foreign lands, even to alien worlds. And the social horizon extends not just to kith and kin (the tribe) as cooperators, but to perfect strangers as long as there is a context for association.
Members of the species Homo sapiens think strategically in the sense that they can consider all of the past that they have informational access to, the current situation (again, as far as they have informational access to it), and, with a sense of desired outcomes in the future, they can lay plans of action to get them to those outcomes. This can be done consciously, of course, with some effort. The farther out the horizons anticipated, the more effort required. But somewhat ironically, the vast majority of humans do not really think strategically in a conscious fashion. Their brains are still engaged in planning, but they are doing it subconsciously, and they only become aware of the results as their intuitive judgments bias them to act in ways that tend to further their plans. Their tactical management thinking, that is, “What should I do next?”, is what they are aware of, and their intuitions provide the answers from deeper in their strategic management brain. The vast majority of people muddle through life unaware that their brains are trying to keep them alive and in the best possible situations.
For the vast majority of people in the world strategic thinking is not foremost in their minds because by the time they reach adulthood they have settled into something of a routine. As long as their environment does not really change that much they do not need to lay additional plans beyond the limited horizons they have already mastered. Strategic thinking, for most people, is something they do more as teenagers than as fully developed adults. We often call it daydreaming or imagining.
On some rare occasions an individual is born with a tendency to maintain a youthful mental condition that allows them to continue dreaming long into their adult lives. These are the explorers who wonder if the future would be more rewarding somewhere else. Among this lot, but rarer still, there are individuals who do not merely dream and wander in exploration, but people who consciously ask questions about the future and distant locations and peoples. These are the true conscious strategic thinkers. They are not just explorers, but intentional explorers who are seeking more information, better knowledge, and looking for the best solutions to living in a changing and dangerous world.
Such people experience a much broader (bigger) form of consciousness. Their considerations of the future are not merely fantasies (wishful thinking about the future), but realistic assessments of what is likely to happen in their world that will impact them and their kin, and of what they should do about it in advance. They too have intuitions, but their judgments take so much more into account, and are so much better organized by broad systemic knowledge, that they are more often veridical (i.e. correspond to what actually ends up happening) than the average person's.
The strategic self, whether operating in consciousness or just below the surface, might seem to be the epitome of self and is more or less where Damasio ends his model except for noting that consciousness itself is not the epitome of human cognition. He views ‘conscience’ as occupying a higher level than mere consciousness. After all, the worst criminal minds are conscious but clearly lack a conscience.
Conscience is what makes us more humanistic. In my model of sapience I point out that our judgments are influenced as much by moral sentiments (cooperation, altruism, empathy), which are the basis of our social nature, as by our rational thoughts. We need not be conscious of our conscience in the same way we need to be conscious of our rational thoughts, and, indeed, we rarely grasp exactly why we perform acts of friendship and kindness other than to say it is our nature. We can become aware of the emotions that attend these acts, as well as, of course, negative emotions. When we do something wrong we feel regret and possibly shame. But these remain just feelings. What a strong strategic mind will do under these circumstances is re-analyze the conditions and acts that led to these feelings and think about how to correct for mistakes (how to try to be a better person in the future).
So, my theory of sapience and Damasio's model of consciousness + conscience seem to be complementary and cover a lot of ground in trying to explain the brain basis for what we humans experience as subjective reality. But, as I indicated at the end of the last post, there seems to remain one more aspect of awareness that is not yet accounted for. The planning model of Figure 3 in Part 2 provides the basis for a strategic self. But, as I argued above, the scope of the various dimensional horizons is generally limited, and most people do not really consciously plan out very far. The answers to the question, “What should I do?”, are bounded by the limits of the tacit knowledge that most people have. However, the same brain region that provides the basis for this planning model can be even further expanded in some individuals so that their horizons extend much farther. In Figure 1 I show an expanded planning model.
Figure 1. The highest-level cortical map, which resulted from the expansion of BA10 in late human evolution, provides a feeling of hyper-consciousness. The new map extends and expands the planning map of earlier species of Homo.
The Ineffable Self
As with conscience, there is no need for the expanse of this model to be in consciousness for it to have its effect on the basic planning model processing. This particular model takes the questions of who have I been, who am I now, and who shall I become to a much greater scope. The figure shows the ineffable self wrapped around and integrated with the planning model because it is an extension of the latter, but with very different processing properties. The reason I call it ineffable is that one senses there is something real there but cannot observe it as a consciously held object. The protoself was observed by an observer model that interpreted what was happening as the organism's body states were affected by the environment, while also sensing what that environment was. This observer gave rise to the core self in Damasio's terminology. In turn, the super-observer model observed the changes in the core self and combined that situational analysis with the autobiography of the individual stored in memories to produce the autobiographical self, a being with a private experience of agency and ownership of thoughts. The feeling of self culminates in humans not only in an ongoing narrative of the self living in the world, but in a story, told in abstract symbols, of that narrative and of hundreds of parallel narratives about what else is going on in the world. The use of language allows a highly efficient mental capacity for dealing with extraordinarily complex worlds.
The next higher-order model is one that takes all that comes from the biographical self and transforms the narrative of past to present into a narrative of the future. As discussed above, sometimes this is explicit in conscious thought, but more often it is running in the background, so to speak. Nevertheless, its effects are felt and observed in the super-observer, in which self-consciousness operates.
With the expansion of the planning model to greater horizons something interesting happens. The effect of this expansion is felt in consciousness, but only as an ineffable sense of a super-I, an ultimate I that is observing all that comes from below but cannot cause the super-observer to form words to express what is happening. The super-observer is not observing that higher-order mental capacity. It receives influences (intuitions) from the planning model, but the effects of the “Grand Ineffable Sense of Self” are not observed directly. They are only experienced through the planning model's effects on the super-observer. This, I claim, is what makes the subjective experience of consciousness seem mysterious and unexplainable. Each one of us knows there is something else there besides our direct thoughts, but we have no observational access to what it might be. So the easy thing to do is imagine it as a separate spirit, a soul. Descartes' dualism fell victim to this, as have all of the more spiritualistic explanations over human history. We all have this private experience of something being there, but we cannot see it in any direct sense. Until the age of brain science we are currently in, there was very little else we could think. Introspection could not expose it, since only the super-observer could produce an analysis of what it observed, and the grand self was not in its view.
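To summarize the layering in schematic form, here is a toy sketch in Python of the chain just described. It is shorthand of my own, not an implementation of the models in the figures: each level can only report on the level below it, while the expanded planning model merely biases the super-observer and is never itself observed, which is why its influence is felt but not seen.

```python
# Schematic toy model (all names and mechanisms are illustrative placeholders).
# Each level observes only the level below it; the expanded planning model
# biases the super-observer's output without ever appearing in its view.

class Level:
    def __init__(self, name, below=None):
        self.name = name
        self.below = below                    # the level this one observes

    def observe(self):
        if self.below is None:
            return f"{self.name}: body states"
        return f"{self.name} observes [{self.below.observe()}]"

protoself      = Level("protoself")
core_observer  = Level("observer (core self)", below=protoself)
super_observer = Level("super-observer (autobiographical self)", below=core_observer)

def expanded_planning_bias(judgment):
    """Felt only as an intuition added to the super-observer's report."""
    return judgment + " + unattributed intuition from the expanded planning model"

print(expanded_planning_bias(super_observer.observe()))
```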
Of course this is not the definitive word on how the brain produces this thing we call consciousness or why some aspect of what we experience seems to defy description. The model I have presented is (hopefully) reasoned conjecture on my part based on several different but converging lines of neuropsychological research. Below is a general bibliography of reference works that I have tried to assimilate to construct this model. The study of the brain and its workings and evolution is an incredibly intellectually stimulating exercise. I heartily recommend it to anyone.
Where to Next?
In modern systems analysis one generally starts with an attempt to understand the whole system as a black box in order to situate its purpose in the grander environment in which it operates. You map and parameterize its inputs and outputs, and you observe its behavior. Then you start to decompose the system into its constituent subsystems, and those into their constituents, recursively, until you reach a natural stopping place for analysis. This is called top-down analysis: understand the whole before you attempt to understand the parts. The reason I have been mucking about in consciousness studies is that I am trying to get a handle on the top level of human mentation in preparation for such a top-down analysis of sentient systems. Such an analysis can lead simply to better, deeper understanding. But it can also lead to ideas for designing artifacts that emulate the natural systems.
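A bare-bones sketch of that recursive procedure might look like the following; the System class, the arbitrary stopping depth standing in for a “natural stopping place”, and the example decomposition (just the management levels mentioned earlier) are all illustrative assumptions.

```python
# Illustrative top-down analysis: treat the whole as a black box with inputs
# and outputs, then decompose into subsystems recursively. The System class,
# the stopping rule, and the example decomposition are assumptions.

class System:
    def __init__(self, name, inputs, outputs, subsystems=None):
        self.name = name
        self.inputs = inputs
        self.outputs = outputs
        self.subsystems = subsystems or []

def decompose(system, depth=0, max_depth=2):
    """Describe the whole first, then recurse into its parts."""
    print("  " * depth + f"{system.name}: in={system.inputs} out={system.outputs}")
    if depth >= max_depth:                # stands in for a natural stopping place
        return
    for sub in system.subsystems:
        decompose(sub, depth + 1, max_depth)

brain = System("brain", ["senses"], ["behavior"], [
    System("strategic level", ["models", "memories"], ["plans"]),
    System("tactical/logistical level", ["plans"], ["coordination"]),
    System("operational level", ["stimuli"], ["responses"]),
])
decompose(brain)
```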
In my PhD research I embarked on an agenda to emulate natural intelligence (not worrying about either consciousness or sapience!) in a machine. My strategy was to approach it from an evolutionary perspective. That is, I started by looking for ways to emulate very primitive brains that showed the capacity to adapt to changing environments[4]. I succeeded in emulating what I came to call a “moronic snail” brain in a mobile robot platform (MAVRIC). I was successful in getting some publications out on how I did it, but the field of Animatics (the simulation of animal-like behavior in robots) came to be dominated by the then-new non-traditional areas of artificial intelligence such as artificial neural networks (ANNs), fuzzy logic, and some Bayesian statistical learning methods. These methods were showing some interesting first results and had the benefit of being functionally (and mathematically) easy to understand. My approach of emulating brains by simulating very low-level functionality of real neurons (namely synaptic plasticity in multiple time domains) was ignored, and I got discouraged (as an academic). Fortunately my work had come early enough to garner the attention needed to support the granting of my tenure.
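For readers unfamiliar with the phrase, here is a generic sketch of what “synaptic plasticity in multiple time domains” can mean: a single synaptic efficacy split into components that strengthen and decay on different time scales. This is not a reconstruction of the MAVRIC code; the rates and the simple Hebbian-style update rule are arbitrary illustrative choices.

```python
# Generic sketch only: one synapse with fast- and slow-adapting weight
# components (different time constants). Values are illustrative, not taken
# from any published model.

class MultiTimescaleSynapse:
    def __init__(self):
        self.w_fast = 0.0    # changes quickly, decays quickly (short-term trace)
        self.w_slow = 0.0    # changes slowly, persists (long-term trace)

    def update(self, pre, post):
        """Hebbian-style coincidence update applied at two different rates."""
        coincidence = pre * post
        self.w_fast = 0.90 * self.w_fast + 0.50 * coincidence
        self.w_slow = 0.999 * self.w_slow + 0.01 * coincidence

    @property
    def weight(self):
        return self.w_fast + self.w_slow

syn = MultiTimescaleSynapse()
for _ in range(100):                     # repeated paired pre/post activity
    syn.update(pre=1.0, post=1.0)
print(round(syn.w_fast, 2), round(syn.w_slow, 2), round(syn.weight, 2))
```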
I had been sitting on the sidelines as far as this research was concerned. Indeed, I ignored the area because I found ANNs, for example, to be completely boring and never believed they would lead to the kind of sentient intelligence that remained the prize of AI. My robots had sat on the shelf for years while I went off to study energy, sapience, and then consciousness. And, wouldn't you know it, while I was ignoring the field several researchers finally started coming to the conclusion that the dominant paradigm in animatics was likely a blind alley. Concomitant with this dawning realization, neuroscience had started unraveling the way in which concepts are organized and represented in the neocortex. The story is complex, but it turns out that the main premise of representation in ANNs was wrong and my neural models were right! That started me thinking about the problems again.
So now, several decades later, I have a big picture of what cognition looks like. I have worked out some details of how a hierarchical cybernetic system works and have developed a scheme for how to construct a brain using that and my version of neural simulations. So I am off to the research races again.
Back in Jan. 2011 I posted a brief on Brain Complexity at Multiple Scales in which I ran over the various size and time scales in which relevant cognitive processing took place. This followed the evolutionary strategy from simple to complex, but also previewed the program I intend to pursue. Having already done a simple invertebrate brain I now seek to invade the cognitive space of reptiles! In some future posts I plan to outline the steps I will be taking. The ultimate goal is to construct a neocortical framework for building mammalian-scale brains. This is now in the realm of feasibility given the newest generation of microcontroller devices, sensors, and actuators for robots. I plan to follow Damasio's framework of giving my brains access to more complex body states so that they can incorporate a protoself and core self (the reptile). With the addition of a neocortical framework I hope to develop a capacity for an autobiographical self in a machine.
Here is why I am pursuing this now. Human civilization is about to collapse. We may end up in a new dark age, we may even go extinct. Or we may salvage some semblance of civilization but only for a few survivors. If the latter turns out to be the case humanity is going to need some help cleaning up the mess. Assuming we find some way to power them, really intelligent robots with consciences might help out.
It is possibly a Quixotic dream of a foolish old man. I'm sure many will see it as so. But when we stop dreaming is when we stop being human altogether.
References and General Bibliography
Alkon, Daniel L. (1987). Memory Traces in the Brain, Cambridge University Press, Cambridge UK.
Blackmore, Susan (2004). Consciousness: An Introduction, Oxford University Press, Oxford UK.
Bourke AFG (2011). Social Evolution, Oxford University Press, Oxford UK.
Buller, David J. (2005). Adapting Minds, The MIT Press, Cambridge MA.
Cacioppo, John T., Visser, Penny S., and Pickett, Cynthia L. (eds)(2006). Social Neuroscience, The MIT Press, Cambridge MA.
Calvin, William H. (1996). How Brains Think: Evolving Intelligence, Then and Now, Weidenfeld & Nicolson, London UK.
Calvin, William H. & Ojemann, George A. (1994). Conversations with Neil's Brain: The Neural Nature of Thought and Language, Addison-Wesley Publishing Company, Reading MA.
Carter, Rita (1999). Mapping the Mind, University of California Press, Berkeley CA.
Commons ML et al (eds) (1991). Neural Network Models of Conditioning and Action, Lawrence Erlbaum Associates, Publishers, Mahwah, NJ.
Damasio, Antonio (1994). Descartes' Error: Emotion, Reason, and the Human Brain, HarperCollins Publisher, New York.
Damasio, Antonio (2000). The Feeling of What Happens, Mariner Books, New York.
Damasio, Antonio (2010). Self Comes to Mind: Constructing the Conscious Brain, Random House LLC, New York.
Deacon, Terrence W. (1997). The Symbolic Species: The Co-Evolution of Language and the Brain, W.W. Norton & Company, New York.
Dennett, Daniel C. (1991). Consciousness Explained, Little, Brown and Company, Boston MA.
Donald, Merlin (2001). A Mind So Rare: The Evolution of Human Consciousness, W.W. Norton & Company, New York.
Donald, Merlin (1991). Origins of the Modern Mind, Harvard University Press, Cambridge MA.
Friedenberg, Jay, and Silverman, Gordon (2006). Cognitive Science, Sage Publications, Thousand Oaks CA.
Fuster, Joaquin M. (1995). Memory in the Cerebral Cortex, The MIT Press, Cambridge MA.
Gangestad, Steven W. & Simpson, Jeffry A. (2007). The Evolution of Mind: Fundamental Questions and Controversies, The Guilford Press, New York.
Geary, David C. (2005). The Origin of Mind: Evolution of Brain, Cognition, and General Intelligence, American Psychological Association, Washington DC.
Hawkins, Jeff (2004). On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines, St. Martin's Griffin, New York.
Koch, Christof (1999). Biophysics of Computation: Information Processing in Single Neurons, Oxford University Press, New York.
Levine DS, Aparicio IV M (1994). Neural Networks for Knowledge Representation and Inference, Lawrence Erlbaum Associates, Publishers, Mahwah, NJ.
Levine DS et al (eds) (2000). Oscillations in Neural Systems, Lawrence Erlbaum Associates, Publishers, Mahwah, NJ.
LeDoux J (2002). Synaptic Self: How Our Brains Become Who We Are, Viking, New York.
Marcus, Gary (2008). Kluge: The Haphazard Construction of the Human Mind, Houghton Mifflin, Boston MA.
Mithen, Steven (1996). The Prehistory of the Mind: The Cognitive Origins of Art, Religion, and Science, Thames & Hudson, London UK.
Pinker, Steven (1997). How the Mind Works, W.W. Norton & Company, New York.
Rumelhart DE, McClelland JL (eds) (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, The MIT Press, Cambridge MA.
Sober E, Wilson DS (1998). Unto Others: The Evolution and Psychology of Unselfish Behavior, Harvard University Press, Cambridge MA.
Striedter GF (2005). Principles of Brain Evolution, Sinauer Associates, Inc. Sunderland MA.
Sutton, Richard S. and Barto, Andrew G. (1998). Reinforcement Learning, The MIT Press, Cambridge MA.
Staddon, J.E.R. (2001). Adaptive Dynamics: The Theoretical Analysis of Behavior, The MIT Press, Cambridge MA.
Swanson, Larry W. (2003). Brain Architecture: Understanding the Basic Plan, Oxford University Press, Oxford UK.
Footnotes
[1]. Damasio allows that all creatures with brains have minds; that the mind is not the same as consciousness, but rather the result of the continuous processing of sensory-driven images in those brains.
[2]. A model is any representation of the “thing” under consideration, the subsystem of interest. A map is generally thought of as a more static representation of a relation between two different “domains” (technically between a domain and a range, but I'm using the word domain more generally). A model, however, is a dynamic mapping, one in which the internally represented relations can change with changing associative conditions. Models can be used in control systems; e.g., what is often called a ‘plant model’ is a transfer function that maps the sensory signals measuring the plant state or output to the control signals sent to the plant's actuators. For my purposes (and I think Damasio would concur) map and model mean essentially the same thing.
[3]. In this usage, an image is a pattern of roughly synchronized neural firings across a specific assemblage of neurons in a network. These firing patterns represent an image that is active in the mind. When the system is not being stimulated (e.g. from below by sensory inputs), the images reside in dispositional form (Damasio's term), or in passive storage as long-term potentiation of the memory traces associated with that specific pattern. Only active assemblages form images at a given instant.
[4]. The evolutionary strategy might at first look like a bottom-up approach, but in truth it is what I call a “piece-wise” top-down approach. That is, as the system at a given stage of evolution is decomposed and understood, that knowledge serves as a basis for what to look for when you start decomposing a more evolutionarily advanced system. So starting with invertebrate brains has given me the first insights into the neural processing of self vs. non-self (Figure 1 in the first post). It has also given me the confidence to tackle a more advanced system like an early vertebrate brain (e.g. a fish), which is what I am working on at the moment.