Eusociality
For several years now I have been writing about the kind of eusociality engendered in human beings as opposed to hive and colony insects like ants. E.O. Wilson has studied ant societies and popularized the term eusocial to describe the way colonies are organized as a kind of super-organism; that is, the colony acts as a single organism with distributed, nervous-system-like control. Ants are motivated by purely instinctive behaviors triggered by pheromone and situational signals. Given the proper cues, they do their jobs for the benefit of the colony without hesitation. They are the ultimate in collectivist society. I have outlined a different route to eusociality via higher sapience in humans. We are strongly motivated to behave cooperatively with our fellow beings, what Tomasello calls collective intentionality, but we retain a great deal of autonomy and individualist thinking. So our kind of eusociality depends on strong moral sentiments, higher-order judgement, and active systems and strategic thinking, as I have described in my writings on sapience.
High levels of sapience should allow autonomous agents such as us to still become cooperative and collectivist without sacrificing our individualistic qualities. Since humans can learn (have semi-evolvable brains) we can have a huge variety of ideas about how things might be done. We can generate lots of options that will keep us extraordinarily adaptive. To my way of thinking this would be ideal. A society of eusapient beings would be able to govern themselves wisely, especially with respect to being in balance with the Ecos but also achieving the ultimate in equity among ourselves [As I write this I am on my way to Berlin for a Systems Science conference on “Governance in the Anthropocene” in which I will be presenting a paper on my hierarchical cybernetics system theory]. We could be both eusocial and autonomous agents simultaneously.
That is the direction I would hope biological evolution would take our genus.
But hope is not the driver of evolution. Anything can happen in the particulars. All that I am sure of is that evolution drives toward greater organization as long as energy flows (i.e. the sun shines). Eusocial societies are part and parcel of evolution's grand trajectory as I have written here. However there may be multiple ways for humans to evolve into fully eusocial beings as part of a super-organism. Some may not be as pleasant sounding as we might like.
This is highly speculative and, frankly, I doubt it could take this course, because it would require that our technological culture continue on as it has. For that to happen would require a miracle: finding an energy source to replace fossil fuels. As readers will know, I don't think that solar energy can do that for the kind of consumptive society we have now. And that kind of consumer-demand economy is needed to drive continuing innovation in technology (e.g. smart phones). So unless something spectacular happens in the energy sector soon, this scenario is just a thought experiment and no more.
Human Pheromones
I have been observing an interesting phenomenon with respect to the use of smart phones. Specifically I have been witnessing people using their phones with a maps application and GPS to navigate in unknown territories. This, at first, looks like a really cool application. The system updates almost continually and tells you down to a few feet where you are (on a road), what the next turn should be and so on. As technology goes this is “top drawer”. But it is how humans use it that makes me wonder what would happen if we coevolved long enough with this kind of technology such that our brains started incorporating it into our automatic behaviors. This might be somewhat related to transhumanism, the idea that humans and technology will meld into a new kind of super being. But in this case I suspect it would lead to a lessening of intellect not an improvement.
The evolution of behaviors is somewhat tricky. In fact there is a phenomenon, called the Baldwin effect, that looks awfully like Lamarckian evolution. It resembles the acquisition of a trait in that a learned behavior, if it can be taught to newer generations and is successful in improving fitness, can actually end up becoming essentially instinctive. That is, brain circuits that support the behavior become hard coded after many generations of successful usage. It isn't that the learned circuitry itself gets hardened. Rather, as with most evolutionary innovations, some portion of the brain that may be redundant gets copied during reproduction and is then free to diverge from the malleable portion of the brain. Pure chance then takes over to generate the hard-wiring aspects and selection does the rest. What started as a learned behavior eventually gets transferred by ordinary evolutionary mechanisms to a hard-wired behavior. You see this a lot in certain bird families, like ducks, where evolutionarily older species have more reflexive complex behavioral responses to stimuli while the newer, derived species have more flexible responses. The point is that it is evolutionarily feasible for a behavior that initially requires thinking to become automated. In the human brain it would be as if, after learning to ride a bicycle (a skill that gets encoded in a part of the brain where it is lost to conscious awareness; you simply know how to ride a bike without thinking), some other, older part of the brain (e.g. the reptilian brain) evolved to mimic what the neocortex (and cerebellum) are doing and eventually took over. Every kid born after that would automatically know how to ride a bike without going through the sometimes painful process of learning!
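The Baldwin effect has a well-known computational demonstration, Hinton and Nowlan's 1987 genetic-algorithm model, and the dynamic can be sketched in a few dozen lines. The toy model below is only a sketch in that spirit; all of the parameter values are arbitrary. Each locus in a genome is either hard-wired correct, hard-wired wrong, or plastic (learnable); individuals that can learn the full target behavior quickly get a fitness bonus, and over generations selection tends to replace plastic loci with hard-wired correct ones, which is the learned-to-instinctive transfer in miniature.

```python
import random

TARGET_LEN = 10     # number of loci in the "correct" behavior
LEARN_TRIALS = 50   # lifetime learning attempts per individual
POP_SIZE = 200
GENERATIONS = 40

def random_genome():
    # each locus: hard-wired correct (1), hard-wired wrong (0), or plastic ('?')
    return random.choices([0, 1, '?'], weights=[1, 1, 2], k=TARGET_LEN)

def fitness(genome):
    # a hard-wired wrong allele can never produce the target behavior
    if 0 in genome:
        return 1
    plastic = genome.count('?')
    # learning phase: each trial randomly sets the plastic loci; the earlier
    # the target is found, the bigger the fitness bonus (fewer trials wasted)
    for trial in range(LEARN_TRIALS):
        if all(random.random() < 0.5 for _ in range(plastic)):
            return 1 + (LEARN_TRIALS - trial)
    return 1

def next_generation(pop):
    # fitness-proportional parent selection plus one-point crossover
    weights = [fitness(g) for g in pop]
    parents = random.choices(pop, weights=weights, k=2 * POP_SIZE)
    children = []
    for i in range(POP_SIZE):
        a, b = parents[2 * i], parents[2 * i + 1]
        cut = random.randrange(TARGET_LEN)
        children.append(a[:cut] + b[cut:])
    return children

random.seed(1)
pop = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = next_generation(pop)

# how much of the once-learned behavior has become hard-wired?
hard_wired = sum(g.count(1) for g in pop) / (POP_SIZE * TARGET_LEN)
print(f"fraction of loci hard-wired correct: {hard_wired:.2f}")
```

In runs of models like this, the plastic loci tend to be displaced by hard-wired correct alleles even though no learned change is ever inherited directly, which is precisely the point: ordinary selection does the transfer.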
OK, bike riding is stretching it a bit, but the point is that complex behaviors can come to be encoded permanently if the selection pressures are strong enough. Based on the intensity with which I see many people working on their texting or other smart phone applications (apps) I have to wonder. There must be some strong selection force at work here. Why else would so many people be doing it? [By way of full disclosure, I still carry a dumb flip phone that only does calling and receiving!]
Most of us have heard stories of people blindly following their map/GPS apps, driving off cliffs or into ponds when the app told them to do so. Now I think I understand why. It is incredibly easy to get intensely involved in following the instructions or the map view. I've seen drivers mount their phones on the dashboard where they can see the view. There is the voice version too, which talks you through your turns. That, presumably, is at least not as dangerous as far as driving is concerned [though it could go the other way, since texting while driving is known to cause a lot of accidents and so would be strongly selected against]. In other cases a passenger is responsible for navigation and does the talking, giving directions to the driver. Either way, the focus on what the machine is telling you to do, versus looking up at the world around you and actively “thinking” about where you are and where you are going, is absorbing. I have been in an automobile where the passenger directed the driver onto a freeway going in the opposite direction, head-on into the traffic. There were extenuating circumstances (to be fair to the driver) that made it momentarily feasible to take that turn, but the pull to take the machine at its word seems to have prevented the driver from recognizing the inconsistency and aborting the move. Fortunately traffic was light. We were able to execute a U-turn and get going with the traffic flow. Unfortunately we were then headed away from our destination, but the app recovered and rerouted us onto the other side of the freeway, going the right direction.
Of course people not using GPS can end up going the wrong direction on a freeway; it happens often enough. So this one incident by itself is not cause for drawing any conclusions. But it started me thinking about the cognitive processes that were going on. Two people, the driver and the front seat passenger (I was in the back seat gritting my teeth), were focused so intently on following the app/GPS directions that they failed to see the signs and the lay of the land that would have warned most other drivers that the move was invalid. BTW: the app/GPS got a number of directions wrong, failing to recognize things like roads that were really narrow alleys or that were effectively one-way, etc. The algorithm seems to simply look for an optimal path through what the database says are links between two adjacent nodes in a graph. It tries to route around blockages that it can detect, like accidents or traffic jams, but otherwise it can try to route through some impassable places. The funny thing about these pathways is that if you were really taking the locale into account, visually and mentally, you could immediately see that you were in a place where roads were not all that useful. People in the Northeast coast cities of the US, and many other places in the world, know that particularly old cities have these horribly narrow roads twisting through them.
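For the curious, the kind of path search described here can be sketched with Dijkstra's algorithm, a standard choice for finding cheapest routes in a road graph (I don't know what any particular app actually uses, and the road names and costs below are invented for illustration). The key point is that the algorithm only sees costs on links; unless someone has encoded "too narrow" or "one-way" into the data, it cannot know.

```python
import heapq

# A toy road network: node -> list of (neighbor, travel_cost) links.
# The names and costs are made up. Note what ISN'T here: road width,
# one-way restrictions, "this is really an alley" -- exactly the kind
# of missing attribute that lets a router send you somewhere impassable.
roads = {
    "home":    [("alley", 1), ("main_st", 4)],
    "alley":   [("home", 1), ("market", 6)],
    "main_st": [("home", 4), ("market", 2)],
    "market":  [("alley", 6), ("main_st", 2)],
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: always expand the cheapest frontier node."""
    frontier = [(0, start, [start])]   # (cost so far, node, path taken)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step in graph[node]:
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return float("inf"), []

cost, path = shortest_path(roads, "home", "market")
print(cost, path)   # 6 ['home', 'main_st', 'market']
```

Here the alley happens to lose on cost, but change one number in the database and the optimal path runs right through it, with no way for the algorithm to "look up" at the actual street.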
Now consider what could happen if this kind of technology were adopted by everyone and stayed in use for a very long time, say twenty generations. We now know that human evolvability can actually respond rather quickly to environmental pressures, so that time span isn't unreasonably short. But also suppose that somehow there is a reproductive advantage to using app/GPS to get around. Don't ask me what; I told you this was just fanciful speculation. Such a scenario would favor the evolution of a brain that came to rely on and respond to signals generated by an algorithm, essentially like the pheromones triggering ant behavior. And those humans would have given up just a little bit of their autonomy! They would behave more like ants following a pheromone trail laid down by other ants, going some place without really knowing where, or how to navigate there using the old-fashioned methods of sense of direction and noting signs along the way.
What about other technology innovations that tell us what to do? Take dating for example. Apps that simply link you up with someone are helpful, but what about those services that strongly suggest to you with whom you are most compatible? Would people start accepting the advice of an algorithm to decide who to date and marry?
There is a difference, I think, between taking the advice of an algorithm vs. that of another human. With human advice you can always retain the idea that humans are only expressing opinions and not any kind of absolute knowledge. With algorithms, however, especially the ones that have a high reliability rating, we seem to have a tendency to take them at their word. We have much greater (but not necessarily deserved) confidence in their veracity. Humans seem to be happy to give up the effort of thinking about and evaluating advice from humans and accept the word of a computer program as what they should do. It might not occur to many people that the algorithm is only as good as the assumptions made and the implementer's skills. Once set in stone (or computer memory) the algorithm just dumbly does what it is designed to do. It isn't until a number of serious bugs are discovered (like too-narrow alleyways being routed through too often) that someone tries to make a change (and more often than not introduces another bug!).
I have no idea whether this is a good thing or not. I cannot judge such things in terms of ultimate consequences since I don't have access to the future results. But I find it interesting that this process seems to be taking place. We have a long history of giving up cognitive work to some technology that takes over part of the work involved. We stopped memorizing long stories about the world and our history when writing was invented, and especially when the printing press made it possible to mass distribute recorded stories for all to read. Many of us have forgotten how to do basic arithmetic (or at least it is difficult to reboot it when we need to do some computations) because we've been relying on calculators for several generations now. I've heard there are schools that simply teach elementary students how to use calculators instead of teaching math facts, arguing that that is how we all do arithmetic anyway!
We do the same thing for physical work. If we have a machine that allows us to do more work expending less of our own energy then we are damned well going to use it. As long as the fuel we need is abundant and cheap we are fools to do the work the hard and long way. But then that is what makes us weak and overweight from not getting exercise. We let the machines do everything, including now our thinking.
It might seem strange that a person who has earned a living creating new technology, and teaching others how to go about creating new technology, should be voicing misgivings about the very thing that put bread on his plate. But the truth is that I have grown exceedingly concerned that so many people have come to rely so heavily on technologies not just to amplify their capabilities but to replace them. The blind reliance on technology is worrying. Where does it lead?
As I mentioned above, I'm actually not too concerned that this kind of biological-cultural coevolution will play out this far. The fact is we are running low on fuel and there is nothing on the horizon to replace it. The machines, including the server farms that supply the apps with their intelligence or data, are going to come to a grinding halt one of these days. And when they do the social evolution of humanity will take a very different turn. There will be something of a cultural reset to the days before industrial-level agriculture. There will come a time when people will survive and thrive by their mental prowess and their physical capabilities and not be able to let some machines do the work. That environment will select for increasing smarts and physique and from a biological point of view I think that would be a good thing.
But neither am I a Luddite. I am not anti-technology, just anti-over-reliance on technology. There is no reason why humans could not use appropriate technologies appropriately. What that entails is hard to pin down right now because we are all, myself included, so embedded in a culture of technology that it is difficult to distinguish. I will give it a shot though. For example there is a lot of useful mathematics that can be done well through mental effort alone. You don't need a computer to sum a column of numbers. But how many of us (and I plead mea culpa) turn to a spreadsheet program when we have such a column of numbers? It is so easy to turn on the computer, launch the application, plug in the numbers (we are fairly good at typing and thumbing) and let the machine do the sums. It probably takes longer to do all this than it would have taken to do it mentally, but somehow the mental work seems harder than the keyboard work. However, if you have a system model that needs simulating and it includes (as most do) feedback loops and circular causality, then mathematics as a purely mental effort is not really useful. Then you can plug in a set of numbers, the formulas, and the dependencies and let the computer do the work. In this case it is the answer that you will use to do further hard mental work (interpreting the dynamical behavior of the system). And the computer is an extremely useful tool for getting it.
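To make the simulation point concrete, here is a minimal sketch of the sort of model I mean: two coupled variables with a feedback loop (a predator-prey system of the Lotka-Volterra type), stepped forward with simple Euler integration. The parameter values are arbitrary illustrative choices. The circular causality, each variable's rate of change depending on the other, is exactly what defeats purely mental mathematics, and the computed answer then becomes raw material for the real mental work of interpretation.

```python
# A toy feedback-loop simulation: prey and predator populations, each
# one's rate of change depending on the other (circular causality).
# All parameter values are arbitrary, for illustration only.
dt = 0.01           # Euler integration time step
steps = 2000        # 20 time units in total
prey, pred = 10.0, 5.0
a, b, c, d = 1.1, 0.4, 0.1, 0.4   # prey growth, predation, conversion, predator death

history = []
for _ in range(steps):
    # neither equation can be solved in isolation: more predators push prey
    # down, fewer prey push predators down, which lets prey recover, etc.
    dprey = (a * prey - b * prey * pred) * dt
    dpred = (c * prey * pred - d * pred) * dt
    prey += dprey
    pred += dpred
    history.append((prey, pred))

print(f"after {steps * dt:.0f} time units: prey = {prey:.2f}, predators = {pred:.2f}")
```

Plotting `history` shows the two populations cycling around each other, a behavior that is genuinely hard to anticipate by mental arithmetic alone. A serious study would use a better integrator (e.g. Runge-Kutta), since plain Euler slowly distorts the cycles; the sketch is only meant to show the shape of the work.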
For most of us, though, making these kinds of discriminations is hard. It takes a lot of basic knowledge and superior judgement to accomplish. It then takes a lot of fortitude and willingness to do hard work to forego the benefits of letting the machines do everything. It is also tempting to simply avoid doing things that require the appropriate use of tools. For example, most people will avoid thinking about (or worrying about) the dynamics of a system (containing nonlinearities) because that, in itself, is hard mental work.
So here we are with a society that eschews hard mental work and has a perfectly good sounding excuse to do so. “Let the machine do it.” And if there is not a machine to do it, then simply ignore that “it” even exists or needs doing.
If this pattern were to continue long enough I suspect that we would end up like ants. We would achieve a high level of eusociality but at the price of losing our autonomy and ability to think for ourselves. We would be suited to live in a colony or hive and be ruled by algorithms. Who knows. Maybe that would be better for the Ecos!
Once the skill set is lost, who would do the teaching? I can think of so many skills, sewing, carpentry, shoe repair, you name it, that generally require at least one if not more persons to teach them. You now have the controversial use of machines such as robotic surgery equipment, which in theory are there to aid and improve technique. A machine can never replace the range of cognitive human function, so what is the point in using them in this manner? Over-reliance on machines and technology creates more problems than it solves. When the original skill is gone you are basically reinventing the wheel: starting from scratch. The genetically lucky and adaptive individuals would be needed to start the new course of learning.
Posted by: Ann | July 30, 2015 at 06:16 PM
Dear George
A few thoughts about the offloading of functions from humans to mechanical devices or to other living creatures. For example, Dr. Mehmet Oz wrote this blurb for David Perlmutter’s book Brain Maker:
‘Humans outsource digestion to their gut bacteria. Dr Perlmutter takes us on a journey to understand how these foreigners profoundly influence our brains for good and bad.’
There are thousands of species of microbes living on and in our bodies, and a close examination would probably reveal that much besides digestion has been offloaded. But the offloading generally works to the benefit of the human proper, so that we usually don’t make a distinction between the human proper and the foreigners. When I am driving, I routinely offload to my wife the job of reminding me when I need to stop at the recycling center…so I don’t just drive right by it. These, and numerous other examples, illustrate ‘good offloads’.
I believe it is a true statement that, without the offloading, humans could not live. So, obviously, it is not the offloading itself that is dangerous. Instead, we might look for certain kinds of offloading which we might anticipate to be dangerous.
Anything that depends on the continuance of fossil fuels as an energy source would, in my opinion, be quite risky. So, for example, offloading our nutrition to industrial agriculture is risky. Offloading pest control to a diverse ecosystem has some risks, but much lower risk than dependence on fossil fuels:
Trading Biodiversity for Pest Problems
http://advances.sciencemag.org/content/advances/1/6/e1500558.full.pdf
One can think of all sorts of other examples where biology is useful, but where we have ignored it and used fossil fuels instead. For example, nitrogen fixation, just-in-time supply of plant nutrients to roots, soil aeration, nutrient retention in soils, etc. And rather than garden, we offload everything to grocery stores, which deprives us of exercise, mental focus, and really fresh food.
In terms of computation, we have offloaded most of it to calculators and computers. Alternatives include the abacus and the slide rule. I would argue that total reliance on calculators and computers is dangerous, but that an abacus or a slide rule can both preserve the mental gymnastics which are essential to understanding and also take some of the computational load off of our brains. Children should, perhaps, be taught arithmetic using the abacus and slide rule.
Thinking about complex systems and our reliance on computers offers different challenges. When we see the computer draw some beautiful curves, we (or at least I) tend to believe them. Believing the curves carries its own risks. If we try to think of approaches which might work in the absence of fossil fuels and computers, I suggest that we look at suggestions by Frank Wilczek (the physicist) and Jane Hirshfield (the poet). On page 324 of A Beautiful Question, Wilczek looks at Complementarity, as conceived by Niels Bohr:
‘No one perspective exhausts reality, and different perspectives may be valuable, yet mutually exclusive.’
Jane Hirshfield’s book Ten Windows examines ways in which good poems are windows through which we can look at the world from many different perspectives. So I would argue that Hirshfield and Wilczek and Bohr are thinking along similar lines. Different perspectives don’t calculate beautiful curves, but perhaps a disciplined approach to Complex Systems can be built by using the principle of Complementarity or Poetic Windows.
It is difficult to imagine fractal images being computed except with computers. But Adrian Bejan, in his Constructal Theory, believes that fractals are overrated anyway. Bejan usually calculates with pencil and paper. Perhaps some thought and practice now might pay dividends as we learn to live with less energy.
Don Stewart
Posted by: Don Stewart | August 01, 2015 at 09:53 AM
Filmed January 2014 at TEDSalon NY2014
13:08 minutes
Sebastian Junger: Why veterans miss war
https://www.ted.com/talks/sebastian_junger_why_veterans_miss_war?language=en
Posted by: Robin Datta | August 07, 2015 at 02:14 AM
"Phase IV": http://www.imdb.com/title/tt0070531/
http://www.scientificamerican.com/article/weve-been-looking-at-ant-intelligence-the-wrong-way/
http://time.com/118633/ant-intelligence-google/
http://news.stanford.edu/pr/93/931115Arc3062.html
Consider the cost to the bottom 99%+ of human apes to support the growth of net-energy flows to support the complex, high-tech, high-entropy resources and income to the top 0.001-1% in NYC, Boston, DC, Chicago, Atlanta, Dallas-Houston, Denver, La La Land, Silly-Con and Social Mania Valley, and Seattle-Vancouver.
Then note that, given unprecedented debt and asset values relative to wages and GDP, and given that total net financial flows to the financial AND financialized (gov't, health care, "education", retail, and financial services) sectors of the US economy since 2008 NOW EQUAL the total value-added output of the US economy, no growth of real GDP per capita is possible hereafter.
Another way to perceive it is that the top 0.001-1% to 10% have claimed (whether or not they know it) all future growth in perpetuity in terms of net-energetic, debt-money-denominated labor product, profits, and gov't receipts for social goods.
The comparable periods during which similar conditions occurred historically were prior to the French, American, and Russian revolutions, and leading up to the collapse of the Soviet Union.
The top 0.001-1% have largely disengaged from the productive economy, having accumulated sufficient debt-money-denominated fiat digital "wealth" to sustain their lifestyles for several lifetimes at the expense of the bottom 90-99%.
From an historical perspective, this is classic imperial decadence.
But no Establishment eCONomist or mass-media influential dares say so, at the risk of being discredited, financially ruined, unemployed, and facing the prospect of a homeless, solitary, Diogenes-like existence, seeking an honest man by night by the light of a lantern among the company of canines.
If I were to be compelled to place a bet on human apes vs. insects, I would bet the farm on the latter, which, of course, ironically, means that I (and we) lose the bet either way, as I, as a human ape, would not survive to collect from the surviving insects who have no use for human-created currency units per net energy per capita per unit time.
Thus, I am comforted by the perception that I no longer need to be homeless with a lantern in search of honesty, as it exists in its purest form among the highly evolved, exergetically equilibrating, eusocial, six-legged, small-brained insects.
Posted by: BC | August 18, 2015 at 06:22 PM
I feel like math courses often err too far on the side of emphasizing "understanding" and not far enough on the side of drilling and going through the mechanical algorithms mentally. Having studied a lot of math, I'm good at writing proofs, but ill equipped to solve any sort of numerical problem without the aid of a computer. I feel like a certain amount of slowly going through the mechanical motions by hand is important, since it gives you more intuition of what goes on in the computer's brain - "thinking like a machine". Of course, you never see those kinds of exercises in upper-level math courses, since they are very tedious and they don't really develop the creativity and problem solving skills a mathematician or researcher will need.
Posted by: Sari | September 12, 2015 at 01:26 PM
@Ann,
The human species is still pretty adaptive and has the native affordance mentality that gave rise to tool-making/use in the first place. And you are right. The most adaptive "survivors" will need to do a lot of reinventing some day. That might really be the best thing for planet Earth, though. My hypothesis is that only the most sapient people are going to survive and adapt (by definition they are the most adaptive) so they will also hopefully be wise enough not to repeat our mistakes in over-reliance on technology.
-----------------------------------
@Don S.
Good points all.
------------------------------------
@Robin D.,
Thanks for the link.
-------------------------------------
@BC,
You might find my early research on the brain of a "moronic snail" interesting. It is available on my academic website: http://faculty.washington.edu/gmobus/ in the Adaptive Agents Lab link.
-------------------------------------
@Sari,
Believe it or not I had a tendency to eschew math classes (enough for a minor and no more) because I wanted to use math to help me understand things, not for its own sake. I followed a different path in which I would build models (using equations) and then compute the results, draw the graph, and usually discover that my intuitions about what the math was doing were wrong. But I usually ended up seeing where I went wrong and started the process over again. When I finally got my hands on a computer and learned to program, my progress in this "test it and see" approach took off exponentially. What usually happened is that I discovered a need to understand some phenomenon that required math I did not have (like stochastic processes). So I went after the math until I could see how to use it to help me understand the problems. Rather than fill a tool chest with all kinds of tools, I waited until I found a need for a tool and then acquired it.
The upside of this approach is that I was always motivated. The downside is that it wasn't until I was deeper in computer science that I started seeing the "big picture" of the relations between various maths and began to appreciate the need for a larger tool box!
I suppose it really depends on individual interests.
George
Posted by: George Mobus | September 13, 2015 at 02:09 PM