Eusociality
For several years now I have been writing about the kind of eusociality engendered in human beings as opposed to hive and colony insects like ants. E.O. Wilson has studied ant societies and popularized the term eusocial to describe the way colonies are organized as a kind of super-organism, i.e. the colony acts as a single organism with distributed control (nervous system-like). Ants are motivated by purely instinctive behaviors triggered by pheromone and situational signals. They do their jobs for the benefit of the colony without hesitation given the proper cues. They are the ultimate in collectivist society. I have outlined a different route to eusociality via higher sapience in humans. We are strongly motivated to behave cooperatively with our fellow beings, what Tomasello calls collective intentionality, but we retain a great deal of autonomy and individualist thinking. So our kind of eusociality depends on strong moral sentiments, higher-order judgement, and active systems and strategic thinking, as I have described in my writings on sapience.
High levels of sapience should allow autonomous agents such as us to become cooperative and collectivist without sacrificing our individualistic qualities. Since humans can learn (have semi-evolvable brains) we can have a huge variety of ideas about how things might be done. We can generate lots of options that will keep us extraordinarily adaptive. To my way of thinking this would be ideal. A society of eusapient beings would be able to govern itself wisely, especially with respect to being in balance with the Ecos, but also to achieve the ultimate in equity among its members [As I write this I am on my way to Berlin for a Systems Science conference on “Governance in the Anthropocene” at which I will be presenting a paper on my hierarchical cybernetics system theory]. We could be both eusocial and autonomous agents simultaneously.
That is the direction I would hope biological evolution would take our genus.
But hope is not the driver of evolution. Anything can happen in the particulars. All that I am sure of is that evolution drives toward greater organization as long as energy flows (i.e. the sun shines). Eusocial societies are part and parcel of evolution's grand trajectory as I have written here. However there may be multiple ways for humans to evolve into fully eusocial beings as part of a super-organism. Some may not be as pleasant sounding as we might like.
What follows is highly speculative and, frankly, I doubt things could take this course, because it would require that our technological culture continue on as it has. For that to happen we would need a miracle: an energy source that can replace fossil fuels. As readers will know, I don't think solar energy can do that for the kind of consumptive society we have now. And that kind of consumer-demand economy is needed to drive continuing innovation in technology (e.g. smart phones). So unless something spectacular happens in the energy sector soon, this scenario is just a thought experiment and no more.
Human Pheromones
I have been observing an interesting phenomenon with respect to the use of smart phones. Specifically I have been witnessing people using their phones with a maps application and GPS to navigate in unknown territories. This, at first, looks like a really cool application. The system updates almost continually and tells you down to a few feet where you are (on a road), what the next turn should be and so on. As technology goes this is “top drawer”. But it is how humans use it that makes me wonder what would happen if we coevolved long enough with this kind of technology such that our brains started incorporating it into our automatic behaviors. This might be somewhat related to transhumanism, the idea that humans and technology will meld into a new kind of super being. But in this case I suspect it would lead to a lessening of intellect not an improvement.
The evolution of behaviors is somewhat tricky. In fact there is a phenomenon, called the Baldwin effect, that looks awfully like Lamarckian evolution. It resembles the acquisition of a trait in that a learned behavior, if it can be taught to newer generations and is successful in improving fitness, can actually end up becoming essentially instinctive. That is, brain circuits that support the behavior become hard coded after many generations of successful usage. It isn't that the learned circuitry itself gets hardened. Rather, like most evolutionary innovations, some portion of the brain that may be redundant gets copied during reproduction and is then free to diverge from the malleable portion of the brain. Pure chance then takes over to generate the hard wiring and selection does the rest. What started as a learned behavior eventually gets transferred by ordinary evolutionary mechanisms to a hard-wired behavior. You see this a lot in certain bird families, like ducks, where evolutionarily older species have more reflexive complex behavioral responses to stimuli while the newer, derived species have more flexible responses. The point is that it is evolutionarily feasible for a behavior that initially requires thinking to become automated. In the human brain it would be as if, after learning to ride a bicycle (a skill that gets encoded in a part of the brain where it is lost to conscious awareness: you simply know how to ride a bike without thinking), some other, older part of the brain (e.g. the reptilian brain) evolved to mimic what the neocortex (and cerebellum) are doing and eventually took over. Every kid born after that would automatically know how to ride a bike without going through the sometimes painful process of learning!
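To make the genetic-assimilation step a little more concrete, here is a toy sketch of my own (all numbers invented, and only loosely in the spirit of classic Baldwin-effect simulations). The "behavior" could be anything, route-finding or bike riding; the model just assumes individuals either learn the behavior at a cost or carry it innately for free, and shows selection slowly hard-wiring it once mutation makes the innate version available:

```python
import random

random.seed(1)

def run_generations(pop_size=500, generations=60,
                    learn_cost=0.15, behavior_benefit=0.5,
                    mutation_rate=0.01):
    """Toy model of the genetic-assimilation step.

    Each individual carries one gene: 'innate' (performs the useful
    behavior for free) or 'learner' (performs it too, but pays a
    learning cost). Both get the behavior's fitness benefit, so the
    behavior stays in play while mutation and selection slowly
    replace learning with hard wiring.
    """
    population = ["learner"] * pop_size
    for gen in range(generations):
        def fitness(g):
            return 1.0 + behavior_benefit - (learn_cost if g == "learner" else 0.0)
        weights = [fitness(g) for g in population]
        # Reproduce in proportion to fitness, with occasional mutation to 'innate'
        offspring = random.choices(population, weights=weights, k=pop_size)
        population = ["innate" if (g == "learner" and random.random() < mutation_rate) else g
                      for g in offspring]
        if gen % 10 == 0:
            frac = population.count("innate") / pop_size
            print(f"generation {gen:3d}: {frac:.0%} innate")
    return population

run_generations()
```

In this little world the learned version keeps the behavior valuable long enough for the hard-wired version to arise and take over, which is the hand-off the paragraph above describes.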
OK, bike riding is stretching it a bit, but the point is that complex behaviors can come to be encoded permanently if the selection pressures are strong enough. Based on the intensity with which I see many people working on their texting or other smart phone applications (apps) I have to wonder. There must be some strong selection force at work here. Why else would so many people be doing it? [By way of full disclosure, I still carry a dumb flip phone that only does calling and receiving!]
Most of us have heard stories of people blindly following their map/GPS apps, driving off of cliffs or into ponds when the app told them to do so. Now I think I understand why. It is incredibly easy to get intensely involved in following the instructions or the map view. I've seen drivers mount their phones on the dashboard where they can see the view. There is the voice version too, which talks you through your turns. That, presumably, isn't as dangerous as far as driving is concerned [though it could go the other way, since texting while driving is known to cause a lot of accidents and so would be strongly selected against]. In other cases a passenger is responsible for navigation and does the talking, giving directions to the driver. Either way, the focus on what the machine is telling you to do, versus looking up at the world around you and actively “thinking” about where you are and where you are going, is absorbing. I have been in an automobile where the passenger was directing the driver onto a freeway heading into oncoming traffic. There were extenuating circumstances (to be fair to the driver) that made it momentarily seem feasible to take that turn, but the pull to take the machine at its word seems to have prevented the driver from recognizing the inconsistency and aborting the move. Fortunately traffic was light. We were able to execute a U-turn and get going with the traffic flow. Unfortunately we were then headed away from our destination, but the app recovered and rerouted us onto the other side of the freeway, going in the right direction.
Of course people not using GPS can end up going the wrong direction on a freeway; it happens often enough. So this one incident by itself is not cause for drawing any conclusions. But it started me thinking about the cognitive processes that were going on. Two people, the driver and the front seat passenger (I was in the back seat gritting my teeth), were focused so intently on following the app/GPS directions that they failed to see the signs and the lay of the land that would have warned most other drivers that the move was invalid. BTW: the app/GPS got a number of directions wrong, failing to recognize things like roads that were really narrow alleys or that were effectively one-way, etc. The algorithm seems to simply look for an optimal path through what the database says are links between two adjacent nodes in a graph. It tries to route around blockages that it can detect, like accidents or traffic jams, but otherwise it can try to route through some impassable places. The funny thing about these pathways is that if you were really taking the locale into account visually and mentally you could immediately see that you were in a place where the roads were not all that useful. People in the Northeast coast cities of the US, and in many other places in the world, know that particularly old cities have these horribly narrow roads twisting through them.
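To illustrate that last point, here is a minimal sketch of the kind of shortest-path search such an app presumably runs over its road database (my own toy illustration in Python, with a made-up road graph, not the actual code of any mapping service). Notice that the algorithm knows nothing beyond the links and costs it is handed; if the database scores a narrow alley as a cheap link, the route will go straight through it:

```python
import heapq

def shortest_route(links, start, goal):
    """Toy Dijkstra-style search over a road graph.

    links: dict mapping node -> list of (neighbor, travel_cost).
    The search only 'sees' these links and costs; it has no idea
    whether a link is a six-lane freeway or a too-narrow alley unless
    the database encodes that in the cost.
    """
    frontier = [(0.0, start, [start])]   # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step_cost in links.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return None  # no route found at all

# Hypothetical road data: the "alley" link looks cheap on paper,
# so the route goes through it even though a human driver would balk.
roads = {
    "home":    [("alley", 1.0), ("main_st", 3.0)],
    "alley":   [("market", 1.0)],
    "main_st": [("market", 2.0)],
    "market":  [],
}
print(shortest_route(roads, "home", "market"))  # -> (2.0, ['home', 'alley', 'market'])
```

The search is perfectly rational within its graph; the failure lies in what the graph does, and does not, encode about the real streets.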
Now consider what could happen if this kind of technology were adopted by everyone and stayed in use for a very long time, say twenty generations. We now know that human evolution can actually respond rather quickly to environmental pressures, so that time span isn't necessarily too short. But also suppose that somehow there is a reproductive advantage to using app/GPS to get around. Don't ask me what; I told you this was just fanciful speculation. Such a scenario would favor the evolution of a brain that came to rely on and respond to signals generated by an algorithm, essentially like the pheromones triggering ant behavior. And those humans would have given up just a little bit of their autonomy! They would behave more like ants following a pheromone trail laid down by other ants, going some place without really knowing where, or how to navigate there using the old-fashioned methods of a sense of direction and noting signs along the way.
What about other technology innovations that tell us what to do? Take dating for example. Apps that simply link you up with someone are helpful, but what about those services that strongly suggest to you with whom you are most compatible? Would people start accepting the advice of an algorithm to decide who to date and marry?
There is a difference, I think, between taking the advice of an algorithm and that of another human. With human advice you can always retain the idea that humans are only expressing opinions and not any kind of absolute knowledge. With algorithms, however, especially ones that have a reputation for reliability, we seem to have a tendency to take them at their word. We have much greater (but not necessarily deserved) confidence in their veracity. Humans seem happy to give up the effort of thinking about and evaluating advice from other humans and accept the word of a computer program as what they should do. It might not occur to many people that the algorithm is only as good as the assumptions made and the implementer's skills. Once set in stone (or computer memory) the algorithm just dumbly does what it is designed to do. It isn't until a number of serious bugs are discovered (like too-narrow alleyways being specified too often) that someone tries to make a change (and more often than not introduces another bug!).
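As an entirely hypothetical illustration of "only as good as the assumptions made", imagine the implementer decided that any mapped link at least 2.5 meters wide counts as drivable. The check below does exactly what it was designed to do, no more:

```python
MIN_DRIVABLE_WIDTH_M = 2.5   # the implementer's baked-in assumption

def is_drivable(link):
    """Decide whether a road link can be offered as part of a route.

    The check knows nothing about reality beyond what the record says;
    a 2.6 m cobbled alley with stone bollards passes just as easily as
    a suburban street, because width is the only assumption encoded here.
    """
    return link.get("width_m", 0) >= MIN_DRIVABLE_WIDTH_M

old_city_alley = {"name": "some old-town lane", "width_m": 2.6}
print(is_drivable(old_city_alley))   # True -- and off we go, down the alley
```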
I have no idea whether this is a good thing or not. I cannot judge such things in terms of ultimate consequences since I don't have access to the future results. But I find it interesting that this process seems to be taking place. We have a long history of giving up cognitive work in lieu of some technology that takes over some of the work involved. We stopped memorizing long stories about the world and our history when writing was invented, and especially when the printing press made it possible to mass distribute recorded stories for all to read. Many of us have forgotten how to do basic arithmetic (or at least it is difficult to reboot it when we need to do some computations) because we've been relying on calculators for several generations now. I've heard there are schools that simply teach elementary students how to use calculators instead of math facts arguing that that is how we all do arithmetic anyway!
We do the same thing for physical work. If we have a machine that allows us to do more work while expending less of our own energy then we are damned well going to use it. As long as the fuel we need is abundant and cheap we are fools to do the work the hard and long way. But that is also what makes us weak and overweight from lack of exercise. We let the machines do everything, including, now, our thinking.
It might seem strange that a person who has earned a living creating new technology, and teaching others how to go about creating new technology, should be voicing misgivings about the very thing that put bread on his plate. But the truth is that I have grown exceedingly concerned that so many people have come to rely so heavily on technologies not just to amplify their capabilities but to replace them. The blind reliance on technology is worrying. Where does it lead?
As I mentioned above, I'm actually not too concerned that this kind of biological-cultural coevolution will play out this far. The fact is we are running low on fuel and there is nothing on the horizon to replace it. The machines, including the server farms that supply the apps with their intelligence or data, are going to come to a grinding halt one of these days. And when they do the social evolution of humanity will take a very different turn. There will be something of a cultural reset to the days before industrial-level agriculture. There will come a time when people will survive and thrive by their mental prowess and their physical capabilities and not be able to let some machines do the work. That environment will select for increasing smarts and physique and from a biological point of view I think that would be a good thing.
But neither am I a Luddite. I am not anti-technology, just anti-over-reliance on technology. There is no reason why humans could not use appropriate technologies appropriately. What that entails is hard to pin down right now because we are all, myself included, so embedded in a culture of technology that it is difficult to distinguish appropriate use from over-reliance. I will give it a shot, though. For example, there is a lot of useful mathematics that can be done well through mental effort alone. You don't need a computer to sum a column of numbers. But how many of us (and I plead mea culpa) turn to a spreadsheet program when we have such a column of numbers? It is so easy to turn on the computer, launch the application, plug in the numbers (we are fairly good at typing and thumbing) and let the machine do the sums. It probably takes longer to do all this than it would have taken to do it mentally, but somehow the mental work seems harder than the keyboard work. However, if you have a system model that needs simulating, and it includes (as most do) feedback loops and circular causality, then mathematics as a purely mental effort is not really useful. Then you can plug in a set of numbers, the formulas and the dependencies, and let the computer do the work. In this case it is the answer that you will use to do further hard mental work (interpreting the dynamical behavior of the system). And the computer is an extremely useful tool for getting it.
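As a minimal example of what I mean (my own sketch, with made-up numbers), even a single stock with one negative feedback loop, say a population whose growth slows as it approaches the carrying capacity of its environment, is tedious to trace in your head but trivial to step through numerically:

```python
def simulate_growth(population, growth_rate, capacity, dt, steps):
    """Step a simple stock with a negative feedback loop through time.

    Each step, growth is throttled by how close the stock is to its
    carrying capacity, so the output feeds back to limit its own input,
    the kind of circular causality that quickly becomes tedious to
    trace mentally.
    """
    history = [population]
    for _ in range(steps):
        feedback = 1.0 - population / capacity   # shrinks toward 0 near capacity
        population += growth_rate * population * feedback * dt
        history.append(population)
    return history

# Made-up parameters purely for illustration
trajectory = simulate_growth(population=10.0, growth_rate=0.5,
                             capacity=1000.0, dt=1.0, steps=20)
print([round(x, 1) for x in trajectory])
```

The computer grinds out the trajectory; the hard mental work of interpreting why it levels off, and what that means for the real system, remains ours.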
For most of us, though, making these kinds of discriminations is hard. It takes a lot of basic knowledge and superior judgement to accomplish. It then takes a lot of fortitude and willingness to do hard work to forego the benefits of letting the machines do everything. It is also tempting to simply avoid doing things that require the appropriate use of tools. For example, most people will avoid thinking about (or worrying about) the dynamics of a system (containing nonlinearities) because that, in itself, is hard mental work.
So here we are with a society that eschews hard mental work and has a perfectly good sounding excuse to do so. “Let the machine do it.” And if there is not a machine to do it, then simply ignore that “it” even exists or needs doing.
If this pattern were to continue long enough I suspect that we would end up like ants. We would achieve a high level of eusociality but at the price of losing our autonomy and ability to think for ourselves. We would be suited to live in a colony or hive and be ruled by algorithms. Who knows. Maybe that would be better for the Ecos!