Jeff Lieberman’s Evolution and Future of Consciousness

Posted in philosophy, transhumanism on October 20th, 2011 by Samuel Kenyon

Recently I attended a presentation at MIT by Jeff Lieberman called “It’s Not What You Think: An Evolutionary Theory of Spiritual Enlightenment.”

jeff lieberman

Lieberman is a science-educated artist and host of a TV show called Time Warp. He’s a relatively good presenter, and given his credentials, one would expect him to juxtapose disparate fields of science and art. However, the downside is that one is not left with a single solid believable conclusion or theory—or at least I wasn’t. Of course, this was also the first time Lieberman gave this talk, so he might improve it in the future.

I think that there are really four different themes woven in his presentation:

  1. Evolution of consciousness.
  2. Future directions of human consciousness.
  3. The concept of consciousness having existed for billions of years in all things, and that human-level consciousness is simply a more complicated construction (this is the weakest point).
  4. The common psychological goal of faiths underlying the world’s popular religions.

I will attempt to describe the first three themes below. I won't say much about the fourth—it's a neat concept in which Lieberman interprets religious stories as having themes about human consciousness, for instance the kind of thinking we have compared to our animal relatives. However, I am not well read in mythology or religious texts, so I won't attempt to evaluate it.

Please note that I’m leaving out a lot of his talk…he covered a lot of ground.

1. Evolution Of Consciousness

The premise is that human consciousness evolves—and that seems to be a sound statement given animal evolution thus far.

Lieberman gives us a 15-minute compressed history of the universe relevant to consciousness. Our universe starts with undifferentiated energy, then we end up with these various layers of organization emerging from previous layers: energy->particles->atoms->molecules->cells->animals.

Human perception does not by default work so well for layers that are smaller/larger than our little world. We can’t even perceive these other worlds without the help of technology. The perception we have is the result of evolution, as Lieberman puts it:

What we take for granted as real on a day to day basis is completely determined by what was functional for our evolutionary past.

This means not only are we unable to observe outside our small window of perception (unless we use technology), our brain is actually "creating a lie" to operate with. The brain constructs patterns. Lieberman shows some examples, such as a visual illusion, and how viewing a cymbal in slow motion reveals waves which we can't see with normal vision.

Lieberman didn’t use these terms, but he gets into some subjects I’m very interested in, such as affordances and perception as an interface. I am also reminded of the excellent book Visual Intelligence by Donald D. Hoffman, which describes the rules our vision systems use to construct reality. And, as Hoffman himself suggests in that book (and expounds in his paper “The Interface Theory of Perception”), the construction of reality may not actually be a reconstruction. The phenomenal sense of something need not resemble the relational sense. Our perception builds fictions that are useful for the organism to survive.

Anyway, Lieberman goes on to talk about consciousness in evolutionarily older organisms, as well as early humans, and argues that we should not assume we are the apex of consciousness. That is, who knows what potentially better (depending on how “better” is defined) consciousness will become widespread in the future.

2. Future Directions Of Human Consciousness

Lieberman has a concept called the “new mutation,” which would operate at a social level, not just at an individual level. Certainly speculation about the future of consciousness should include this possibility of moving up a level in the nesting of organization.


Lieberman mentioned a lot of the old-fashioned mind hackers such as Buddhist monks, etc. He mentioned a world full of Gandhis, or something like that. There’s a premise that these kinds of self-applied brain wetware changes are a way to achieve “true” consciousness, if there is such a thing. However, I’m not sure that kind of consciousness leads to future pathways. Let’s take a monk who is selfless and full of compassion—that’s great, and maybe there is some game model in which all or a certain percentage of the world’s inhabitants could operate like that for a better all-around experience for humankind. But is it a method that will work as the basis for the “new mutation”…or is there some much better way? Or are existing meditative states not even scratching the surface of useful consciousness modifications?

Lieberman seems to assume two things about existing methods for “enlightenment”:

  1. They are a way to access the “lower levels of the self”—which is, in fact, the primitive consciousness he thinks is really interesting and useful to experience.
  2. The side effects of those methods are also desirable.

I, however, would not assume such things. The way Lieberman describes this more basic consciousness makes me think of a metaphor of a computer program which can look at its own execution and data (introspection) but normally does not.

This metaphor might also let me describe a potential danger: imagine the program completely abandons its normal operation and spends all its time doing introspection. The first question is…does motivation change? Can it change its own motivation? And is it in jeopardy of dying because it’s no longer paying attention to the outside world?
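To make the metaphor concrete, here is a toy sketch (the agent, its "energy," and its event loop are all invented for illustration, not anything from the talk) of a program that can either respond to the world or turn entirely inward:

```python
# Toy model of the introspection metaphor: an agent that normally reacts
# to external events, but can instead spend its cycles examining itself.
# Names and numbers here are hypothetical.

class Agent:
    def __init__(self):
        self.energy = 10
        self.log = []

    def act_in_world(self, event):
        # Normal operation: respond to external events, which sustains the agent.
        self.energy += 1
        self.log.append(f"responded to {event}")

    def introspect(self):
        # Inward-looking mode: inspect own state instead of the world.
        # It burns energy without replenishing it.
        self.energy -= 1
        self.log.append(f"observed self: energy={self.energy}")

def run(agent, events, mode):
    for event in events:
        if mode == "introspect":
            agent.introspect()      # the external event is ignored entirely
        else:
            agent.act_in_world(event)
    return agent.energy

normal = run(Agent(), ["food", "threat", "food"], mode="world")
inward = run(Agent(), ["food", "threat", "food"], mode="introspect")
print(normal, inward)  # the purely introspective agent runs down its resources
```

The point of the toy is only that a program locked in pure introspection stops servicing the events that sustain it, which is exactly the danger described above.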

Lieberman noted that the self disappears when we are in dreamless sleep. So “you,” as you think of yourself, are essentially nonexistent quite often. One way to look at this state of awareness of lower consciousness is to think of it as being aware of yourself in deep sleep. You would be disconnected from the interfaces to the real world and, as I said previously with the computer metaphor, be in an introspection mode.

Something Lieberman didn’t mention is the overlap with mindfulness, as described by Ellen Langer in her various books on the subject.

Lieberman did mention Flow, however I am not convinced by his interpretation that true flow means the self is nonexistent. It doesn’t really sync up with his descriptions of being aware of oneself. They seem, in fact, to be two opposite states of mind: one of being completely immersed in an interactive cycle with the real world, the other an internal inspection that is not synced to real-world events.

3. The Concept Of Consciousness Having Existed For Billions Of Years In All Things

According to Lieberman:

And consciousness is not something that comes out of the human; it starts at the bottom and is built into the complexity and form of a human.

Now, I totally understand the strategy of trying to turn a concept on its head in order to find a new path for investigation and/or new theories. So when he says this, I am still on board with the general strategy, especially since I’m very interested in system-oriented explanations for consciousness and cognition.

Lieberman tells us that this view somehow makes subjective experience much easier to explain. I suppose his talking about perception and whatnot was supposed to support that claim, but I am not grokking it. He could be on the right track, or the presentation may be a shell without enough data and/or theories to fill it in.

He tries briefly to explain this by saying consciousness is composed of “attraction and repulsion.” He gives an example of electrons having attraction and repulsion via fields. But I am not sure how that is equivalent to consciousness at any level. He says that at the higher, more complicated, levels of human emotions and thought that, “it’s still attraction and repulsion to different informational structures.”

Well, that’s a nice start, but it’s not even close to a real theory. Perhaps Lieberman got this from some other source and/or decided to cut out elaboration in this beta version of his presentation. He seems to have included it as a basic assumption though, weaving it into the presentation at various points. Perhaps he is attached to it because it lets him link every human’s mind all the way back to the big bang.

Whatever the case, he could cut this theme out or isolate it, and his historical view of the evolution of consciousness would still be valuable (and entertaining), and his speculations on the future of consciousness would still be reasonable and thought-provoking. Likewise with his suggestions for what religious faiths are about at their unadulterated core.


Hopefully Lieberman isn’t gunning to be the next pseudo-scientific spiritual guru. At the very least, talking about future directions of consciousness, especially where we might want to go as a social system, is likely to bear fruit.

Image credits:
[1] Discovery Channel
[2] RambergMediaImages


The Future of Emotions

Posted in artificial intelligence, transhumanism on September 11th, 2011 by Samuel Kenyon

I recently happened upon an article [1] about the work of Jennifer Lerner:

Lerner is best known for her research on two topics: the effects of emotion on judgment and decision making, and the effects of accountability on judgment and decision making. Recently, along with a number of National Science Foundation-supported scientists, she appeared on the PBS program “Mind Over Money,” a look at the 2008 stock market crash and the irrational financial decisions people make.

How the human emotional architecture fails us in modern life has been an interest of mine for a long time. Emotions seem to be an integral part of human decision making, but can we improve human emotional systems for the more dangerous and complicated situations existing in the modern world? I am reminded of an essay called “Neo-Emotions” that I wrote in 2005 [2], which I will re-post right here, and then I will mention some of the criticism of that article.


dramatic mask

One of my hypotheses right now is that emotions often seem irrational to us simply because many of them are outdated to work with modern situations, culture, and technologically-enabled existence.  The solution is to develop neo-emotions.  A neo-emotional system would take whatever beneficial roles existing emotional systems provide, and extend and modify these roles to better suit the environment.

A trivial example of primitive survival emotions influencing a modern situation negatively is the neuroeconomics experiment, in which most people will choose $10 now rather than $11 tomorrow [3].  I have seen this in action many times in bargaining situations (at yard sales, flea markets, etc.)—“Do you want $20 for this widget today or $50 possibly never?” Afterwards you can make pure logical defenses that you chose the money today because you thought the probability of the greater sum later was unlikely, but your quickly-made choices are more likely to have been driven from emotional underpinnings.
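The $10-now preference is usually modeled in behavioral economics as hyperbolic discounting. Here is a minimal sketch (the discount constant k is an arbitrary illustrative value, not a figure from the cited experiment):

```python
# Hyperbolic discounting: the subjective present value of a reward falls
# off as 1 / (1 + k * delay). The constant k = 0.2 is chosen only to make
# the example vivid; real fitted values vary by person and study.

def hyperbolic_value(amount, delay_days, k=0.2):
    return amount / (1 + k * delay_days)

now = hyperbolic_value(10, 0)    # $10 today
later = hyperbolic_value(11, 1)  # $11 tomorrow
print(now, later)  # the smaller, sooner reward "feels" bigger
```

With these numbers $10 today is subjectively worth 10.0 while $11 tomorrow is worth about 9.17, so the quickly-made choice goes to the immediate reward even though waiting pays more.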

That instance may seem too minor for any concern, but imagine the possibility that a good deal of your decisions are being made with biological control systems developed for animals in the wild.  Inappropriate emotions can also come from low-level fear responses and conditioning.  “The brain seems to be wired to prevent the deliberate overriding of fear responses…Our brains seem to have been designed to allow the fear system to take control in threatening situations and prevent our conscious awareness from reigning” [4].  Decisions made from what some call “negative emotions” can sometimes be devastating depending on the situation [5]:

For instance, in 1963, when John F. Kennedy learned that the Soviets had brought nuclear missiles to Cuba, he became enraged, taking it as a personal affront—after all, the Soviet ambassador had assured him just months before that such a thing would never happen.  But Kennedy’s closest advisors deliberately made the effort to help him calm down before deciding what steps to take—and so may have averted a world war.

The need for neo-emotions may seem like a general statement that is hard to test unequivocally.  In some soft form, people may already be attempting neo-emotions through various inefficient and temporary means.  Part of the reason I thought of it in the first place, however, is that we can perform autonomous software agent experiments to confirm or falsify the hypothesis that emotional systems (which develop both phylogenetically and ontogenetically) become outdated/useless/dangerous in more complex environments.  A “more complex environment” includes the notion of rapid change in new ways, especially for high-level structures; by “high-level” we mean constructed from many stages of lower levels down to the original environment.  The complex environment also includes cultural structures and resources that may have become integral to the generations.  Any environment similar to ours should have always been dynamic, of course; indeed, you could include unexpected devastating natural events as a type of environment in which an organism is suddenly ill-equipped to operate/survive.  The quickest analogy to the human situation would be future shock in a techno-industrial world, but that is a crude fit.  For humans the problem is particularly hairy: high-level environment structures store knowledge, and this “extelligence” [6] interacts through culture with individuals.
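As a hint of what the simplest such agent experiment might look like, here is a toy sketch (the stimuli, reactions, and payoffs are all invented): a fixed "emotional" reaction policy is scored first in the environment it was tuned for, then in a changed environment where the same reactions backfire:

```python
# Toy illustration of an emotional policy becoming outdated: a hard-wired
# stimulus -> action table evolved for one environment is evaluated in a
# changed environment where the same stimuli have different consequences.
# All names and payoffs are hypothetical.

REACTIONS = {"loud_noise": "flee", "sweet_taste": "consume"}

def score(policy, environment):
    # environment maps (stimulus, action) -> survival payoff
    return sum(environment.get((s, policy[s]), 0) for s in policy)

ancestral = {
    ("loud_noise", "flee"): +1,      # noise meant predators: fleeing pays
    ("sweet_taste", "consume"): +1,  # sugar was scarce: eating it pays
}
modern = {
    ("loud_noise", "flee"): -1,      # noise is mostly traffic: fleeing costs
    ("sweet_taste", "consume"): -1,  # sugar is abundant: overeating costs
}

print(score(REACTIONS, ancestral), score(REACTIONS, modern))
```

A real experiment would evolve or learn the policy in a much richer environment; the toy only illustrates the hypothesis that a once-adaptive reaction table can score negatively after the environment shifts.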

We can still start with simple experiments.  Emotional systems are most likely substrate independent—and therefore applicable to artificial intelligence, both software-only agents and embodied robots.  But a robot’s needs may not result in the same kind of neo-emotions.  Some people will instantly retort that emotions are unexplainable, not replicable in machines, etc.  This is not the place for a detailed history of emotion and arguments of what emotion is; suffice it to say, many researchers in the realms of biology, neuroscience, artificial intelligence, psychology, philosophy, etc., have attempted to define and work with emotions at least with respect to their field.  Summarily, emotion includes various brain/nervous-system processes, external behaviors, mental states, body states (e.g., facial expressions), social feelings, cultural notions, and more.  The word “emotion” itself has only been in regular use since the early 1800s as a catch-all term overlapping passions, sentiments, feelings, and affections [7].  These terms were associated with soul and will; even now some researchers think emotion may be foundational to consciousness [8].  Emotion is intertwined with the entire evolutionary biological and socio-cultural landscape and is linked to many hot buttons.

One question that people might ask is: Will I end up as an über-rational machine?  Well, how do you define rationality?  Perhaps a perceived slice of a neo-emotional person could seem cold, harsh, or too rational, I’m not sure.  Internally, things will certainly not be just the deliberative, deductive, planning capabilities of current humans.  It will be better than that, possibly involving different types of priorities/overrides of deliberative vs. emotional vs. reflexive (this is a simplification) and more self-reprogrammable parts of emotional associations, conditioned learning, etc.

That does not mean I want us to mutate into the type of sentience like an Outer Limits alien asking an Earthling scientist, “What is love?” or a Terminator trying to figure out why we cry.  What a species needs, if this turns out to be a real issue in some form, is to modify/extend the emotions.  “Evolution made our brains so smart that we ended up building environments that made some of our mental resources obsolete…We are not slaves to our emotions, but they are hardly at our beck and call either” [4].  It would be interesting to see if giving more control to the neocortex over the amygdala would result in better-functioning humans.  But that seems too narrow a solution, especially since there is still much to learn about the neurophysiology and neuroanatomy of emotions; indeed, the human emotional architecture is not limited to just the amygdala.

Unfortunately, the word “emotion” is a tangled, many-faceted collection of often inconsistent concepts, but at the very least your emotional system is an intertwined part of your mind, and as such is involved with matters of your body and interacting/communicating with other bodies.  So it may or may not be very difficult to improve one part of, for instance, the brain, without having to tweak many other factors.  Neo-emotional architectures may only come from a complete overhaul of our standard equipment wetware, or maybe just a simple matter of explicit training (with the help of extelligence) combined with tightly-coupled human-computer interaction (not necessarily invasive).  I definitely do not think any pseudo-psychological self-help book series will do the trick.  Drugs could play a part, but we don’t want to aim for the stoic zombie society of the movie Equilibrium.  Again, the concept of neo-emotions is not about utter suppression of existing emotional/feeling faculties.

Please note that the need for neo-emotions has nothing to do with the psychological construct and commercial self-help meme known as “emotional intelligence” (see [9] for a comprehensive summary and critique of EI).  Certainly recognizing emotion as a major factor in normal human operations and trying to account for them in an individual is a step in the right direction, though.  Identifying some of the so-called mental afflictions and destructive emotions could provide examples of where neo-emotions would have served better.  Here is a relevant sample of the Dalai Lama discussing the subject [5]:

What impact do our destructive emotions—hatred, prejudice, and so forth—have on the society as a whole?  What role are they playing in the tremendous problems and sufferings that society is experiencing right now?…Are these destructive emotions to which we are subject at all malleable?  If we were to diminish them, what kind of impact would that have on society as a whole and on the myriad problems that society is experiencing?

However, it is not simply a matter of emotions being destructive, or destructive on a large scale.  We especially don’t want to cut the sources of emotions that could still be essential for our survival even in a modern world.  Also, neo-emotions would not be a mere diminishing of certain emotions through whatever training or mental exercises a human can muster—they would involve changes and extensions that could result in something very hard to imagine with our current understanding.

We have to be careful not to blindly promote the dichotomy between “rationality” and “emotion” (or worse “cognition” and “emotion”) although at certain levels of detail it may be useful.  Mythical symbolic notions of the heart versus the brain should stay in fiction.  Some people seem to think that emotion can be secluded to a little black box, a module that is simply thrown into the mind mix.  This view rarely helps figure out how emotions work and how they evolved.  Again, I must stress that our notions of emotions are actually describing several interlocked processes in our brains (and other parts of our bodies), in which it becomes difficult to separate out deliberation, planning, rationality—these ideal constructions make it sometimes easier and sometimes harder to figure out how the brain works and how to simulate it.  The point is: do not assume that potential neo-emotional brain-body systems will be simple extensions of an uncorrelated emotional system.


Well, first, I can criticize the article myself because I didn’t propose in detail various ways to achieve Neo-Emotions. Hopefully I can elaborate more in the future.

One blogger mentioned my article in 2005 [10], saying:

Ah, but our intellects are not so dark that they can’t pull themselves up by their own boot-straps! Or at least that’s what the transhumanists would have us believe.

Yes, perhaps we can change our emotions to better suit our competitive environment. We’ll get rid of pity and love, and replace them with ruthlessness and hatred. Sounds like “Genesis of the Daleks.” Or maybe just your NOW feminist.

And we haven’t even touched on the weirdest “improvement” on nature : the incredible self-mutilation of Michael Jackson. This would be funny if it weren’t true.

Ah, but it is funny.

Of course, I don’t want mental modifications to go the way of Michael Jackson’s face. And Mr. Gage raises a good point—what if people modified their emotions, not just to make better decisions, but for competitive gain? Would people really increase their ruthlessness?

I don’t think it’s that simple—as I keep saying, emotion cannot be so easily decoupled from the rest of the brain’s activities. But if we look to the existing outliers, like psychopaths, perhaps there is a danger of self-modifications resulting in similar mindsets. I don’t think that kills the concept of modifying the emotional aspect of the human mind, it just highlights the difficulty of changing evolutionarily-old structures to better handle our relatively new artificial environments and experiences.

[1] B. Mixon Jr. & NSF, “Personal Question Leads Scientist to Academic Excellence“, LiveScience, September 1, 2011.
[2] S. H. Kenyon (as Flanneltron), “Neo-Emotions.” Transhumanity, Feb. 14, 2005.
[3] L. Brown, “Why Instant Gratification Wins: Brain battle provides insight into consumer behavior.” Betterhumans, Oct. 2004.
[4] S. Johnson, “The Brain + Emotions: Fear.” Discover, pp.33-39, March 2003.
[5] D. Goleman, et al, Destructive Emotions: How Can We Overcome Them? A Scientific Dialogue with the Dalai Lama. Bantam, 2003, pp.87,223-224.
[6] I. Stewart and J. Cohen, Figments of Reality: The Origins of the Curious Mind. Cambridge University Press, 1997.
[7] K. Oatley, Emotions: A Brief History. Blackwell, 2004, p.135.
[8] A. Damasio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Harcourt, 1999.
[9] G. Matthews, M. Zeidner, & R.D. Roberts, Emotional Intelligence: Science and Myth. MIT Press, 2002.
[10] L. Gage, “Infant Formula, ‘Neo-Emotions,’ and the Incredible Melting Celebrity“, Real Physics (blog), March 03, 2005.

Image Credit: Zofie


Posthuman Factors

Posted in posthuman factors, robotics, transhumanism on June 17th, 2011 by Samuel Kenyon

Apparently a concept I developed in my spare time in 2009, which I dubbed “posthuman factors,” is very similar to some guy’s PhD dissertation in 2010 in which he also used the term posthuman factors. (And I don’t mean everything in his dissertation, but there’s a lot of overlap.)

I recently learned of this through a Wikipedia article I discovered (created in April 2011 by user Nikiburri) called “Posthuman factors.” It has a good summary:

In general, posthuman factors addresses the intersection of design practices that includes (1) the design of posthumans, (2) designing for such posthumans, especially in safe and sustainable ways, and (3) designing the design methodologies that will supersede human-centered design (i.e., “posthuman-centered design”, or the processes of design that posthumans employ).

Interestingly, it cites my IEET article “Why You Should Care About (Post)Human Factors,” published Jan 8, 2010, yet claims that posthuman factors was first “articulated” by Dr. Haakon Faste in his Jan 2010 doctoral dissertation “Posthuman Factors: How Perceptual Robotic Art Will Save Humanity from Extinction.”

Most likely we were both thinking about it and writing about it at around the same time (one would assume that, as with my articles mentioned above, the writing actually started in 2009). And then there are whatever projects led to this particular synthesis of concepts; e.g., in my case it connects at least as far back as my attempt to describe an interface point of view for future human/robot/posthuman/etc. interactions (“Would You Still Love Me If I Was A Robot?“).

But the Wikipedia pages are a bit annoying. The Posthuman factors page has a link to a wikipedia page for Haakon Faste (created by the same user Nikiburri) which informs us that he is a leading figure in the field of posthuman factors and that he coined the term in 2010. Well, guess what—I posted my article “Do We Need a Posthuman Factors Discipline?” in December 2009 on my blog, so I guess that means I coined it first.

But it’s nice to know that I started a new field. And I’m pleased that at least one other person is thinking about these issues.


Multitask! The Bruce Campbell Way

Posted in culture, interaction design, posthuman factors, transhumanism on September 7th, 2010 by Samuel Kenyon

Some have pointed out the supposed increase in multitasking during recent decades [1]. An overlapping issue is the increase in raw information that humans have access to. It is certainly a fascinating sociocultural change. However, humans are not capable of true multitasking. First I will describe what humans do have presently, and then I will discuss what future humans might be capable of.

photo of Bruce Campbell talking on a cell phone

Bruce Campbell

“Multitasking” in humans is primarily switchtasking combined with scripting:

  1. Switchtasking [2] is to switch between tasks. This can be done quite rapidly, so fast in fact that you might feel as though you are truly multitasking.

     In the past I have suggested that attentional consciousness is like a single-threaded manager [3]. However, I want to be clear that I’m not saying there’s a Cartesian Theatre [4]. I’m saying that the brain, although highly parallel at certain levels of detail, has a functionally singular attention and working memory system. Whether the model of a top-down manager is valid in all circumstances is undetermined. Neuroscientists have found a model with top-down influences on visuospatial working memory [5], but that is not necessarily the case for all mechanisms involved with attention.

    How can you have two centers of conscious activity at the same time?


  2. Scripting is the auto-piloting in your mind. A script is the sequence of steps that you can do without conscious attention.

     These scripts are often activities you had to learn at first, for instance bicycling and driving. The reason driving while multitasking is notorious is that the script works until something happens that breaks the script, such as a person wandering out in front of your car suddenly. When the script breaks, your attentional consciousness is interrupted to attend to the situation, but by the time you have decided what to do it might be too late.
Image credit: Paul Oka, CC Attribution-NonCommercial-NoDerivs 2.0 Generic

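The switchtasking model above can be sketched as a single-threaded round-robin scheduler (the task names are made up; this is only a metaphor for attention, not a brain model):

```python
# Toy model of switchtasking: a single "attention" thread interleaves
# tasks one step at a time, switching so quickly it can look like
# parallelism even though only one task ever runs at once.

def task(name, steps):
    # Each task is a generator yielding one unit of work per step.
    for i in range(steps):
        yield f"{name}:{i}"

def switchtask(tasks):
    # Round-robin scheduler: run one step of the front task, then requeue
    # it unless it has finished.
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))
            tasks.append(t)  # task not finished: put it back in line
        except StopIteration:
            pass             # task done: drop it
    return trace

trace = switchtask([task("drive", 2), task("talk", 2)])
print(trace)  # ['drive:0', 'talk:0', 'drive:1', 'talk:1']
```

The interleaved trace is the whole point: from the outside it looks like driving and talking happen together, but the scheduler only ever executes one step of one task at a time.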

You can drive your car with your scripts, meanwhile entertaining yourself with detailed telemetry (e.g. MPG, engine temperatures, etc.), MP3 players and satellite radio, video players, GPS navigation, your cell phone handling multiple calls and running multiple applications, etc. I think many people would love to be able to handle all those interactions at the same time. Some people try and end up crashing. And there are always a few who abuse technology in special ways [6].

I consider background tasks like listening to music to also be just that—in the background. If you actually pay attention to music, you will find that you are not doing it at the same time as your other task (e.g. writing)—you are switching between them.

Cyborg Multitasking

In the future, humans may be able to truly increase their multitasking capacity.

An obvious question is, why bother?

My speculative answer: Society has increased the expectation for simultaneous activities; at the same time, social interactions through always-on mediums are massively popular. Humans desperately want omnipresent interaction multiplicity—be it for work, social interaction, entertainment, or all of the above. The enabling technologies are already here—the real limiting factor is the human brain.

Even when people know they are less efficient due to switchtasking, it is still quite difficult to use that premise to revert to a more efficient way of working, and to focus for longer periods of time on single tasks [7].

Personally, I switch between periods of single-tasking and switchtasking. However, being able to focus for a long period of time on one thing depends on you and your situation. Which brings us to enhancement.

One of the potentially popular mind enhancements will be multitasking. This could start off with working memory enhancements. Then we would be able to switch between more tasks (or more complex tasks) while having the necessary information still loaded for all of them. But true multitasking will require enhancements to our attention system.

This essay is not about how it can be done technically—maybe it will involve drugs, cyborg technology such as electronic implants and nervous interfaces, other types of invasive objects like nanobots, substrate changes (e.g. to digital computers) that enable programmatic enhancements, or none of those. Whatever the case, we can acknowledge some of the problems multitasking cyborgs or posthumans will face.

Problems with Multitasking

in ancient rome there was a poem
about a dog who had two bones
he picked at one he licked the other
he went in circles till he dropped dead
—Devo, “Freedom of Choice”

The main problem is that multitasking will change the architecture of attentional consciousness and working memory. The changes for the new architecture have to take into account control of the body—attempting to answer the phone with the same hand that is ironing could be disastrous. Likewise with trying to run in two directions at the same time. Choices that affect or require the use of limited body resources must reduce to a single decision.
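That reduction to a single decision is essentially resource arbitration. Here is a minimal sketch (the task names and priorities are invented) of how conflicting demands on one limb might collapse to a single winner:

```python
# Toy arbiter for limited body resources: multiple attention threads may
# request the same limb, but each resource is granted to exactly one task.
# Priorities and task names here are hypothetical.

def arbitrate(requests):
    # requests: list of (task, resource, priority); the highest-priority
    # request wins each resource, and everything else must wait or reroute.
    granted = {}
    for task, resource, priority in requests:
        if resource not in granted or priority > granted[resource][1]:
            granted[resource] = (task, priority)
    return {resource: task for resource, (task, _) in granted.items()}

requests = [
    ("iron_shirt", "right_hand", 1),
    ("answer_phone", "right_hand", 3),  # conflicting demand on the same hand
    ("watch_road", "eyes", 5),
]
print(arbitrate(requests))
```

Here the phone wins the hand and the ironing task must stall, which is the kind of single-decision bottleneck a multitasking architecture would have to build in for every limited actuator.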

Also, multiple tasks that require visual perception will have to wait for each other (basically resulting in switchtasking again) unless we also are enhanced with extra visual perception inputs. In general, the limits of the sensory modalities will limit the types of tasks being done at the same time, a problem we already have with our primitive switchtasking.

The other architectural problem is that the multiple attention sub-systems need a way to stay in sync. It’s feasible that a part of the mind would have to become a meta-manager, although that could just be the default attentional consciousness controlling the others.

The Man with the Screaming Brain


From the outside, broken multitasking behavior would look like dissociative identity disorder [8], or even worse, like Bruce Campbell in the movie The Man With the Screaming Brain [9]:
