Jeff Lieberman’s Evolution and Future of Consciousness

Posted in philosophy, transhumanism on October 20th, 2011 by Samuel Kenyon

Recently I attended a presentation at MIT by Jeff Lieberman called “It’s Not What You Think: An Evolutionary Theory of Spiritual Enlightenment.”

[Image: Jeff Lieberman]

Lieberman is a science-educated artist and host of a TV show called Time Warp. He’s a relatively good presenter, and given his credentials, one would expect him to juxtapose disparate fields of science and art. However, the downside is that one is not left with a single solid believable conclusion or theory—or at least I wasn’t. Of course, this was also the first time Lieberman gave this talk, so he might improve it in the future.

I think that there are really four different themes woven in his presentation:

  1. Evolution of consciousness.
  2. Future directions of human consciousness.
  3. The concept that consciousness has existed for billions of years in all things, and that human-level consciousness is simply a more complicated construction (this is the weakest point).
  4. The common psychological goal of faiths underlying the world’s popular religions.

I will attempt to describe the first three themes below. I won't say anything about the fourth—it's a neat concept, in which Lieberman interprets religious stories as being about human consciousness, for instance the new kind of thinking we have compared to our animal relatives—but I am not well read in mythology or religious texts.

Please note that I’m leaving out a lot of his talk…he covered a lot of ground.

1. Evolution Of Consciousness

The premise is that human consciousness evolves—and that seems to be a sound statement given animal evolution thus far.

Lieberman gives us a 15-minute compressed history of the universe relevant to consciousness. Our universe starts with undifferentiated energy, then we end up with these various layers of organization emerging from previous layers: energy->particles->atoms->molecules->cells->animals.

Human perception does not, by default, work well for layers much smaller or larger than our everyday scale. We can't even perceive these other worlds without the help of technology. The perception we have is the result of evolution, as Lieberman puts it:

What we take for granted as real on a day to day basis is completely determined by what was functional for our evolutionary past.

This means that not only are we unable to observe outside our small window of perception (unless we use technology), our brain is actually "creating a lie" to operate with. The brain constructs patterns. Lieberman shows some examples, such as a visual illusion, and how viewing a cymbal in slow motion reveals the waves we can't see with normal vision.

Lieberman didn't use these terms, but he gets into some subjects I'm very interested in, such as affordances and perception as an interface. I am also reminded of the excellent book Visual Intelligence by Donald D. Hoffman, which describes the rules our vision systems use to construct reality. And, as Hoffman himself suggests in that book (and expounds in his paper "The Interface Theory of Perception"), the construction of reality may not actually be a reconstruction. The phenomenal sense of something need not resemble the relational sense. Our perception builds fictions that are useful for the organism to survive.

Anyway, Lieberman goes on to talk about consciousness in evolutionarily older organisms, including early humans, and argues that we should not assume we are the apex of consciousness. That is, who knows what potentially better (for some definition of "better") forms of consciousness will become widespread in the future.

2. Future Directions Of Human Consciousness

Lieberman has a concept called the "new mutation," which would operate at a social level, not just at an individual level. Certainly speculation about the future of consciousness should include this possibility of moving up to the next level of nested organization.

[Image: meditation]

Lieberman mentioned a lot of the old-fashioned mind hackers, such as Buddhist monks; he mentioned a world full of Gandhis, or something like that. There's a premise that these kinds of self-applied brain wetware changes are a way to achieve "true" consciousness, if there is such a thing. However, I'm not sure that kind of consciousness leads to future pathways. Let's take a monk who is selfless and full of compassion—that's great, and maybe there is some game model in which all or a certain percentage of the world's inhabitants could operate like that for a better all-around experience for humankind. But is it a method that will work as the basis for the "new mutation," or is there some much better way? Or are existing meditative states not even scratching the surface of useful consciousness modifications?

Lieberman seems to think that existing methods for "enlightenment":

  1. Are a way to access the "lower levels of the self"—which is, in fact, the primitive consciousness that he thinks is so interesting and useful to experience.
  2. Have side effects that are also desirable.

I, however, would not assume such things. The way Lieberman describes this more basic consciousness makes me think of a metaphor of a computer program which can look at its own execution and data (introspection) but normally does not.

This metaphor might also let me describe a potential danger: imagine the program completely abandons its normal operation and spends all its time doing introspection. The first question is…does motivation change? Can it change its own motivation? And is it in jeopardy of dying because it’s no longer paying attention to the outside world?
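
To make the metaphor concrete, here is a minimal sketch (my own illustration, not anything Lieberman presented) of a program that can either attend to the world or introspect. All names and payoff numbers are invented:

```python
import random

class Agent:
    """Toy program that can either process the world or examine itself."""

    def __init__(self, introspection_bias):
        self.introspection_bias = introspection_bias  # 0.0 = never introspect
        self.energy = 10.0
        self.self_knowledge = 0

    def step(self, world_input):
        if random.random() < self.introspection_bias:
            # Introspection mode: examine our own state; the world is ignored.
            self.self_knowledge += 1
        elif world_input == "food":
            # Normal operation: attending to the world pays off in energy.
            self.energy += 2.0
        self.energy -= 0.5  # existing costs something either way
        return self.energy > 0  # still alive?

for bias in (0.1, 0.9):
    random.seed(42)
    agent, steps = Agent(bias), 0
    while steps < 200 and agent.step(random.choice(["food", "noise"])):
        steps += 1
    print(f"bias={bias}: survived {steps} steps, self-knowledge={agent.self_knowledge}")
```

Letting the program change its own motivation would amount to letting `step` rewrite `introspection_bias`—which is exactly where the jeopardy comes in.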

Lieberman noted how the self disappears when we are in dreamless sleep. So "you," as you think of yourself, are essentially nonexistent quite often. One way to look at this awareness of lower consciousness is to think of it as being aware of yourself in deep sleep. You would be disconnected from the interfaces to the real world and, as I said previously with the computer metaphor, be in an introspection mode.

Something Lieberman didn't mention is the overlap with mindfulness, as described by Ellen Langer in her various books on the subject.

Lieberman did mention flow; however, I am not convinced by his interpretation that true flow means the self is nonexistent. It doesn't really sync up with his descriptions of being aware of oneself. They seem, in fact, to be two opposite states of mind: one of being completely immersed in an interactive cycle with the real world, and the other an internal inspection that is not synced to real-world events.

3. The Concept Of Consciousness Having Existed For Billions Of Years In All Things

According to Lieberman:

And consciousness is not something that comes out of the human; it starts at the bottom and is built into the complexity and form of a human.

Now, I totally understand the strategy of trying to turn a concept on its head in order to find a new path for investigation and/or new theories. So when he says this, I am still on board with the general strategy, especially since I'm very interested in system-oriented explanations of consciousness and cognition.

Lieberman tells us that this view somehow makes subjective experience much easier to explain. I suppose his talking about perception and whatnot was supposed to support that claim, but I am not grokking it. He could be on the right track, or the presentation may be a shell without enough data and/or theories to fill it in.

He tries briefly to explain this by saying consciousness is composed of "attraction and repulsion." He gives an example of electrons having attraction and repulsion via fields. But I am not sure how that is equivalent to consciousness at any level. He says that at the higher, more complicated levels of human emotion and thought, "it's still attraction and repulsion to different informational structures."

Well, that’s a nice start, but it’s not even close to a real theory. Perhaps Lieberman got this from some other source and/or decided to cut out elaboration in this beta version of his presentation. He seems to have included it as a basic assumption though, weaving it into the presentation at various points. Perhaps he is attached to it because it lets him link every human’s mind all the way back to the big bang.

Whatever the case, he could cut this theme out or isolate it, and his historical view of the evolution of consciousness would still be valuable (and entertaining), and his speculations on the future of consciousness would still be reasonable and thought-provoking. Likewise with his suggestions for what religious faiths are about at their unadulterated core.

Conclusion

Hopefully Lieberman isn't gunning to be the next pseudo-scientific spiritual guru. At the very least, talking about future directions of consciousness, especially where we might want to go as a social system, could bear fruit.


Image credits:
[1] Discovery Channel
[2] RambergMediaImages


The Future of Emotions

Posted in artificial intelligence, transhumanism on September 11th, 2011 by Samuel Kenyon

I recently happened upon an article [1] about the work of Jennifer Lerner:

Lerner is best known for her research on two topics: the effects of emotion on judgment and decision making, and the effects of accountability on judgment and decision making. Recently, along with a number of National Science Foundation-supported scientists, she appeared on the PBS program “Mind Over Money,” a look at the 2008 stock market crash and the irrational financial decisions people make.

How the human emotional architecture fails us in modern life has been an interest of mine for a long time. Emotions seem to be an integral part of human decision making, but can we improve human emotional systems for the more dangerous and complicated situations existing in the modern world? I am reminded of an essay called “Neo-Emotions” that I wrote in 2005 [2], which I will re-post right here, and then I will mention some of the criticism of that article.

Neo-Emotions

[Image: dramatic mask]

One of my hypotheses right now is that emotions often seem irrational to us simply because many of them are too outdated to work well with modern situations, culture, and technologically-enabled existence.  The solution is to develop neo-emotions.  A neo-emotional system would take whatever beneficial roles existing emotional systems provide, and extend and modify those roles to better suit the environment.

A trivial example of primitive survival emotions negatively influencing a modern situation is the neuroeconomics experiment in which most people will choose $10 now rather than $11 tomorrow [3].  I have seen this in action many times in bargaining situations (at yard sales, flea markets, etc.)—"Do you want $20 for this widget today or $50 possibly never?"  Afterwards you can construct purely logical defenses that you chose the money today because you thought the probability of the greater sum later was unlikely, but your quickly-made choices are more likely to have been driven by emotional underpinnings.
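
As an aside—this is my own illustration, not from the cited study—the $10-now preference is commonly modeled with hyperbolic discounting, which also predicts the tell-tale preference reversal when both options are pushed into the future:

```python
def present_value(amount, delay_days, k=0.1):
    """Perceived value under hyperbolic discounting: V = A / (1 + k*D).
    The impulsiveness parameter k = 0.1 is an arbitrary illustrative value."""
    return amount / (1 + k * delay_days)

# $10 now vs. $11 tomorrow: the immediate reward wins for any k > 0.1.
print(present_value(10, 0), present_value(11, 1))      # 10.0 vs 10.0 (tie at k=0.1)

# The same $1/one-day tradeoff viewed from a year away: now waiting wins.
print(present_value(10, 365), present_value(11, 366))  # ~0.27 vs ~0.29
```

The reversal—impatient up close, patient at a distance—is the signature of the fast emotional valuation this essay is talking about.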

That instance may seem too minor for any concern, but imagine the possibility that a good deal of your decisions are being made with biological control systems developed for animals in the wild.  Inappropriate emotions can also come from low-level fear responses and conditioning.  “The brain seems to be wired to prevent the deliberate overriding of fear responses…Our brains seem to have been designed to allow the fear system to take control in threatening situations and prevent our conscious awareness from reigning” [4].  Decisions made from what some call “negative emotions” can sometimes be devastating depending on the situation [5]:

For instance, in 1962, when John F. Kennedy learned that the Soviets had brought nuclear missiles to Cuba, he became enraged, taking it as a personal affront—after all, the Soviet ambassador had assured him just months before that such a thing would never happen.  But Kennedy's closest advisors deliberately made the effort to help him calm down before deciding what steps to take—and so may have averted a world war.

The need for neo-emotions may seem like a general statement that is hard to test unequivocally.  In some soft form, people may already be attempting neo-emotions through various inefficient and temporary means.  Part of the reason I thought of it in the first place, however, is that we can perform autonomous software agent experiments to confirm or falsify the hypothesis that emotional systems (which develop both phylogenetically and ontogenetically) become outdated/useless/dangerous in more complex environments.  A “more complex environment” includes the notion of rapid change in new ways, especially for high-level structures; by “high-level” we mean constructed from many stages of lower levels down to the original environment.  The complex environment also includes cultural structures and resources that may have become integral to the generations.  Any environment similar to ours should have always been dynamic, of course; indeed, you could include unexpected devastating natural events as a type of environment in which an organism is suddenly ill-equipped to operate/survive.  The quickest analogy to the human situation would be future shock in a techno-industrial world, but that is a crude fit.  For humans the problem is particularly hairy: high-level environment structures store knowledge, and this “extelligence” [6] interacts through culture with individuals.

We can still start with simple experiments.  Emotional systems are most likely substrate independent—and therefore applicable to artificial intelligence, both software-only agents and embodied robots.  But a robot’s needs may not result in the same kind of neo-emotions.  Some people will instantly retort that emotions are unexplainable, not replicable in machines, etc.  This is not the place for a detailed history of emotion and arguments of what emotion is; suffice it to say, many researchers in the realms of biology, neuroscience, artificial intelligence, psychology, philosophy, etc., have attempted to define and work with emotions at least with respect to their field.  Summarily, emotion includes various brain/nervous-system processes, external behaviors, mental states, body states (e.g., facial expressions), social feelings, cultural notions, and more.  The word “emotion” itself has only been in regular use since the early 1800s as a catch-all term overlapping passions, sentiments, feelings, and affections [7].  These terms were associated with soul and will; even now some researchers think emotion may be foundational to consciousness [8].  Emotion is intertwined with the entire evolutionary biological and socio-cultural landscape and is linked to many hot buttons.
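
As a sketch of what one such simple experiment could look like (entirely my own toy illustration—every name and payoff is invented), consider agents whose one "emotion" is a hard-wired fear response, tuned by selection for one environment and then dropped into a changed one:

```python
import random

def fitness(flee_threshold, danger_given_cue, trials=2000):
    """Score an agent whose one 'emotion' is a hard-wired fear response:
    flee whenever the threat cue exceeds flee_threshold."""
    score = 0
    for _ in range(trials):
        cue = random.random()
        danger = random.random() < danger_given_cue(cue)
        if cue > flee_threshold:
            score -= 1            # fleeing wastes foraging time
        elif danger:
            score -= 10           # ignoring a real threat is very costly
        else:
            score += 1            # safe foraging
    return score / trials

ancestral = lambda cue: cue   # cue tracks real danger: fear is well calibrated
modern    = lambda cue: 0.02  # cues (loud noises, scary headlines) rarely lethal

# Crude hill-climb standing in for phylogenetic evolution in the old environment.
best = max((t / 100 for t in range(101)), key=lambda t: fitness(t, ancestral))
print(f"evolved flee threshold: {best:.2f}")
print(f"fitness in ancestral environment: {fitness(best, ancestral):+.2f}")
print(f"same wiring in modern environment: {fitness(best, modern):+.2f}")
print(f"modern optimum, for comparison:    {fitness(1.0, modern):+.2f}")
```

If the fixed wiring holds up in the changed environment, the hypothesis takes a hit; if only agents that can re-tune the threshold ontogenetically do well, that is evidence for it.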

One question that people might ask is: Will I end up as an über-rational machine?  Well, how do you define rationality?  Perhaps a perceived slice of a neo-emotional person's behavior could seem cold, harsh, or too rational—I'm not sure.  Internally, things will certainly not be just the deliberative, deductive, planning capabilities of current humans.  It will be better than that, possibly involving different types of priorities/overrides of deliberative vs. emotional vs. reflexive processing (this is a simplification) and more self-reprogrammable parts of emotional associations, conditioned learning, etc.
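
As a sketch of what those priorities/overrides could mean (again a simplification, and my own illustration rather than a proposal from this essay), consider layered arbitration in which the ordering of the layers is itself data the agent could rewrite:

```python
from typing import Callable, Dict, List, Optional

# Each layer looks at the situation and either proposes an action or passes.
Layer = Callable[[Dict[str, object]], Optional[str]]

def reflexive(state: dict) -> Optional[str]:
    return "withdraw hand" if state.get("pain") else None

def emotional(state: dict) -> Optional[str]:
    return "flee" if state.get("fear", 0.0) > 0.8 else None

def deliberative(state: dict) -> Optional[str]:
    return state.get("plan", "continue current task")

def act(layers: List[Layer], state: dict) -> str:
    for layer in layers:  # first layer to claim the situation wins
        action = layer(state)
        if action is not None:
            return action
    return "idle"

# Roughly human-like priority: reflex overrides emotion, emotion overrides planning.
human = [reflexive, emotional, deliberative]
# A neo-emotional rewiring might let deliberation veto fear in known-safe contexts.
neo = [reflexive, deliberative, emotional]

state = {"pain": False, "fear": 0.95, "plan": "give the speech anyway"}
print(act(human, state))  # -> flee
print(act(neo, state))    # -> give the speech anyway
```

The "neo" ordering is the self-reprogrammable part: the same layers, but the override policy is under the agent's own control.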

That does not mean I want us to mutate into the type of sentience like an Outer Limits alien asking an Earthling scientist, “What is love?” or a Terminator trying to figure out why we cry.  What a species needs, if this turns out to be a real issue in some form, is to modify/extend the emotions.  “Evolution made our brains so smart that we ended up building environments that made some of our mental resources obsolete…We are not slaves to our emotions, but they are hardly at our beck and call either” [4].  It would be interesting to see if giving more control to the neocortex over the amygdala would result in better-functioning humans.  But that seems too narrow a solution, especially since there is still much to learn about the neurophysiology and neuroanatomy of emotions; indeed, the human emotional architecture is not limited to just the amygdala.

Unfortunately, the word “emotion” is a tangled, many-faceted collection of often inconsistent concepts, but at the very least your emotional system is an intertwined part of your mind, and as such is involved with matters of your body and interacting/communicating with other bodies.  So it may or may not be very difficult to improve one part of, for instance, the brain, without having to tweak many other factors.  Neo-emotional architectures may only come from a complete overhaul of our standard equipment wetware, or maybe just a simple matter of explicit training (with the help of extelligence) combined with tightly-coupled human-computer interaction (not necessarily invasive).  I definitely do not think any pseudo-psychological self-help book series will do the trick.  Drugs could play a part, but we don’t want to aim for the stoic zombie society of the movie Equilibrium.  Again, the concept of neo-emotions is not about utter suppression of existing emotional/feeling faculties.

Please note that the need for neo-emotions has nothing to do with the psychological construct and commercial self-help meme known as "emotional intelligence" (see [9] for a comprehensive summary and critique of EI).  Certainly, though, recognizing emotion as a major factor in normal human operations and trying to account for it in an individual is a step in the right direction.  Identifying some of the so-called mental afflictions and destructive emotions could provide examples of where neo-emotions would have served better.  Here is a relevant sample of the Dalai Lama discussing the subject [5]:

What impact do our destructive emotions—hatred, prejudice, and so forth—have on the society as a whole?  What role are they playing in the tremendous problems and sufferings that society is experiencing right now?…Are these destructive emotions to which we are subject at all malleable?  If we were to diminish them, what kind of impact would that have on society as a whole and on the myriad problems that society is experiencing?

However, it is not simply a matter of emotions being destructive, or destructive on a large scale.  We especially don’t want to cut the sources of emotions that could still be essential for our survival even in a modern world.  Also, neo-emotions would not be a mere diminishing of certain emotions through whatever training or mental exercises a human can muster—they would involve changes and extensions that could result in something very hard to imagine with our current understanding.

We have to be careful not to blindly promote the dichotomy between “rationality” and “emotion” (or worse “cognition” and “emotion”) although at certain levels of detail it may be useful.  Mythical symbolic notions of the heart versus the brain should stay in fiction.  Some people seem to think that emotion can be secluded to a little black box, a module that is simply thrown into the mind mix.  This view rarely helps figure out how emotions work and how they evolved.  Again, I must stress that our notions of emotions are actually describing several interlocked processes in our brains (and other parts of our bodies), in which it becomes difficult to separate out deliberation, planning, rationality—these ideal constructions make it sometimes easier and sometimes harder to figure out how the brain works and how to simulate it.  The point is: do not assume that potential neo-emotional brain-body systems will be simple extensions of an uncorrelated emotional system.

Criticism

Well, first, I can criticize the article myself because I didn’t propose in detail various ways to achieve Neo-Emotions. Hopefully I can elaborate more in the future.

One blogger mentioned my article in 2005 [10], saying:

Ah, but our intellects are not so dark that they can’t pull themselves up by their own boot-straps! Or at least that’s what the transhumanists would have us believe.

Yes, perhaps we can change our emotions to better suit our competitive environment. We’ll get rid of pity and love, and replace them with ruthlessness and hatred. Sounds like “Genesis of the Daleks.” Or maybe just your NOW feminist.

And we haven’t even touched on the weirdest “improvement” on nature : the incredible self-mutilation of Michael Jackson. This would be funny if it weren’t true.

Ah, but it is funny.

Of course, I don’t want mental modifications to go the way of Michael Jackson’s face. And Mr. Gage raises a good point—what if people modified their emotions, not just to make better decisions, but for competitive gain? Would people really increase their ruthlessness?

I don’t think it’s that simple—as I keep saying, emotion cannot be so easily decoupled from the rest of the brain’s activities. But if we look to the existing outliers, like psychopaths, perhaps there is a danger of self-modifications resulting in similar mindsets. I don’t think that kills the concept of modifying the emotional aspect of the human mind, it just highlights the difficulty of changing evolutionarily-old structures to better handle our relatively new artificial environments and experiences.

References
[1] B. Mixon Jr. & NSF, "Personal Question Leads Scientist to Academic Excellence," LiveScience, September 1, 2011.
[2] S. H. Kenyon (as Flanneltron), "Neo-Emotions," Transhumanity, Feb. 14, 2005.
[3] L. Brown, "Why Instant Gratification Wins: Brain battle provides insight into consumer behavior," Betterhumans, Oct. 2004. Available: http://www.betterhumans.com/News/news.aspx?articleID=2004-10-14-2
[4] S. Johnson, "The Brain + Emotions: Fear," Discover, pp. 33-39, March 2003.
[5] D. Goleman, et al., Destructive Emotions: How Can We Overcome Them? A Scientific Dialogue with the Dalai Lama. Bantam, 2003, pp. 87, 223-224.
[6] I. Stewart and J. Cohen, Figments of Reality: The Origins of the Curious Mind. Cambridge University Press, 1997.
[7] K. Oatley, Emotions: A Brief History. Blackwell, 2004, p. 135.
[8] A. Damasio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Harcourt, 1999.
[9] G. Matthews, M. Zeidner, & R. D. Roberts, Emotional Intelligence: Science and Myth. MIT Press, 2002.
[10] L. Gage, "Infant Formula, 'Neo-Emotions,' and the Incredible Melting Celebrity," Real Physics (blog), March 3, 2005.


Image Credit: Zofie


Posthuman Factors

Posted in posthuman factors, robotics, transhumanism on June 17th, 2011 by Samuel Kenyon

Apparently a concept I developed in my spare time in 2009, which I dubbed "posthuman factors," is very similar to some guy's 2010 PhD dissertation, in which he also used the term posthuman factors. (I don't mean everything in his dissertation overlaps, but there's a lot of overlap.)

I recently learned of this through a Wikipedia article I discovered (created in April 2011 by user Nikiburri) called “Posthuman factors.” It has a good summary:

In general, posthuman factors addresses the intersection of design practices that includes (1) the design of posthumans, (2) designing for such posthumans, especially in safe and sustainable ways, and (3) designing the design methodologies that will supersede human-centered design (i.e., “posthuman-centered design”, or the processes of design that posthumans employ).

Interestingly, it cites my IEET article “Why You Should Care About (Post)Human Factors,” published Jan 8, 2010, yet claims that posthuman factors was first “articulated” by Dr. Haakon Faste in his Jan 2010 doctoral dissertation “Posthuman Factors: How Perceptual Robotic Art Will Save Humanity from Extinction.”

Most likely we were both thinking about it and writing about it at around the same time (one would assume that, as with my articles mentioned above, the writing actually started in 2009). And then there are whatever projects led to this particular synthesis of concepts; e.g., in my case it connects at least as far back as my attempt to describe an interface point of view for future human/robot/posthuman/etc. interactions ("Would You Still Love Me If I Was A Robot?").

But the Wikipedia pages are a bit annoying. The Posthuman factors page links to a Wikipedia page for Haakon Faste (created by the same user, Nikiburri), which informs us that he is a leading figure in the field of posthuman factors and that he coined the term in 2010. Well, guess what—I posted my article "Do We Need a Posthuman Factors Discipline?" on my blog in December 2009, so I guess that means I coined it first.

But it’s nice to know that I started a new field. And I’m pleased that at least one other person is thinking about these issues.


Multitask! The Bruce Campbell Way

Posted in culture, interaction design, posthuman factors, transhumanism on September 7th, 2010 by Samuel Kenyon

I have a new essay up on the h+ magazine website:

[Image: Bruce Campbell talking on a cell phone]

Some have pointed out the supposed increase in multitasking during recent decades.  An overlapping issue is the increase in raw information that humans have access to.  It is certainly a fascinating sociocultural change.  However, humans are not capable of true multitasking.  First I will describe what humans do have presently, and then I will discuss what future humans might be capable of.

Read more…


Why You Should Care About (Post)Human Factors

Posted in interaction design, interfaces, posthuman factors on January 7th, 2010 by Samuel Kenyon

Your experiences and interactions were designed.

Maybe not with people, but certainly your interactions with computers, cameras, cars, software, cell phones, websites, wrappers, games, guns, power tools, pants, chairs, stairs, screens, shows, sports equipment and so on were designed. Because technologies affect society, it is worthwhile to be aware of how they are designed to work with—or in failures, against—people. People may one day include posthumans.

To avoid confusion I will define “posthuman” as it applies to this essay. First, a quote from the IEET definition:

Posthumans could be a symbiosis of human and artificial intelligence, or uploaded consciousnesses, or the result of making many smaller but cumulatively profound technological augmentations to a biological human, i.e. a cyborg. Some examples of the latter are redesigning the human organism using advanced nanotechnology or radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, life extension therapies, neural interfaces, advanced information management tools, cognitive enhancement drugs, wearable or implanted computers, and cognitive techniques.

Another point of view would describe a posthuman as somebody who is outside of the normal ranges of human capacities. The “post” qualification may be due to small out-of-bounds differences in many capacities or a huge difference in only one capacity. This is the point of view that is particularly relevant to the discipline of human factors.

“Human factors” is a term that covers both the science of human properties (cognitive and physical) and applying this science for design and development. HCI (human-computer interaction), HRI (human-robot interaction), and human-automation interaction are all interaction design disciplines which could be considered as specializations of human factors engineering.

Even before considering how posthumans might affect these disciplines, you should already care about human factors and interaction design. Here are a few reasons why:

  • Only a small segment of society has a chance (or is trained) to use a technology that provides unusable interfaces and/or bad user experiences.
  • Ignorance of human factors, poor interface design, etc. can cause major accidents; likewise good use of human factors can prevent major accidents.
  • Good interfaces and user experiences can help a product or type of product become mega-popular, which causes more sociocultural impact than a product nobody buys.
  • Human factors uses knowledge of human cognitive psychology, which can be used to design interfaces that influence human minds.

Human Factors Meets Posthumans

The amount of change in the usability guidelines is much less than the change in Web technology during the same period. The reason usability changes more slowly is that it mainly derives from the characteristics of human behavior, which is remarkably constant. After all, we don’t get bigger brains as the years go by.

This sound statement from the king of usability, Jakob Nielsen ("Usability makes business sense"), will no longer be true when users no longer have human behavior.

Historically, two of the most influential technologies to human factors were aviation and computing. As impressive and world-changing as those were, posthuman technology will have even more impact. Since posthuman technology will create new cognitive and physical capacities, it will break the limits of human factors so much that the discipline will have to change significantly—possibly mutating into what one might call “posthuman factors.”

Some Existing Problems

Not only will human factors and related disciplines have to change or spawn new disciplines to handle the new and/or altered abilities of enhanced persons, but they also have to deal with various problems already existing today. Here are three current human factors and interaction design problems to consider:

  1. The first issue is accounting for changes due to adoption of the technology being introduced [1]. Technologies often change the systems they are introduced into—and often in surprising ways. System changes include: new ways to work, new tempos of work, more complexities, new adaptations of users to the technology, new types of failures, etc. We live in a time of rapid technological change. Thus it is also a time of rapid system change.

    Posthumans will amplify this problem by adding whole new dimensions of different types and ranges of mental and physical capacities.

    One counterargument against the increasing difficulty of predicting technology change—with posthumans in the mix—is that we might have more knowledge of how posthuman minds work. One of the limitations of modeling or observing an interaction is that the internal mechanisms of the human behavior are largely unknown [2]. But we will know most of the internal mechanisms of cognitive enhancements and completely-artificial cognitive architectures. Also, human psychology, cognitive science, neuroscience, etc. will presumably keep marching forward, so we should know more about human minds in the future as well.

    (Post)human factors, however, will need to be equipped with sufficiently advanced tools for modeling and predicting behavior in a particular system even if the designs of the cognitive enhancements and architectures are known. And as anybody who has observed emergent behavior should suspect, there will probably still be severe limits in practical situations of trying to predict the outcome of a technology introduction.

  2. The second issue is the “dialectic between tradition and transcendence” (a phrase attributed to Pelle Ehn) [3]. Designers can fix small problems, but designing a product that will significantly change how a user conducts an activity is much more difficult. It is just as difficult for the user to know what major change would help them out. And then even if the new technology exists, a lot of people won’t comprehend how it can improve anything beyond the traditional methods.

    Would posthumans inherently solve this problem by having wider adaptive potential? I can’t answer that, but it would certainly depend on the particular posthuman user. The inclinations for tradition vs. transcendence could have more variation in a posthuman group of users. Designers may have to increase the amount of customization and/or automatic adjustments in products—or shrink the target group for each product to specific bandwidths of mental and physical capacities in the posthuman spectrum. A groundbreaking new tool for human users could be irrelevant to a lot of posthuman users.

  3. There is a point of view that technology drives innovations as opposed to needs (see a rebuttal here), and therefore design only works for incremental changes. Basically these innovations would be cases of extremely open-ended problem spaces, i.e. the technologists have no clue how the users will use it. This has been the case, for example, with many kinds of general purpose robots.

    This applies to posthumans because both the posthuman technology, such as cognitive enhancements, and the products for posthumans to interact with might be major innovations. These technology-based innovations will have many hurdles to become not only products, but useful products, and many will fail or disappear along the way.

    Of course, if this technology-first view is incorrect, then posthuman-related needs and opportunities can drive research and technology, and posthuman factors and interaction design can create major innovations.

References

[1] Woods, David, and Dekker, Sidney, "Anticipating the effects of technological change: a new era of dynamics for human factors," Theoretical Issues in Ergonomics Science, vol. 1, no. 3, pp. 272-282, 2000.

[2] Rouse, William B., Systems Engineering Models of Human-Machine Interaction. New York: Elsevier North Holland, 1980, pp. 129-132.

[3] Preece, Jennifer, “Interview with Terry Winograd” in Interaction Design: Beyond Human-Computer Interaction. New York: Wiley, 2002, p.71.


Do We Need a Posthuman Factors Discipline?

Posted in interaction design, interfaces, posthuman factors on December 29th, 2009 by Samuel Kenyon

Credit: Boris Artzybasheff (1899 – 1965)

Posthumans will necessarily push the boundaries of human factors, ergonomics, HCI (human-computer interaction), and HRI (human-robot interaction).  Some of the interactions to be accounted for are interpersonal—how will a posthuman talk to other humans in a given context?

Posthumans will have an interaction and interface legacy situation.  They will have to maintain old bodily and social languages, protocols, etc. for backwards-compatibility with stock humans.  Sometimes the solution to that may fall squarely into the realm of computers and networks, e.g. the people might communicate only indirectly through various software interfaces and filters.  Sometimes the solutions may involve other physical entities such as robots.

An aside: on the subject of interface standards, certainly there will be pressures (such as the market) to make posthuman technology that various types of humans find functional and convenient, which leads to at least some adoption of common standards.  But sometimes companies and people do not adhere to common standards.  Current technology interfaces are often defined by open standards, but sometimes they are not completely open (e.g., royalties are to be paid to an organization), or they are proprietary and/or secret.  Sometimes the proprietary protocols and formats become popular; those proprietary standards are often reverse-engineered, however the originator can redefine the protocol/format at its whim causing at least temporary incompatibilities.  Whether they are reverse-engineered or not, many implementations are incomplete or break the specification.  Thus there is no guarantee, at least based on human history, that any given posthuman technology will be compatible with anything else.  Perhaps we will eventually curtail this situation with more adaptive protocols combined with smarter technology companies.

Interfaces

Credit: Are Mokkelbost via A Journey Round My Skull

Even if you are a brain in a vat or a pure information entity living in a computer-based system, you will need interfaces in the form of protocols.  Protocols will start with our current ones, but eventually posthumans may require more advanced protocols.  For instance, a protocol set specific to posthumans might be mental-capability handshaking and mind docking.  But the physical substrate can still rear its ugly head.  An example of this harsh reality: a superintelligence in Australia is conversing with a superintelligence in the United States about superstring theory and right at the cusp of a breakthrough a shark chomps through the undersea fiber trunk and science is set back 100 years.
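
A toy version of what mental-capability handshaking might look like, borrowing the shape of version negotiation in today's protocols (the capability names are, obviously, invented):

```python
def handshake(peer_offers: dict, required: set) -> dict:
    """Negotiate a shared capability set before 'docking' two minds,
    the way TLS negotiates ciphers: intersect the peer's offers with
    local support, agree on the lowest common version, and fail loudly
    if something required is missing rather than mis-communicating."""
    local = {"natural_language": 1, "formal_proofs": 2, "qualia_stream": 1}
    shared = {cap: min(ver, local[cap])
              for cap, ver in peer_offers.items() if cap in local}
    missing = required - shared.keys()
    if missing:
        raise ConnectionError(f"cannot dock: no common support for {missing}")
    return shared

peer = {"natural_language": 3, "formal_proofs": 1, "telepathy": 1}
print(handshake(peer, required={"natural_language"}))
# -> {'natural_language': 1, 'formal_proofs': 1}; 'telepathy' is unsupported, dropped
```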

If that example is not far out enough for the audience, then you could instead imagine faster-than-light intercommunications between intelligence clusters spread across the galaxy.  But one day the nature of the universe fluctuates (due to the actions of enemy alien superintelligences), rendering obsolete the physical properties that FTL depended on, which disintegrates the entire intergalactic intelligence cloud.

Of course, eventually, one would expect supersmart entities to find more robust solutions for information-based intelligence.  The point of this section was to illustrate just one of the many interface issues which are amplified by posthuman technology.

Change and Feedback

The discipline of human factors can already predict problems that will occur when trying to design and integrate a piece of technology into a system, and these problems apply to posthuman technology as well.  But posthumans make things even more complex: the biological aspect may no longer be constant.  Human factors, ergonomics, HCI and HRI all depend on a relatively static biological norm.  Occasional humans fall out of the normal ranges but for most humans a fit can be made.  Not necessarily so with posthumans.  Advanced drugs, gene therapy, physical modifications, etc. could change the physical properties of a person.  Likewise with cyborg parts and androids.  Cognitive enhancements will totally change the psychological aspect of design.  User centered interaction design depends on known cognitive relationships which are no longer necessarily true when the user is non-human.

The main problem that human factors has to deal with already, which will be amplified by posthumans, is accounting for changes due to adoption of the technology being introduced [1].

Imagine a group of people on a mission, for instance to colonize another planet.  Let’s say we give them all cognitive enhancement A.  This changes how they do their jobs, sometimes in unexpected ways.  Then we develop cognitive enhancement B—but to design B we have to redefine the user as user+A and take into account the changes in the mission operation due to A.  Once again, B changes not only their minds but also how they do their jobs, sometimes in unexpected ways.  Now we design computer interface 2, but to do that we have to redefine human factors and HCI for users with cognitive enhancement B and the usage is for the new B-enhanced mission.  Computer interface 2 also changes how they do their job, sometimes in unexpected ways.  And so on.
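
A back-of-the-envelope way to see why this treadmill is hard (pure illustration; every number is invented): each design targets the current user model, but each deployment shifts the real users by an intended effect plus a surprise term the model never captures, so the model-reality gap can accumulate:

```python
import random

random.seed(7)
real_user  = {"memory": 1.0, "attention": 1.0}  # actual crew capacities
user_model = dict(real_user)                     # what the designers believe

for enhancement in ("A", "B", "computer interface 2"):
    # Each deployment has an intended effect plus an unexpected component...
    for trait in real_user:
        real_user[trait] += 0.5 + random.uniform(-0.3, 0.3)
    # ...but the designers' model only captures the intended effect.
    for trait in user_model:
        user_model[trait] += 0.5
    gap = sum(abs(real_user[t] - user_model[t]) for t in real_user)
    print(f"after {enhancement}: model-reality gap = {gap:.2f}")
```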

It seems that the difficulty and rate of change will keep increasing for human factors and interface design; however, there is one counterargument regarding the difficulty.  One of the limitations of modeling or observing an interaction is that the internal mechanisms of the human behavior are largely unknown [2].  But we will know most of the internal mechanisms of posthumans and AIs.  However, human factors will need to be equipped with sufficiently advanced tools for modeling and predicting posthuman behavior in a context even if the design of the posthuman or AI is known.

Credit: Shane Willis

Human factors may also have to adapt to additional feedback loops, such as when the designers of technology are themselves posthumans who are potentially also being rapidly updated.  Hopefully, this will lead to a trend towards better predictions of the effects of technological change and/or faster dynamics for handling and redesigning in response to those effects.

References

[1] Woods, David, and Dekker, Sidney, “Anticipating the effects of technological change: a new era of dynamics for human factors,” Theoretical Issues in Ergonomics Science, vol. 1, no. 3, pp. 272-282, 2000.

[2] Rouse, William B., Systems Engineering Models of Human-Machine Interaction. New York: Elsevier North Holland, 1980, pp. 129-132.
