Posthuman Factors

Posted in posthuman factors, robotics, transhumanism on June 17th, 2011 by Samuel Kenyon

Apparently a concept I developed in my spare time in 2009, which I dubbed “posthuman factors,” is very similar to some guy’s 2010 PhD dissertation, in which he also used the term posthuman factors. (Not that everything in his dissertation overlaps with my concept, but a lot of it does.)

I recently learned of this through a Wikipedia article I discovered (created in April 2011 by user Nikiburri) called “Posthuman factors.” It has a good summary:

In general, posthuman factors addresses the intersection of design practices that includes (1) the design of posthumans, (2) designing for such posthumans, especially in safe and sustainable ways, and (3) designing the design methodologies that will supersede human-centered design (i.e., “posthuman-centered design”, or the processes of design that posthumans employ).

Interestingly, it cites my IEET article “Why You Should Care About (Post)Human Factors,” published Jan 8, 2010, yet claims that posthuman factors was first “articulated” by Dr. Haakon Faste in his Jan 2010 doctoral dissertation “Posthuman Factors: How Perceptual Robotic Art Will Save Humanity from Extinction.”

Most likely we were both thinking and writing about it at around the same time (one would assume that, as with my articles mentioned above, the writing actually started in 2009). And then there are whatever projects led to this particular synthesis of concepts; e.g., in my case it connects at least as far back as my attempt to describe an interface point of view for future human/robot/posthuman/etc. interactions (“Would You Still Love Me If I Was A Robot?“).

But the Wikipedia pages are a bit annoying. The Posthuman factors page has a link to a wikipedia page for Haakon Faste (created by the same user Nikiburri) which informs us that he is a leading figure in the field of posthuman factors and that he coined the term in 2010. Well, guess what—I posted my article “Do We Need a Posthuman Factors Discipline?” in December 2009 on my blog, so I guess that means I coined it first.

But it’s nice to know that I started a new field. And I’m pleased that at least one other person is thinking about these issues.


Multitask! The Bruce Campbell Way

Posted in culture, interaction design, posthuman factors, transhumanism on September 7th, 2010 by Samuel Kenyon

Some have pointed out the supposed increase in multitasking during recent decades [1]. An overlapping issue is the increase in raw information that humans have access to. It is certainly a fascinating sociocultural change. However, humans are not capable of true multitasking. First I will describe what humans do have presently, and then I will discuss what future humans might be capable of.

Bruce Campbell talking on a cell phone

“Multitasking” in humans is primarily switchtasking combined with scripting:

  1. Switchtasking [2] is switching between tasks. This can be done so rapidly that you might feel as though you are truly multitasking. In the past I have suggested that attentional consciousness is like a single-threaded manager [3]. To be clear, I am not saying there is a Cartesian Theatre [4]; I am saying that the brain, although highly parallel at certain levels of detail, has a functionally singular attention and working memory system. Whether the model of a top-down manager is valid in all circumstances is undetermined. Neuroscientists have found a model with top-down influences on visuospatial working memory [5], but that is not necessarily the case for all mechanisms involved with attention.

    How can you have two centers of conscious activity at the same time?

  2. Scripting is the autopiloting in your mind. A script is a sequence of steps that you can perform without conscious attention. These scripts are often activities you had to learn at first, for instance bicycling and driving. The reason driving while multitasking is notorious is that the script works until something breaks it, such as a person suddenly wandering out in front of your car. When the script breaks, your attentional consciousness is interrupted to attend to the situation, but by the time you have decided what to do it might be too late.
Image credit: Paul Oka, CC Attribution-NonCommercial-NoDerivs 2.0 Generic

You can drive your car with your scripts, meanwhile entertaining yourself with detailed telemetry (e.g. MPG, engine temperatures, etc.), MP3 players and satellite radio, video players, GPS navigation, your cell phone handling multiple calls and running multiple applications, etc. I think many people would love to be able to handle all those interactions at the same time. Some people try and end up crashing. And there are always a few who abuse technology in special ways [6].

I consider background tasks like listening to music to also be just that—in the background. If you actually pay attention to music, you will find that you are not doing it at the same time as your other task (e.g. writing)—you are switching between them.
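The switchtasking-plus-scripting model described above can be illustrated with a toy simulation. This is a hypothetical sketch, not a cognitive model from the literature; the task and script names are invented for illustration:

```python
# Hypothetical sketch: a single "attention" resource round-robins over
# tasks (switchtasking), while learned scripts run in the background
# without attention -- until one breaks and raises an interrupt.

class Script:
    """An automated routine that needs no attention, until it breaks."""
    def __init__(self, name, breaks_at=None):
        self.name = name
        self.step = 0
        self.breaks_at = breaks_at  # step at which the script fails

    def run_step(self):
        self.step += 1
        if self.step == self.breaks_at:
            raise RuntimeError(f"{self.name} broke at step {self.step}")

def switchtask(tasks, scripts, ticks):
    """One attention 'thread': exactly one focus per tick; a broken
    script preempts whatever was being attended to."""
    log = []
    for t in range(ticks):
        for s in scripts:
            try:
                s.run_step()
            except RuntimeError as interrupt:
                log.append(f"tick {t}: ATTENTION -> {interrupt}")
                s.breaks_at = None  # situation handled; script resumes
                break
        else:
            # attention is free: attend to exactly one task this tick
            log.append(f"tick {t}: attending to {tasks[t % len(tasks)]}")
    return log

log = switchtask(["writing", "listening to music"],
                 [Script("driving", breaks_at=3)], ticks=5)
for line in log:
    print(line)
```

The single loop body is the point: attention handles exactly one thing per tick, and a failing script seizes it, just as a pedestrian stepping into the road seizes yours.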

Cyborg Multitasking

In the future, humans may be able to truly increase their multitasking capacity.

An obvious question is, why bother?

My speculative answer: society has increased the expectation of simultaneous activities; at the same time, social interaction through always-on media is massively popular. Humans desperately want omnipresent interaction multiplicity, be it for work, social interaction, entertainment, or all of the above. The enabling technologies are already here; the real limiting factor is the human brain.

Even when people know they are less efficient due to switchtasking, it is still quite difficult to act on that knowledge, revert to a more efficient way of working, and stay focused on single tasks for longer periods of time [7].

Personally, I switch between periods of single-tasking and switchtasking. However, being able to focus for a long period of time on one thing depends on you and your situation. Which brings us to enhancement.

One of the potentially popular mind enhancements will be multitasking. This could start off with working memory enhancements. Then we would be able to switch between more tasks (or more complex tasks) while having the necessary information still loaded for all of them. But true multitasking will require enhancements to our attention system.

This essay is not about how it can be done technically—maybe it will involve drugs, cyborg technology such as electronic implants and nervous interfaces, other types of invasive objects like nanobots, substrate changes (e.g. to digital computers) that enable programmatic enhancements, or none of those. Whatever the case, we can acknowledge some of the problems multitasking cyborgs or posthumans will face.

Problems with Multitasking

in ancient rome there was a poem
about a dog who had two bones
he picked at one he licked the other
he went in circles till he dropped dead
—Devo, “Freedom of Choice”

The main problem is that multitasking will change the architecture of attentional consciousness and working memory. The changes for the new architecture have to take into account control of the body—attempting to answer the phone with the same hand that is ironing could be disastrous. Likewise with trying to run in two directions at the same time. Choices that affect or require the use of limited body resources must reduce to a single decision.

Also, multiple tasks that require visual perception will have to wait for each other (basically resulting in switchtasking again) unless we are also enhanced with extra visual perception inputs. In general, the limits of the sensory modalities will constrain the types of tasks that can be done at the same time, a problem we already have with our primitive switchtasking.

The other architectural problem is that the multiple attention sub-systems need a way to stay in sync. It’s feasible that a part of the mind would have to become a meta-manager, although that could just be the default attentional consciousness controlling the others.
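One way to picture the body-resource constraint from the ironing example: however many attention subsystems there are, requests for a limited body resource must funnel through a single arbiter, reducing conflicting intentions to one decision. A minimal hypothetical sketch (the resource and task names are invented):

```python
# Hypothetical sketch: parallel attention subsystems must route their
# demands on shared body resources (hands, legs, gaze) through a single
# arbiter, so conflicting choices reduce to a single decision.

import threading

class BodyArbiter:
    """Grants exclusive use of limited body resources, one task each."""
    def __init__(self, resources):
        self.locks = {r: threading.Lock() for r in resources}
        self.granted = []

    def request(self, task, resource):
        lock = self.locks[resource]
        if lock.acquire(blocking=False):
            self.granted.append((task, resource))
            return True   # task may use the resource
        return False      # conflict: another task already holds it

arbiter = BodyArbiter(["right_hand", "legs"])
print(arbiter.request("ironing", "right_hand"))       # True
print(arbiter.request("answer_phone", "right_hand"))  # False: must wait
print(arbiter.request("walk_to_door", "legs"))        # True
```

Whether the arbiter is a dedicated meta-manager or just the default attentional consciousness controlling the others, the exclusivity constraint is the same.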

The Man with the Screaming Brain

From the outside, broken multitasking behavior would look like dissociative identity disorder [8], or even worse, like Bruce Campbell in the movie The Man With the Screaming Brain [9].


Why You Should Care About (Post)Human Factors

Posted in interaction design, interfaces, posthuman factors on January 7th, 2010 by Samuel Kenyon

Your experiences and interactions were designed.

Maybe not with people, but certainly your interactions with computers, cameras, cars, software, cell phones, websites, wrappers, games, guns, power tools, pants, chairs, stairs, screens, shows, sports equipment and so on were designed. Because technologies affect society, it is worthwhile to be aware of how they are designed to work with—or in failures, against—people. People may one day include posthumans.

To avoid confusion I will define “posthuman” as it applies to this essay. First, a quote from the IEET definition:

Posthumans could be a symbiosis of human and artificial intelligence, or uploaded consciousnesses, or the result of making many smaller but cumulatively profound technological augmentations to a biological human, i.e. a cyborg. Some examples of the latter are redesigning the human organism using advanced nanotechnology or radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, life extension therapies, neural interfaces, advanced information management tools, cognitive enhancement drugs, wearable or implanted computers, and cognitive techniques.

Another point of view would describe a posthuman as somebody who is outside of the normal ranges of human capacities. The “post” qualification may be due to small out-of-bounds differences in many capacities or a huge difference in only one capacity. This is the point of view that is particularly relevant to the discipline of human factors.

“Human factors” is a term that covers both the science of human properties (cognitive and physical) and applying this science for design and development. HCI (human-computer interaction), HRI (human-robot interaction), and human-automation interaction are all interaction design disciplines which could be considered as specializations of human factors engineering.

Even before considering how posthumans might affect these disciplines, you should already care about human factors and interaction design. Here are a few reasons why:

  • A technology with unusable interfaces and/or bad user experiences can only be used by the small segment of society that has the chance (or the training) to master it.
  • Ignorance of human factors, poor interface design, etc. can cause major accidents; likewise good use of human factors can prevent major accidents.
  • Good interfaces and user experiences can help a product or type of product become mega-popular, which causes more sociocultural impact than a product nobody buys.
  • Human factors uses knowledge of human cognitive psychology, which can be used to design interfaces that influence human minds.

Human Factors Meets Posthumans

The amount of change in the usability guidelines is much less than the change in Web technology during the same period. The reason usability changes more slowly is that it mainly derives from the characteristics of human behavior, which is remarkably constant. After all, we don’t get bigger brains as the years go by.

This sound statement from the king of usability, Jakob Nielsen (“Usability makes business sense”), will no longer be true when users no longer have human behavior.

Historically, two of the most influential technologies to human factors were aviation and computing. As impressive and world-changing as those were, posthuman technology will have even more impact. Since posthuman technology will create new cognitive and physical capacities, it will break the limits of human factors so much that the discipline will have to change significantly—possibly mutating into what one might call “posthuman factors.”

Some Existing Problems

Not only will human factors and related disciplines have to change or spawn new disciplines to handle the new and/or altered abilities of enhanced persons, but they will also have to deal with various problems that already exist today. Here are three current human factors and interaction design problems to consider:

  1. The first issue is accounting for changes due to adoption of the technology being introduced [1]. Technologies often change the systems they are introduced into, and often in surprising ways. System changes include new ways to work, new tempos of work, more complexity, new adaptations of users to the technology, new types of failures, etc. We live in a time of rapid technological change; thus it is also a time of rapid system change.

    Posthumans will amplify this problem by adding whole new dimensions of different types and ranges of mental and physical capacities.

    One counterargument against the increasing difficulty of predicting technology change, with posthumans in the mix, is that we might have more knowledge of how posthuman minds work. One of the limitations of modeling or observing an interaction is that the internal mechanisms of human behavior are largely unknown [2]. But we will know most of the internal mechanisms of cognitive enhancements and completely artificial cognitive architectures. Also, human psychology, cognitive science, neuroscience, etc. will presumably keep marching forward, so we should know more about human minds in the future as well.

    (Post)human factors, however, will need to be equipped with sufficiently advanced tools for modeling and predicting behavior in a particular system even if the designs of the cognitive enhancements and architectures are known. And as anybody who has observed emergent behavior should suspect, there will probably still be severe limits in practical situations of trying to predict the outcome of a technology introduction.

  2. The second issue is the “dialectic between tradition and transcendence” (a phrase attributed to Pelle Ehn) [3]. Designers can fix small problems, but designing a product that will significantly change how a user conducts an activity is much more difficult. It is just as difficult for the user to know what major change would help them out. And then even if the new technology exists, a lot of people won’t comprehend how it can improve anything beyond the traditional methods.

    Would posthumans inherently solve this problem by having wider adaptive potential? I can’t answer that, but it would certainly depend on the particular posthuman user. The inclinations for tradition vs. transcendence could have more variation in a posthuman group of users. Designers may have to increase the amount of customization and/or automatic adjustments in products—or shrink the target group for each product to specific bandwidths of mental and physical capacities in the posthuman spectrum. A groundbreaking new tool for human users could be irrelevant to a lot of posthuman users.

  3. There is a point of view that technology drives innovations as opposed to needs (see a rebuttal here), and therefore design only works for incremental changes. Basically these innovations would be cases of extremely open-ended problem spaces, i.e. the technologists have no clue how the users will use it. This has been the case, for example, with many kinds of general purpose robots.

    This applies to posthumans because both the posthuman technology, such as cognitive enhancements, and the products for posthumans to interact with might be major innovations. These technology-based innovations will have many hurdles to become not only products, but useful products, and many will fail or disappear along the way.

    Of course, if this technology-first view is incorrect, then posthuman-related needs and opportunities can drive research and technology, and posthuman factors and interaction design can create major innovations.


[1] Woods, David, and Dekker, Sidney, “Anticipating the effects of technological change: a new era of dynamics for human factors,” Theoretical Issues in Ergonomics Science, vol. 1, no. 3, pp.272-282, 2000.

[2] Rouse, William B., Systems Engineering Models of Human-Machine Interaction. New York: Elsevier North Holland, 1980, pp.129-132.

[3] Preece, Jennifer, “Interview with Terry Winograd” in Interaction Design: Beyond Human-Computer Interaction. New York: Wiley, 2002, p.71.


Do We Need a Posthuman Factors Discipline?

Posted in interaction design, interfaces, posthuman factors on December 29th, 2009 by Samuel Kenyon

Credit: Boris Artzybasheff (1899 – 1965)

Posthumans will necessarily push the boundaries of human factors, ergonomics, HCI (human computer interaction) and HRI (human robot interaction).  Some of the interactions to be accounted for are interpersonal—how will a posthuman talk to other humans in a given context?

Posthumans will have an interaction and interface legacy situation.  They will have to maintain old bodily and social languages, protocols, etc. for backwards-compatibility with stock humans.  Sometimes the solution to that may fall squarely into the realm of computers and networks, e.g. the people might communicate only indirectly through various software interfaces and filters.  Sometimes the solutions may involve other physical entities such as robots.

An aside: on the subject of interface standards, certainly there will be pressures (such as the market) to make posthuman technology that various types of humans find functional and convenient, which leads to at least some adoption of common standards.  But sometimes companies and people do not adhere to common standards.  Current technology interfaces are often defined by open standards, but sometimes those standards are not completely open (e.g., royalties must be paid to an organization), or they are proprietary and/or secret.  Sometimes proprietary protocols and formats become popular; those are often reverse-engineered, but the originator can redefine the protocol or format at its whim, causing at least temporary incompatibilities.  And whether they are reverse-engineered or not, many implementations are incomplete or break the specification.  Thus there is no guarantee, at least based on human history, that any given posthuman technology will be compatible with anything else.  Perhaps we will eventually curtail this situation with more adaptive protocols combined with smarter technology companies.


Credit: Are Mokkelbost via A Journey Round My Skull

Even if you are a brain in a vat or a pure information entity living in a computer-based system, you will need interfaces in the form of protocols.  Protocols will start with our current ones, but eventually posthumans may require more advanced protocols.  For instance, a protocol set specific to posthumans might be mental-capability handshaking and mind docking.  But the physical substrate can still rear its ugly head.  An example of this harsh reality: a superintelligence in Australia is conversing with a superintelligence in the United States about superstring theory and right at the cusp of a breakthrough a shark chomps through the undersea fiber trunk and science is set back 100 years.

If that example is not far out enough for the audience, then you could instead imagine faster-than-light intercommunications between intelligence clusters spread across the galaxy.  But one day the nature of the universe fluctuates (due to the actions of enemy alien superintelligences), rendering the physical properties that FTL depended on obsolete and disintegrating the entire intergalactic intelligence cloud.

Of course, eventually, one would expect supersmart entities to find more robust solutions for information-based intelligence.  The point of this section was to illustrate just one of the many interface issues which are amplified by posthuman technology.
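At the protocol level, the “mental-capability handshaking” imagined above could resemble the capability negotiation that existing protocols already perform (TLS cipher-suite selection, for example). A hypothetical sketch, with all capability names invented for illustration:

```python
# Hypothetical sketch of mental-capability handshaking, modeled on how
# real protocols (e.g. TLS cipher-suite negotiation) agree on a shared
# feature set. All capability names here are invented.

def handshake(offered, supported):
    """Pick the first capability (in preference order) that both minds
    support, falling back to a legacy baseline for stock humans."""
    for cap in offered:               # offered in preference order
        if cap in supported:
            return cap
    return "legacy_natural_language"  # backwards-compatible fallback

posthuman_a = ["mind_dock_v2", "mind_dock_v1", "semantic_stream"]
posthuman_b = {"mind_dock_v1", "semantic_stream"}
stock_human = {"speech", "gesture"}

print(handshake(posthuman_a, posthuman_b))  # "mind_dock_v1"
print(handshake(posthuman_a, stock_human))  # "legacy_natural_language"
```

The fallback branch is the legacy-interface problem in miniature: posthumans negotiating down to whatever protocols stock humans still speak.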

Change and Feedback

The discipline of human factors can already predict problems that will occur when trying to design and integrate a piece of technology into a system, and these problems apply to posthuman technology as well.  But posthumans make things even more complex: the biological aspect may no longer be constant.  Human factors, ergonomics, HCI and HRI all depend on a relatively static biological norm.  Occasional humans fall out of the normal ranges but for most humans a fit can be made.  Not necessarily so with posthumans.  Advanced drugs, gene therapy, physical modifications, etc. could change the physical properties of a person.  Likewise with cyborg parts and androids.  Cognitive enhancements will totally change the psychological aspect of design.  User centered interaction design depends on known cognitive relationships which are no longer necessarily true when the user is non-human.

The main problem that human factors has to deal with already, which will be amplified by posthumans, is accounting for changes due to adoption of the technology being introduced [1].

Imagine a group of people on a mission, for instance to colonize another planet.  Let’s say we give them all cognitive enhancement A.  This changes how they do their jobs, sometimes in unexpected ways.  Then we develop cognitive enhancement B—but to design B we have to redefine the user as user+A and take into account the changes in the mission operation due to A.  Once again, B changes not only their minds but also how they do their jobs, sometimes in unexpected ways.  Now we design computer interface 2, but to do that we have to redefine human factors and HCI for users with cognitive enhancement B and the usage is for the new B-enhanced mission.  Computer interface 2 also changes how they do their job, sometimes in unexpected ways.  And so on.
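The compounding redesign cycle above can be reduced to a toy loop: each technology is designed against the current user model, and its adoption shifts that model before the next cycle begins. A hypothetical sketch (the enhancement names follow the example above):

```python
# Hypothetical sketch of the compounding redesign loop: every design
# targets the user as they are *now*, and adoption immediately redefines
# that user for the next design cycle.

def introduce(user_model, technology):
    """Adopting a technology redefines the user it was designed for."""
    return user_model + [technology]

user = ["baseline_human"]
for tech in ["enhancement_A", "enhancement_B", "interface_2"]:
    design_target = list(user)       # the user model this design assumes
    user = introduce(user, tech)     # adoption shifts the baseline
    print(f"{tech} designed for {design_target}, yielding {user}")
```

Each iteration's design target is the previous iteration's output, which is why human factors can never treat the user definition as fixed once enhancements start stacking.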

It seems that the difficulty and rate of change will be increasing for human factors and interface design, however there is one counterargument against difficulty.  One of the limitations of modeling or observing an interaction is that the internal mechanisms of the human behavior are largely unknown [2].  But, we will know most of the internal mechanisms of posthumans and AIs.  However, human factors will need to be equipped with sufficiently advanced tools for modeling and predicting posthuman behavior in a context even if the design of the posthuman or AI is known.

Credit: Shane Willis

Human factors may also have to adapt to additional feedback loops, such as when the designers of technology are themselves posthumans who are potentially also being rapidly updated.  Hopefully, this will lead to a trend towards better predictions of effects of technological change and/or faster dynamics to handle and redesign due to the effects.


[1] Woods, David, and Dekker, Sidney, “Anticipating the effects of technological change: a new era of dynamics for human factors,” Theoretical Issues in Ergonomics Science, vol. 1, no. 3, pp. 272-282, 2000.

[2] Rouse, William B., Systems Engineering Models of Human-Machine Interaction. New York: Elsevier North Holland, 1980, pp. 129-132.
