TRONtastic New Year’s Eve and Human Factors of Crotch Access

Posted in interaction design, making and hacking on January 1st, 2011 by Samuel Kenyon

I improved my TRON:Legacy-esque illuminated vest and added some illuminated leg cladding. Here is a photo of me from last night (New Year’s Eve):

Me at GALACTICA: A TOGETHER NEW YEAR'S EVE @ Think Tank, Cambridge MA

What Worked

This time the vest front and back worked without fail for several hours.  The overall design worked—everybody either understood the TRON reference or thought it was cool even if they had not heard of TRON (yes, there are many people who have no clue what TRON is despite all the advertising).

The leg cladding looked really cool, but it only worked for a few minutes.

leg cladding

What Failed

The EL wire on my left leg failed at a point right before the knee, before I even got to the destination.  After a while the EL wire next to that one also broke.  Since the leg cladding was one long wire, the breaks left my entire trousers conspicuously not shining.

partially illuminated left upper leg plate

partially illuminated upper leg plate

So I had to walk around and dance with a bunch of cardboard strapped to my legs for no reason.

the broken connections

Human factors of crotch access: Another problem with the cyber trousers ensemble was that, rushing at the last minute, I used just one long wire for both legs instead of two.  This resulted in an illuminated wire running straight across my fly, a clear violation of human factors.  After all, I would be drinking, which would inevitably result in needing to urinate, and hence needing to open my fly.  Also, if I wanted to illuminate my crotch, I could come up with a much more attractive scheme than a wire going straight across.  So my quick fix was to cover it with a black wire shroud and push it against my belt, but that in turn probably made the strain on the EL wire much worse.

The arm band: I figured the tiny connector and wires on my arm band would probably fail, and sure enough they did.  I had added some tape as strain relief but it wasn’t enough:

broken wires (pulled out from the heat shrink)

Lessons Learned

Having dealt with lots of wires and connectors in the past on robotics and wearable computers, I knew that the connectors and wires should be robustified; however, I ran out of time before the event.  EL wire also does not handle flexing and pulling very well, so I will pay special attention to that.  And a more robust solution than one long EL wire for both legs would be separate EL wires, so that if one fails the other stays illuminated (soldering EL wire is somewhat annoying, though, because you have to scrape the phosphor coating off the center lead).

Also, making the cardboard attachments for my legs forced me to figure out how to make crude patterns.  Interesting, but certainly not something I’d want to do all the time!

So, next time we will see what else I can come up with to improve this getup before I get completely bored with it.


Multitasking, Consciousness, and George Lucas

Posted in interaction design on August 8th, 2010 by Samuel Kenyon

Humans can only be conscious of one task at a time.

Tasks that user experience and interaction designers are concerned with are usually relatively complex–tasks that require you to think about them.  Generally this means you are aware of what you are doing.  Later on you might be so familiar with a standard task that you don’t have to be aware of it, but at first you have to learn it.  You might think this means consciousness is only needed for learning tasks.  However, in many cases not being aware during a task can result in failure–because your consciousness is required to handle new problems.

And yet it seems like we are multitasking all the time.  I routinely have 3-4 computers and 5-6 monitors with dozens of applications running at work, typing a line of code while somebody asks me a question.  In this photo you can see me multitasking while teleoperating a robot (this was my old office in 2008, with only 4 monitors…).

But I’m not consciously attentive to all of that simultaneously.  I just switch between them quickly.  Typing while listening to someone talk is difficult without accidental cross-pollination, but it is easy if you have a buffer of words/code already in your head and you’re just unconsciously typing it out while your attention is focused on the completely different context of listening to a human talk.

Task switching and flipping between conscious and unconscious control happens so quickly and effortlessly that it’s hard to believe that there is really just one task getting “processed” at a time.  For some strange people, like computer engineers, this makes perfect sense, since that’s how basic CPUs work–one simple instruction at a time, millions of times per second.  Multiple programs can run on serial computers because the computer keeps all the programs in memory, and then hops between them very fast.  A little bit of this program, then a little bit of that program, and so on.
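
As a minimal illustration, here is a little Python sketch of round-robin time slicing with toy generator "programs" of my own invention (nothing here is a real scheduler; it just shows the hopping):

    from collections import deque

    def counter(name, steps):
        """A toy 'program' that yields control back after each step."""
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # hand control back to the scheduler

    def round_robin(programs):
        """Run a little bit of each program in turn until all finish."""
        queue = deque(programs)
        while queue:
            program = queue.popleft()
            try:
                next(program)          # execute one "time slice"
                queue.append(program)  # not finished; back of the line
            except StopIteration:
                pass                   # this program is done

    round_robin([counter("A", 3), counter("B", 2)])
    # Prints A: step 0 / B: step 0 / A: step 1 / B: step 1 / A: step 2 --
    # one instruction stream at a time, interleaved fast enough to look
    # simultaneous.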

As Missy Cummings, a former Navy pilot and human factors researcher, puts it: “In complex problem solving tasks, humans are serial processors in that they can only solve a single complex problem or task at a time, and while they can rapidly switch between tasks, any sequence of tasks requiring complex cognition will form a queue…” [1].

For this reason, Cummings has warned people of the dangers of cell phone use while driving.  However, you can in fact drive while using a cell phone.  You can do lots of things while driving.  Have you ever spaced out while driving (or walking) and found yourself transported to another location?  Who was driving in the interim?  You have trained yourself to drive enough that your mind can actually do it unconsciously.  However, if there is a problem or an unexpected event, you will be alerted to it consciously–or you will not be alert, and you will crash into something or someone.

But, since we can get close to multitasking–by switching quickly and letting learned tasks run unconsciously–why would user interaction designers be worried about multitasking?

Well first, as we already mentioned, often you need to be snapped out of auto-pilot to handle a new or emergency situation.  In some situations, not being conscious most of the time on the primary task can be very dangerous.  Do you want your ambulance driver to be playing GTA IV and polishing his/her nails on the way to rescue you (from your texting-related auto accident)?

Second, the more you multitask, generally the less efficient you become at all the tasks.  Personally, I have also found that if the tasks are in very different contexts, the context switching itself uses a lot of energy.

As Dave Crenshaw said (quote via Janna DeVylder) [2]:

When most people refer to multitasking, they are really talking about switchtasking. No matter how they do it, switching rapidly between two things is just not very efficient or effective.

And see DeVylder’s blog post “Save Me From Myself: Designing for Multitasking” for a good intro to the design considerations of multitasking.

Why is it Serial?

I think that serial consciousness evolved in animals because they are situated and embodied.  It wouldn’t work to have two conscious threads trying to drive one body in different directions.  Multiple threads have to share resources.  Having one thread conscious at a time gets closer to guaranteeing that multiple threads don’t conflict.  I would expect that when the system breaks down it would be very confused and might hurt itself.

Note: If the term “thread” is too computerese for your liking, then perhaps you can think of trains.  Consciousness is like a train station with only one track.  The metaphor breaks down pretty quickly, but hopefully that will get us on the same page.
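
For the thread-minded reader, here is a minimal sketch of that resource constraint (illustrative Python with made-up names; not a claim about how brains actually implement it): two "intention" threads contend for a single body lock, so only one drives the body at a time.

    import threading

    body = threading.Lock()  # only one thread may control the body at a time

    def intention(name, actions):
        for action in actions:
            with body:  # block until no other intention holds the body
                print(f"{name}: {action}")

    t1 = threading.Thread(target=intention,
                          args=("walk to kitchen", ["turn left", "step", "step"]))
    t2 = threading.Thread(target=intention,
                          args=("answer phone", ["reach", "grab", "lift"]))
    t1.start(); t2.start()
    t1.join(); t2.join()
    # The actions interleave, but never overlap: the lock serializes access,
    # so the body is never pulled in two directions at once.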

Certainly there is parallelism in the brain–indeed that is touted as one of the brain’s great advantages.  The parallelism is also very different from most of our digital computers (for those who like to compare brains to computers).  But cell networks are at a much lower level in the skyscraper of the mind.  

What about behaviors?  Somewhere in the middle levels of the mental skyscraper, we do have parallel behaviors, but they are automatic.  The autonomic nervous system (ANS) keeps everything running–breathing, heart rate, sweating, digestion, sexual arousal, etc.  You can be conscious of some of these behaviors, such as breathing, but you don’t need to be.  And I would venture that if you could turn off the ANS and tried to control all those functions consciously at the same time, you would die quickly.

It may be trite but it’s worth invoking a manager hierarchy metaphor: The top manager is consciousness, and as you go lower, things become more automatic and less directly controllable by the higher up manager.  And this top manager is not director George Lucas, who supposedly micro-manages the tiniest details in his movies.  This manager is more like the other George Lucas, the one who oversees a vast empire–he doesn’t care about details (fast-forward to 08:15 in the video below for the relevant discussion).

References

[1] M.L. Cummings and P.J. Mitchell, “Predicting Controller Capacity in Remote Supervision of Multiple Unmanned Vehicles,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 38, no. 2, pp. 451-460, 2008.

[2] D. Crenshaw, The Myth of Multitasking: How “Doing It All” Gets Nothing Done.  Jossey-Bass, 2008.

Crosspost with my other blog, In the Eye of the Brainstorm.

Following Myself With Robots

Posted in interaction design, interfaces, robotics on July 10th, 2010 by Samuel Kenyon

With teleoperated robots it is relatively easy to experience telepresence–just put a wireless camera on a radio-controlled truck and you can try it. Basically, you feel like you are viewing the world from the point of view of the radio-controlled vehicle.

This clip from a James Bond movie is realistic in that he is totally focused on the telepresence via his cell phone while remotely driving a car, with only a few brief local interruptions.

It’s also interesting that the local and remote physical spaces intersected, but he was still telepresenced to the car’s point of view.

Humans cannot process more than one task simultaneously–but they can quickly switch between tasks (although context switching can be very tiresome, in my experience). Humans can also execute a learned script in the background while focusing on a task–for instance, driving (the script) while texting (the focus). Unfortunately, the script cannot handle unexpected problems, like a large ladder falling off of a van in front of you on the highway (which happened to me a month ago). You have to immediately drop the focused task of texting and focus on avoiding a collision.

In the military, historically, one or more people would be dedicated to operating a single robot. The robot operator would be in a control station, a Hummer, or have a suitcase-style control system set up near a Hummer with somebody guarding them. You can’t operate the robot and effectively observe your own situation at the same time. If somebody shoots you, it might be too late to task switch. Also people under stress can’t handle as much cognitive load. When under fire, just like when giving a public presentation, you are often dumber than normal.

But what if you want to operate a robot while being dismounted (not in a Hummer) and mobile (walking/running around)? Well, my robot interface (for the Small Unmanned Ground Vehicle) enables that. The human constraints are still there, of course, so the user will never have complete awareness of their immediate surroundings while simultaneously operating the robot–but the user can switch between those situations almost instantly. However, this essay is not about the interface itself, but about an interesting usage in which you can see yourself from the point of view of the robot. So all you need to know about this robot interface is that it is a wearable computer system with a monocular head-mounted display.

An Army warfighter using one of our wearable robot control systems

One effective method I discovered while operating the robot at the Pentagon a few years ago is to follow myself. This allows me to be in telepresence and still walk relatively safely and quickly. Since I can see myself from the point of view of the robot, I will see any obvious dangers near my body. It was quite easy to get into this out-of-body mode of monitoring myself.

Unfortunately, this usage is not appropriate for many scenarios. Oftentimes you want the robot to be ahead of you, hopefully keeping you out of peril. In many cases you and the robot will not be within line of sight of each other.

As interaction design and autonomy improve for robots, they will more often than not autonomously follow their leaders, so a human will not have to manually drive them. However, keeping yourself in the view of cameras (or other sensors) could still be useful–you might be cognitively loaded with other tasks such as controlling arms attached to the robot, high level planning of robots, viewing information, etc., while being mobile yourself.
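
As a sketch of what that following behavior might look like, here is a toy proportional controller in Python; the gains, standoff distance, and sensing assumptions are all made up for illustration, not taken from any real system mentioned here:

    FOLLOW_DISTANCE = 3.0  # desired standoff from the leader, in meters
    K_TURN = 1.5           # gain on bearing error
    K_SPEED = 0.8          # gain on distance error

    def follow_step(bearing_to_leader, distance_to_leader):
        """One control cycle: steer toward the leader's bearing (radians,
        0 = dead ahead) and close to the standoff distance (meters)."""
        turn_rate = K_TURN * bearing_to_leader
        # Slows to a stop at the standoff distance; backs up if too close.
        forward_speed = K_SPEED * (distance_to_leader - FOLLOW_DISTANCE)
        return turn_rate, forward_speed

    # Leader slightly to the right and 5 m away:
    print(follow_step(bearing_to_leader=0.25, distance_to_leader=5.0))
    # -> (0.375, 1.6): turn gently right and close the gap

A real follower would of course need obstacle avoidance and a strategy for losing sight of the leader–which is exactly where the interaction design questions come back in.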

This is just one of many strange new interaction territories brought about by mobile robots. Intelligent software and new interfaces will make some of the interactions easier/better, but they will be constrained by human factors.

Video: http://www.youtube.com/watch?v=meY1R43fJIQ
Crosspost with my other blog, In the Eye of the Brainstorm.

Why You Should Care About (Post)Human Factors

Posted in interaction design, interfaces, posthuman factors on January 7th, 2010 by Samuel Kenyon

Your experiences and interactions were designed.

Maybe not with people, but certainly your interactions with computers, cameras, cars, software, cell phones, websites, wrappers, games, guns, power tools, pants, chairs, stairs, screens, shows, sports equipment and so on were designed. Because technologies affect society, it is worthwhile to be aware of how they are designed to work with—or in failures, against—people. People may one day include posthumans.

To avoid confusion I will define “posthuman” as it applies to this essay. First, a quote from the IEET definition:

Posthumans could be a symbiosis of human and artificial intelligence, or uploaded consciousnesses, or the result of making many smaller but cumulatively profound technological augmentations to a biological human, i.e. a cyborg. Some examples of the latter are redesigning the human organism using advanced nanotechnology or radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, life extension therapies, neural interfaces, advanced information management tools, cognitive enhancement drugs, wearable or implanted computers, and cognitive techniques.

Another point of view would describe a posthuman as somebody who is outside of the normal ranges of human capacities. The “post” qualification may be due to small out-of-bounds differences in many capacities or a huge difference in only one capacity. This is the point of view that is particularly relevant to the discipline of human factors.

“Human factors” is a term that covers both the science of human properties (cognitive and physical) and applying this science for design and development. HCI (human-computer interaction), HRI (human-robot interaction), and human-automation interaction are all interaction design disciplines which could be considered as specializations of human factors engineering.

Even before considering how posthumans might affect these disciplines, you should already care about human factors and interaction design. Here are a few reasons why:

  • A technology with unusable interfaces and/or bad user experiences can only be used by the small segment of society that has the opportunity (or the training) to use it.
  • Ignorance of human factors, poor interface design, etc. can cause major accidents; likewise good use of human factors can prevent major accidents.
  • Good interfaces and user experiences can help a product or type of product become mega-popular, which causes more sociocultural impact than a product nobody buys.
  • Human factors uses knowledge of human cognitive psychology, which can be used to design interfaces that influence human minds.

Human Factors Meets Posthumans

The amount of change in the usability guidelines is much less than the change in Web technology during the same period. The reason usability changes more slowly is that it mainly derives from the characteristics of human behavior, which is remarkably constant. After all, we don’t get bigger brains as the years go by.

This sound statement from the king of usability, Jakob Nielsen (“Usability makes business sense”), will no longer be true when users no longer have human behavior.

Historically, two of the most influential technologies to human factors were aviation and computing. As impressive and world-changing as those were, posthuman technology will have even more impact. Since posthuman technology will create new cognitive and physical capacities, it will break the limits of human factors so much that the discipline will have to change significantly—possibly mutating into what one might call “posthuman factors.”

Some Existing Problems

Not only will human factors and related disciplines have to change or spawn new disciplines to handle the new and/or altered abilities of enhanced persons, but they also have to deal with various problems already existing today. Here are three current human factors and interaction design problems to consider:

  1. The first issue is accounting for changes due to adoption of the technology being introduced [1]. Technologies often change the systems they are introduced into—and often in surprising ways. System changes include: new ways to work, new tempos of work, more complexities, new adaptations of users to the technology, new types of failures, etc. We live in a time of rapid technological change. Thus it is also a time of rapid system change.

    Posthumans will amplify this problem by adding whole new dimensions of different types and ranges of mental and physical capacities.

    One counterargument against the increasing difficulty of predicting technology change—with posthumans in the mix—is that we might have more knowledge of how posthuman minds work. One of the limitations of modeling or observing an interaction is that the internal mechanisms of the human behavior are largely unknown [2]. But we will know most of the internal mechanisms of cognitive enhancements and completely-artificial cognitive architectures. Also, human psychology, cognitive science, neuroscience, etc. will presumably keep marching forward, so we should know more about human minds in the future as well.

    (Post)human factors, however, will need to be equipped with sufficiently advanced tools for modeling and predicting behavior in a particular system even if the designs of the cognitive enhancements and architectures are known. And as anybody who has observed emergent behavior should suspect, there will probably still be severe limits in practical situations of trying to predict the outcome of a technology introduction.

  2. The second issue is the “dialectic between tradition and transcendence” (a phrase attributed to Pelle Ehn) [3]. Designers can fix small problems, but designing a product that will significantly change how a user conducts an activity is much more difficult. It is just as difficult for the user to know what major change would help them out. And then even if the new technology exists, a lot of people won’t comprehend how it can improve anything beyond the traditional methods.

    Would posthumans inherently solve this problem by having wider adaptive potential? I can’t answer that, but it would certainly depend on the particular posthuman user. The inclinations for tradition vs. transcendence could have more variation in a posthuman group of users. Designers may have to increase the amount of customization and/or automatic adjustments in products—or shrink the target group for each product to specific bandwidths of mental and physical capacities in the posthuman spectrum. A groundbreaking new tool for human users could be irrelevant to a lot of posthuman users.

  3. There is a point of view that technology drives innovations as opposed to needs (see a rebuttal here), and therefore design only works for incremental changes. Basically these innovations would be cases of extremely open-ended problem spaces, i.e. the technologists have no clue how the users will use it. This has been the case, for example, with many kinds of general purpose robots.

    This applies to posthumans because both the posthuman technology, such as cognitive enhancements, and the products for posthumans to interact with might be major innovations. These technology-based innovations will have many hurdles to become not only products, but useful products, and many will fail or disappear along the way.

    Of course, if this technology-first view is incorrect, then posthuman-related needs and opportunities can drive research and technology, and posthuman factors and interaction design can create major innovations.

References

[1] Woods, David, and Dekker, Sidney, “Anticipating the effects of technological change: a new era of dynamics for human factors,” Theoretical Issues in Ergonomics Science, vol. 1, no. 3, pp. 272-282, 2000.

[2] Rouse, William B., Systems Engineering Models of Human-Machine Interaction. New York: Elsevier North Holland, 1980, pp. 129-132.

[3] Preece, Jennifer, “Interview with Terry Winograd” in Interaction Design: Beyond Human-Computer Interaction. New York: Wiley, 2002, p. 71.
