Softer, Better, Faster, Stronger

Posted in interfaces, transhumanism on September 22nd, 2010 by Samuel Kenyon

Now published on h+ magazine: my article “Softer, Better, Faster, Stronger: The Coming of Soft Cybernetics.” Check it out!

I have a titanium screw in my head.  It is a dental implant (root-form endosseous) covered with a crown.

xray of a dental implant

Note: This is a representative photo from Wikipedia, not my personal implant

Osseointegration (fusing implants with bone) is used for many things these days, such as bone-anchored hearing aids and bone-anchored leg prostheses.

photo of a bone-anchored leg prosthetic

This is cool, but there’s a major interface problem if you have a metal rod poking out of your skin: it’s basically an open wound.  Researchers have, however, found a solution based on deer antlers, called ITAP (Intraosseous Transcutaneous Amputation Prosthesis), in which they can get the skin to actually grow into the titanium.

photo of deer head with antlers

Deer antlers go through the skin, like bone-anchored prosthetics

They do this by carefully shaping the titanium and putting lots of tiny holes in it.  ITAPs are what the momentarily famous “bionic” cat Oscar received last June.

In these examples, biology is doing most of the work.  Sure, the chemical properties of titanium make it compatible, but when will the artificial technology pull its weight?  Where are the implants that integrate seamlessly with your body and with other implants?  Where are the computer interfaces that automatically and robustly integrate with any person’s nervous system?

Sure, there’s a lot of great medical technology which does successfully interface with human biology.  Let’s not forget the AbioCor artificial implantable replacement heart as featured in the illustrious film Crank: High Voltage.

photo of abiocor artificial heart

AbioCor

image of jason statham with battery charger attached to nipple and tongue (from the movie Crank 2)

Crank: High Voltage

But there are a lot of things that don’t work well yet, such as direct neural interfaces, although there are glimmers of hope such as optical interfaces to the nervous system.  And beyond medical technology, what about machines in general: why are they so inflexible and high-maintenance?  And it’s not just hardware; the software realm seems particularly behind when it comes to “soft,” flexible interfaces.

In a recent article called “Building Critical Systems as a Cyborg”, software architect Greg Ball compares the von Neumann algorithmic approach of most conventional software to the cybernetics approach.  He says:

Don’t assume those early cyberneticists would be impressed by our modern high-availability computer systems. They might even view our conventional approach to software as fatally arrogant, requiring a programmer to anticipate everything.

What if, instead of fighting changes and new interactions, our software embraced them?  A cybernetic approach to software would be oriented more around self-regulation, including of parts that are added to the system from outside.

You might argue that regulation with feedback loops has been part of engineered systems for a long time.  But we still have a lot of brittleness in the interfaces.  It’s not easy to compose systems out of components unless the interfaces match up perfectly.  In the software realm, things are much the same.  Most of our technology behaves very differently from biology in terms of interfacing, adaptation, learning, and growth.  Eventually we can do better than biology, but first we need to be as soft as biology.  This will help us not only in making machines that operate in the dynamic real world of humans, but also in making devices that attach directly to humans.
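To make the contrast concrete, here’s a toy sketch (my own example, not anything from Ball’s article) of a software component built in the cybernetic spirit: rather than scripting every transition in advance, it continuously measures its own state and corrects toward a goal, so an unanticipated outside disturbance gets absorbed instead of crashing the logic.

```python
# A minimal sketch of a self-regulating component (toy example, not from
# Ball's article): it measures its own output and corrects toward a goal,
# so external disturbances are absorbed rather than breaking the program.

class Regulator:
    def __init__(self, setpoint, gain=0.5):
        self.setpoint = setpoint   # desired value
        self.gain = gain           # how aggressively to correct
        self.value = 0.0           # current (measured) state

    def step(self, disturbance=0.0):
        self.value += disturbance              # the world pushes on us
        error = self.setpoint - self.value     # feedback: measure the gap
        self.value += self.gain * error        # correct toward the goal
        return self.value

reg = Regulator(setpoint=10.0)
for t in range(20):
    # an unanticipated external interaction arrives at t == 10
    print(round(reg.step(disturbance=-5.0 if t == 10 else 0.0), 2))
```

Real systems would close the loop over sensors, services, or users rather than a single number, but the posture is the point: regulate, don’t try to anticipate everything.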

Do We Need Fuzzy Substrates?

photo of fuzzy thing

Computers are embedded in almost all of our devices, and most of them are digital.  Information at the low levels is stored as binary.  Biology, in contrast, often makes use of analog systems.  But does that matter?  Take fuzzy logic for example.  Fuzzy logic techniques typically involve the concept of intermediate values between true and false.  It’s a way of dealing with vagueness.  But you don’t need a special computer for fuzzy logic–it’s just a program running on the digital computer like any other program.
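Here’s roughly what that looks like, as a minimal sketch (a made-up thermostat example, nothing more): degrees of truth between 0.0 and 1.0, a few rules, and a weighted average to get a crisp output, all running as an ordinary program on a digital machine.

```python
# A minimal fuzzy-logic sketch in plain Python (a made-up thermostat, purely
# for illustration): truth is a degree between 0.0 and 1.0 rather than a
# strict True/False, yet it runs as an ordinary digital program.

def clamp(x):
    return max(0.0, min(1.0, x))

def cold(t):         # fully true at <= 10 C, fully false at >= 20 C
    return clamp((20 - t) / 10)

def comfortable(t):  # peaks at 20 C, fades to false at 10 C and 30 C
    return clamp(1 - abs(t - 20) / 10)

def hot(t):          # fully false at <= 20 C, fully true at >= 30 C
    return clamp((t - 20) / 10)

def fan_speed(t):
    # Rules: cold -> fan 0.0, comfortable -> fan 0.3, hot -> fan 1.0.
    # Defuzzify by averaging the rule outputs, weighted by each rule's truth.
    weights = (cold(t), comfortable(t), hot(t))
    outputs = (0.0, 0.3, 1.0)
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.3

for t in (5, 15, 22, 28, 35):
    print(f"{t} C -> fan {fan_speed(t):.2f}")
```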

Fuzzy logic, probability and other soft-computing approaches could go a long way to cover the role of adaptive interfaces in the computer code of a cyborg.  But are adaptive layers running on digital substrates enough?

UCSD has been doing research with electronic neurons, which are built as analog circuits.  So unlike most computers, the substrate does not represent information with discrete values.

Joseph Ayers and his lab members at Northeastern University were at one point attempting to use these electronic neurons in biomimetic lobster robots.  The electronic nervous system (ENS) would generate the behaviors of the robot, such as the patterns of signals that produce useful motion of the legs.  The legs are powered by nitinol (a nickel-titanium alloy) wires, which contract and relax as they are heated and cooled, thus causing movement.

photo of biomimetic lobster robot from Northeastern University

biomimetic lobster robot from Northeastern University

The robots already had a digital control system, so the main point of moving to the ENS was for chaotic dynamics.  As Ayers described the situation:

The present controller is inherently deterministic, i.e., the robot does what we program it to do. A biological nervous system however is self-organizing in a stimulus-dependent manner and can use neurons with chaotic dynamics to make the behavior both robust and adaptive. It is in fact this capability that differentiates robotic from biological movements and the goal of ENS-based controllers.
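The electronic neurons in this line of work are analog circuits implementing dynamical neuron models.  To give a flavor of the kind of dynamics involved, here is a purely illustrative digital simulation of the Hindmarsh–Rose model, a standard set of neuron equations that can produce chaotic bursting.  This is my own sketch, not the Ayers lab’s actual controller.

```python
# Illustrative only: a digital simulation of the Hindmarsh-Rose neuron model,
# a standard equation set that can produce chaotic bursting, to give a flavor
# of the dynamics such electronic neurons embody. Not the actual ENS code.

def hindmarsh_rose(steps=200_000, dt=0.005, I=3.25):
    a, b, c, d, r, s, x_rest = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6
    x, y, z = -1.0, 0.0, 2.0      # membrane potential, fast and slow recovery variables
    trace = []
    for _ in range(steps):        # simple Euler integration
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_rest) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace.append(x)
    return trace

spikes = hindmarsh_rose()
print(max(spikes), min(spikes))   # irregular bursting, not a fixed repeating cycle
```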

Besides the dynamic chaos in nervous systems, the aforementioned UCSD group also researches synchronized chaos.  It sounds paradoxical, but it actually happens.  It could potentially be used for certain kinds of adaptable interfaces.  For instance, synchronized chaos can achieve “asymptotic stability,” which means that two systems can recover synchronization quickly after an external force messes up their sync.
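As a toy illustration of that recovery property (my own sketch, using the classic Pecora–Carroll drive/response construction on the Lorenz equations, not anything from the UCSD work): the response system only ever receives the drive’s x signal, yet its remaining variables lock onto the drive’s, and they lock on again after a perturbation knocks them apart.

```python
# Toy illustration of synchronized chaos (Pecora-Carroll drive/response on the
# Lorenz system): the response sees only the drive's x signal, yet its y and z
# converge to the drive's, and re-converge after a perturbation de-syncs them.

sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 0.001

x, y, z = 1.0, 1.0, 1.0      # drive system
yr, zr = 20.0, -5.0          # response starts far out of sync

for step in range(200_000):
    # drive system
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    # response system: uses the drive's x in place of its own
    dyr = x * (rho - zr) - yr
    dzr = x * yr - beta * zr
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    yr, zr = yr + dt * dyr, zr + dt * dzr
    if step == 100_000:      # an external "force" knocks the response off
        yr += 15.0
    if step % 50_000 == 0:
        print(step, round(abs(y - yr) + abs(z - zr), 4))   # sync error shrinks again
```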

I have given you a mere taste of soft cybernetics.  Its usage may have to increase, although it is not clear yet whether we need new information substrates such as analog computers.

Image Credits:

  1. DRosenbach at en.wikipedia
  2. Elizabeth Banuelos-Totman, University of Utah
  3. Marieke IJsendoorn-Kuijpers
  4. ABIOMED
  5. Crank: High Voltage (2009), Lionsgate
  6. Mostaque Chowdhury
  7. Jan Witting

Multitask! The Bruce Campbell Way

Posted in culture, interaction design, posthuman factors, transhumanism on September 7th, 2010 by Samuel Kenyon

Some have pointed out the supposed increase in multitasking during recent decades [1]. An overlapping issue is the increase in raw information that humans have access to. It is certainly a fascinating sociocultural change. However, humans are not capable of true multitasking. First I will describe what humans do have presently, and then I will discuss what future humans might be capable of.

photo of Bruce Campbell talking on a cell phone

Bruce Campbell

“Multitasking” in humans is primarily switchtasking combined with scripting:

  1. Switchtasking [2] is to switch between tasks. This can be done quite rapidly, so fast in fact that you might feel as though you are truly multitasking.

     In the past I have suggested that attentional consciousness is like a single-threaded manager [3]. However, I want to be clear that I’m not saying there’s a Cartesian Theatre [4]. I’m saying that the brain, although highly parallel at certain levels of detail, has a functionally singular attention and working memory system. Whether the model of a top-down manager is valid in all circumstances is undetermined. Neuroscientists have found a model with top-down influences on visuospatial working memory [5], but that is not necessarily the case for all mechanisms involved with attention.

    How can you have two centers of conscious activity at the same time?

  2. Scripting is the auto-piloting in your mind. A script is the sequence of steps that you can do without conscious attention.

     These scripts are often activities you had to learn at first, for instance bicycling and driving. The reason driving while multitasking is notorious is that the script works until something happens that breaks it, such as a person suddenly wandering out in front of your car. When the script breaks, your attentional consciousness is interrupted to attend to the situation, but by the time you have decided what to do it might be too late.
Image credit: Paul Oka, CC Attribution-NonCommercial-NoDerivs 2.0 Generic

You can drive your car with your scripts, meanwhile entertaining yourself with detailed telemetry (e.g. MPG, engine temperatures, etc.), MP3 players and satellite radio, video players, GPS navigation, your cell phone handling multiple calls and running multiple applications, etc. I think many people would love to be able to handle all those interactions at the same time. Some people try and end up crashing. And there are always the few who abuse technology in special ways [6].

I consider background tasks like listening to music to also be just that—in the background. If you actually pay attention to music, you will find that you are not doing it at the same time as your other task (e.g. writing)—you are switching between them.

Cyborg Multitasking

In the future, humans may be able to truly increase their multitasking capacity.

An obvious question is, why bother?

My speculative answer: Society has increased the expectation of simultaneous activities; at the same time, social interaction through always-on media is massively popular. Humans desperately want omnipresent interaction multiplicity, be it for work, social interaction, entertainment, or all of the above. The enabling technologies are already here; the real limiting factor is the human brain.

Even when people know they are less efficient because of switchtasking, it is still quite difficult to use that knowledge to revert to a more efficient way of working and to stay focused on single tasks for longer periods of time [7].
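A back-of-the-envelope toy model (made-up numbers, not data from the cited article) shows why the inefficiency is easy to underestimate: if every switch costs a small fixed re-orientation overhead, interleaving tasks quietly adds up to a lot of lost time.

```python
# Toy arithmetic (made-up numbers, not data from the cited article): if every
# task switch costs a fixed re-orientation overhead, interleaving tasks takes
# longer overall than finishing them one at a time.

def total_time(task_minutes, slice_minutes, switch_cost):
    """Minutes to finish all tasks, working in round-robin slices."""
    remaining = list(task_minutes)
    elapsed, current = 0.0, None
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            if current is not None and current != i:
                elapsed += switch_cost          # pay to reload context
            work = min(slice_minutes, r)
            remaining[i] -= work
            elapsed += work
            current = i
    return elapsed

tasks = [30, 30, 30]                                          # three 30-minute tasks
print(total_time(tasks, slice_minutes=30, switch_cost=2))     # finish each task in one go
print(total_time(tasks, slice_minutes=5, switch_cost=2))      # interleave: noticeably longer
```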

Personally, I switch between periods of single-tasking and switchtasking. However, being able to focus for a long period of time on one thing depends on you and your situation. Which brings us to enhancement.

One of the potentially popular mind enhancements will be multitasking. This could start off with working memory enhancements. Then we would be able to switch between more tasks (or more complex tasks) while having the necessary information still loaded for all of them. But true multitasking will require enhancements to our attention system.

This essay is not about how it can be done technically—maybe it will involve drugs, cyborg technology such as electronic implants and nervous interfaces, other types of invasive objects like nanobots, substrate changes (e.g. to digital computers) that enable programmatic enhancements, or none of those. Whatever the case, we can acknowledge some of the problems multitasking cyborgs or posthumans will face.

Problems with Multitasking

in ancient rome there was a poem
about a dog who had two bones
he picked at one he licked the other
he went in circles till he dropped dead
—Devo, “Freedom of Choice”

The main problem is that multitasking will change the architecture of attentional consciousness and working memory. The changes for the new architecture have to take into account control of the body—attempting to answer the phone with the same hand that is ironing could be disastrous. Likewise with trying to run in two directions at the same time. Choices that affect or require the use of limited body resources must reduce to a single decision.

Also, multiple tasks that require visual perception will have to wait for each other (basically resulting in switchtasking again) unless we also are enhanced with extra visual perception inputs. In general, the limits of the sensory modalities will limit the types of tasks being done at the same time, a problem we already have with our primitive switchtasking.

The other architectural problem is that the multiple attention sub-systems need a way to stay in sync. It’s feasible that a part of the mind would have to become a meta-manager, although that could just be the default attentional consciousness controlling the others.
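To make that concrete with an entirely hypothetical toy sketch (not a proposal for the actual neural mechanism): the meta-manager can be pictured as an arbiter that grants each shared body resource to exactly one attention thread at a time, so the ironing task and the phone task can never both command the same hand.

```python
# Entirely hypothetical toy sketch of a "meta-manager": an arbiter that grants
# each shared body resource (hand, gaze, legs...) to exactly one attention
# thread at a time, so competing tasks reduce to a single decision per resource.

class MetaManager:
    def __init__(self, resources):
        self.owner = {r: None for r in resources}    # resource -> task holding it

    def request(self, task, resource, priority, priorities):
        holder = self.owner[resource]
        if holder is None:
            self.owner[resource] = task
            return True
        if priority > priorities[holder]:            # preempt a lower-priority task
            self.owner[resource] = task
            return True
        return False                                  # denied: wait or re-plan

manager = MetaManager(["right_hand", "gaze"])
priorities = {"ironing": 1, "answer_phone": 2}

print(manager.request("ironing", "right_hand", 1, priorities))        # True
print(manager.request("answer_phone", "right_hand", 2, priorities))   # True: preempts ironing
print(manager.request("ironing", "right_hand", 1, priorities))        # False: find another hand
print(manager.owner)
```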

The Man with the Screaming Brain

From the outside, broken multitasking behavior would look like dissociative identity disorder [8], or even worse, like Bruce Campbell in the movie The Man With the Screaming Brain [9].


References

[1] http://singularityhub.com/2010/08/26/are-we-too-plugged-in-distracted-vs-enhanced-minds/

[2] http://blog.crankingwidgets.com/2008/08/19/switchtasking/

[3] http://www.science20.com/eye_brainstorm/multitasking_consciousness_and_george_lucas

[4] http://en.wikipedia.org/wiki/Cartesian_theater

[5] http://www.klingberglab.se/pub/Edin2009.pdf

[6] http://www.thesmokinggun.com/documents/bizarre/woman-nabbed-auto-erotic-crime

[7] http://lifehacker.com/5041144/debunking-the-myths-of-multitasking

[8] http://en.wikipedia.org/wiki/Dissociative_identity_disorder

[9] http://www.imdb.com/title/tt0365478/


The Great Drama of Interfaces

Posted in culture, interfaces on August 30th, 2010 by Samuel Kenyon

The great drama of the next few decades will unfold under the crossed stars of the analog and the digital.

—Steven Johnson, Interface Culture

Credit: urbanartcore.eu, CC by-nc-sa 2.0

Credit: Brian Despain

Credit: E. Benyaminso via A Journey Round My Skull, CC by- 2.0

Credit: J (mtonic.com), CC by- 2.0

Credit: J (mtonic.com), CC by- 2.0

Credit: Roberto Rizzato, CC by-nc 2.0

Credit: ARE MOKKELBOST


What Bruce Campbell Taught Me About Robotics

Posted in artificial intelligence, robotics on March 16th, 2010 by Samuel Kenyon

One of the films which inspired me as a kid was Moontrap, the plot of which has something to do with Bruce Campbell and his comrade Walter Koenig bringing an alien seed back to earth.

moontrap

Nothing ever happens on the moon

This alien (re)builds itself out of various biological and electromechanical parts.

The Moontrap robot

At one point the robot had a skillsaw end effector, not unlike the robot in this exquisite depiction of saw-hand prowess:

Cyborg Justice

Cyborg Justice (Sega Genesis, 1993)

In that game—which I also played as a child—you could mix-and-match legs, torsos, and arms to create robots.

The later movie Virus had a similar creature to the one in Moontrap, and if I remember correctly, the alien robots in the movie *Batteries Not Included could modify and reproduce themselves from random household junk.

The ability for a creature to compose and extend itself is quite fascinating. Not only can it figure out what to do with the objects it happens to encounter, but it can adjust its mental models in order to control these new extensions.

I think that building yourself out of parts is only a difference in degree from tool use.

Tools

During the long watches of the night the solitary sailor begins to feel that the boat is an extension of himself, moving to the same rhythms toward a common goal.  The violinist, wrapped in the stream of sound she helps to create, feels as if she is part of the “harmony of the spheres.”  The climber, focusing all her attention on the small irregularities of the rock wall that will have to support her weight safely, speaks of the sense of kinship that develops between fingers and rock, between the frail body and the context of stone, sky, and wind. —Csikszentmihalyi [1]

Human tool use

Humans are perhaps the most adaptable of animals on earth (leave a comment if you know of a more adaptable organism).

Our action-perception system may have morphology-specific programming. But it’s not so specific that we cannot add or subtract from it. For instance, anything you hold in your hand becomes essentially an extension of your arm. Likewise, you can adapt to a modification in which you completely replace your hand with a different type of end effector.

Alternate human end effector

You might argue that holding something does not really extend your arm. After all, you aren’t hooking it directly to your nervous system. But the brain-environment system does treat external objects as part of the body.

We have always been coupled with technology. We have always been prosthetic bodies.
-Stelarc

Something unique about hands is that they may have evolved due to tool use. Bipedalism allowed this to happen: about 5 million years after bipedalism, tool use and a brain expansion appeared [2]. It’s possible that the Homo sapiens brain was the result of co-evolution with tools.

Oldowan Handaxe

Oldowan Handaxe (credit: University of Missouri)

The body itself is part of the environment, albeit a special one as far as the brain is concerned. The brain has no choice but to have this willy-nilly freedom of body size changes—or else how would you be able to grow from a tiny baby to the full size lad/gal/transgender you are today?

An example of body-environment overlap is the cutaneous rabbit hopping out of the body experiment [3].

rabbit tattoo

The white cutaneous rabbit

The original cutaneous (==”of the skin”) rabbit experiment demonstrated a somatosensory illusion: your body map (in the primary somatosensory cortex) will cause you to report tapping (the “rabbit” hopping) on your skin in between the places where the stimulus was actually applied. The out of the body version extends this illusion onto an external object held by your body (click on figure below for more info).

Hopping out of the body

Hopping out of the body (credit: Miyazaki, et al)

Some other relevant body map illusions are the extending nose illusion, the rubber hand illusion, and the face illusion.

Get Your Embody Beat

Metzinger’s self-model theory of subjectivity [4] defines three levels of embodiment:

First-order: Purely reflexive with no self-representation. Most uses of subsumption architecture would be categorized as such.

Second-order: Uses self-representation, which affects its behavior.

Third-order: In addition to self-representation, “you consciously experience yourself as embodied, that you possess phenomenal self-model (PSM)”. Humans, when awake, fall into this category.

introspection

Introspection

Metzinger refers to the famous starfish robot as an example of a “second-order embodiment” self-model implementation. The starfish robot develops its walk with a dynamic internal self model, and can also adapt to body subtractions (e.g. via damage).
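Here is a toy caricature of second-order embodiment (nothing like the starfish robot’s actual algorithm, which learns its body model from motor and sensor data): the controller keeps an explicit, updatable model of which limbs it has, derives its gait from that model, and re-derives the gait whenever the model changes.

```python
# A toy caricature of second-order embodiment (not the starfish robot's actual
# algorithm): behavior is driven by an explicit, updatable model of the body,
# so when the self-model changes (a limb is lost), the gait is re-derived from
# the model instead of being hard-coded.

class SelfModelingWalker:
    def __init__(self, legs):
        self.self_model = set(legs)        # the robot's representation of its own body

    def plan_gait(self):
        # derive a gait from whatever the self-model currently says is available
        legs = sorted(self.self_model)
        if not legs:
            return ["drag body"]
        # alternate the remaining legs between two support groups
        return [f"swing {legs[i::2]}" for i in range(2)]

    def report_damage(self, leg):
        self.self_model.discard(leg)       # update the self-model, not the gait code

walker = SelfModelingWalker(["front_left", "front_right", "rear_left", "rear_right"])
print(walker.plan_gait())
walker.report_damage("rear_right")         # body subtraction
print(walker.plan_gait())                  # gait re-derived from the updated model
```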

I don’t see why we can’t develop robots that learn how to use tools and even adapt them into their bodies. The natural way may not be the only way, but it’s at least a place to start when making artificial intelligence. AI has an advantage though, even when using the naturally inspired methods, which is that the researchers can speed up phylogenetic development.

What I mean by that is that I could adapt a robot to a range of environments through evolution in simulations running much faster than real time. Then I could deploy that robot in real life, where it continues its learning, but it has already learned via evolution the important and general stuff to keep it alive.
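A compressed sketch of what I mean, using a trivial hill-climbing stand-in for real evolutionary robotics (the fitness function here is a made-up placeholder for a physics simulation): parameters are evolved over many cheap simulated generations, and only the best individual is handed to the slow, real-time robot as its starting point.

```python
import random

# A trivial stand-in for evolutionary robotics (illustration only): evolve
# controller parameters against a cheap simulated fitness for many fast
# generations, then hand the best individual to the real robot, which keeps
# adapting online from that starting point.

def simulated_fitness(params):
    # placeholder for a physics simulation scoring a walking controller
    target = [0.7, -0.2, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(generations=200, pop_size=30, mutation=0.1):
    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):                     # "phylogeny" at simulation speed
        population.sort(key=simulated_fitness, reverse=True)
        parents = population[: pop_size // 4]        # keep the top quarter as parents
        population = [
            [p + random.gauss(0, mutation) for p in random.choice(parents)]
            for _ in range(pop_size)
        ]
    return max(population, key=simulated_fitness)

best = evolve()
print("deploy to the real robot with:", [round(p, 2) for p in best])
# ...the deployed robot would then continue learning from real sensor data.
```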

Body Mods

The ancient art of cyborg hands

This natural adaptability that you have as part of your interaction with the world could also help you modify yourself with far stranger extensions than chainsaws and cyborg hands.

Well-designed cyborg parts will exploit this natural adaptability to modify your morphology, if you so desire. Perhaps the same scheme could work even with a complete body replacement, or a mind-in-computer scenario in which you may have multiple physical bodies to choose from.

————

References

[1] M. Csikszentmihalyi, Flow: The Psychology of Optimal Experience. New York: Harper Perennial, 1990.

[2] R. Leakey, The Origin of Humankind. New York: BasicBooks, 1994.

[3] M. Miyazaki, M. Hirashima, D. Nozaki, “The ‘Cutaneous Rabbit’ Hopping out of the Body.” The Journal of Neuroscience, February 3, 2010, 30(5):1856-1860; doi:10.1523/JNEUROSCI.3887-09.2010. http://www.jneurosci.org/cgi/content/full/30/5/1856

[4] T. Metzinger, “Self models.” Scholarpedia, 2007, 2(10):4174. http://www.scholarpedia.org/article/Self_models
