Fiber Optic Neural Interfaces: Tests to Begin Soon

Posted in interfaces, transhumanism on March 2nd, 2011 by Samuel Kenyon

Popular Science [1] has reported a tidbit of information: Marc Christensen’s team at SMU is slated to start testing whether they can stimulate a rat’s leg with optical fibers.

Fiber optic to nervous system interface

This is the same DARPA-funded project I mentioned last September in my article “Softer, Better, Faster, Stronger” [2].  DARPA held a related “Reliable Neural Interface Technology (RE-NET)” workshop back in 2009 [3]:

A well-meaning motor prosthesis with even 90% reliability, such as a prosthetic leg that fails once every 10 steps, would quickly be traded for a less capable but more reliable alternative (e.g., a wheelchair). The functionality of any viable prostheses using recorded neural signals must be maintained while the patient is engaged in or has their attention directed to unrelated activities (e.g., moving, talking, eating, etc.). Since the neural-prosthesis-research community has yet to demonstrate the control of even a simple 1-bit switch with a long-term high level of speed and reliability, the success of more ambitious goals (e.g., artificial limbs) is placed in doubt.

DARPA is interested in identifying the specific fundamental challenges preventing clinical deployment of Reliable Neural Technology (RE-NET), where new agency funding might be able to advance neural-interface technology, thus facilitating its great potential to enhance the recovery of our injured servicemembers and assist them in returning to active duty.

Neurophotonics

Technology comparison

Some of the challenges listed for the optical (neurophotonic sensing) approach are [4][5]:

  • Transduce action potential into optically measurable quantity
  • Modes: ionic concentration / flux vs. electromagnetic field
  • Field Overlap
  • Can’t go straight from voltage (indirect detection)
  • Sensitivity, Parallelism
  • Packaging, Size
  • Untested
  • “What is the minimum level of control-signal information required to recover a range of activities of daily living in both military and civilian situations?”
  • “Need a method for characterizing tissue near implant to better understand long term degradation.”

Some of those challenges probably apply to all forms of neural sensing. Likewise, the metrics for neurophotonic interfaces (resolution, signal-to-noise ratio, and density) probably apply to other methods as well.

The Need for Better Neural Interfaces

Future prosthetics

Maybe the neurophotonic approach won’t work in the end, or it will only work in combination with another method. Whatever the case, a lot of money should be put into this kind of project. We are in desperate need of more advanced neural interfaces. As Dr. Principe of the University of Florida writes [6]:

Just picture yourself being blindfolded in a noisy and cluttered nightclub that you need to navigate by receiving a voice command once a second… and you will understand the problem faced by engineers designing a BMI [Brain Machine Interface].

Present systems are signal translators and will not be the blueprint for clinical applications. Current decoding methods use kinematic training signals, which are not available in the paralyzed. I/O models cannot contend with new environments without retraining. BMIs should not be simply passive decoders; they should incorporate the cognitive abilities of the user.

Interfaces to the nervous system are the key enablers for all future prosthetics, and of course for other exotic devices that don’t even exist yet. Without overcoming this interface hurdle, we’ll be stuck in the stone age of prosthetics and nervous system repair.

References:
[1] M. Peck, “Talk To The Hand: A New Interface For Bionic Limbs,” Popular Science, Feb 24, 2011.
[2] S. Kenyon, “Softer, Better, Faster, Stronger: The Coming of Soft Cybernetics,” H+ Magazine, Sept 21, 2010.
[3] J.W. Judy and M.B. Wolfson, Reliable Neural Interface Technology (RE-NET) website.
[4] M.P. Christensen, “Neuro-photonic Sensing: Possibilities & Directions,” DARPA RE-NET Workshop, Nov 19, 2009.
[5] Optical Breakout Session Report, DARPA RE-NET Workshop, Nov 20, 2009.
[6] J.C. Principe, “Architectures for Brain-Machine Interfaces,” DARPA RE-NET Workshop, Nov 19, 2009.

Image Credits:
[1] Rajeev Doshi, PopSci
[2] DARPA / CIPhER via Physorg
[3] scan of book cover, art by John Berkey


Softer, Better, Faster, Stronger

Posted in interfaces, transhumanism on September 22nd, 2010 by Samuel Kenyon

Now published on h+ magazine: my article “Softer, Better, Faster, Stronger: The Coming of Soft Cybernetics.” Check it out!

I have a titanium screw in my head.  It is a dental implant (root-form endosseous) covered with a crown.

xray of a dental implant

Note: This is a representative photo from Wikipedia, not my personal implant

Osseointegration (fusing implants with bone) is used for many things these days, such as bone-anchored hearing aids and bone-anchored leg prostheses.

photo of a bone-anchored leg prosthetic

This is cool, but there’s a major interface problem if you have a metal rod poking out of your skin: it’s basically an open wound. Researchers have found a solution, however, inspired by deer antlers, called ITAP (Intraosseous Transcutaneous Amputation Prosthesis), in which they get the skin to actually grow into the titanium.

photo of deer head with antlers

Deer antlers go through the skin, like bone-anchored prosthetics

They do this by carefully shaping the titanium and putting lots of tiny holes in it.  ITAPs are what the momentarily famous “bionic” cat Oscar received last June.

In these examples, biology is doing most of the work.  Sure, the chemical properties of titanium make it compatible, but when will the artificial technology pull its weight?  Where are the implants that integrate seamlessly with your body and with other implants?  Where are the computer interfaces that automatically and robustly integrate with any person’s nervous system?

Sure, there’s a lot of great medical technology which does successfully interface with human biology.  Let’s not forget the AbioCor artificial implantable replacement heart as featured in the illustrious film Crank: High Voltage.

photo of abiocor artificial heart

AbioCor

image of jason statham with battery charger attached to nipple and tongue (from the movie Crank 2)

Crank: High Voltage

But there are a lot of things that don’t work well yet, such as direct neural interfaces, although there are glimmers of hope such as optical interfaces to the nervous system. And besides medical technology, what about machines in general: why are they so inflexible and high-maintenance? And it’s not just hardware; the software realm seems to be particularly behind with “soft” and flexible interfaces.

In a recent article called “Building Critical Systems as a Cyborg,” software architect Greg Ball compares the von Neumann algorithmic approach of most conventional software to the cybernetics approach. He says:

Don’t assume those early cyberneticists would be impressed by our modern high-availability computer systems. They might even view our conventional approach to software as fatally arrogant, requiring a programmer to anticipate everything.

What if, instead of fighting changes and new interactions, our software embraced them? A cybernetic approach to software would be more oriented around self-regulation, including regulation of parts that are added to the system from outside.
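As a toy illustration of what self-regulating software might mean, here is a minimal sketch in Python. The component names are hypothetical; the point is only that the component steers its own behavior from feedback instead of relying on a programmer to have anticipated the right value in advance.

```python
# Sketch of a self-regulating software component: rather than a programmer
# hard-coding the right request rate up front, the component observes its
# own failure feedback and steers toward a tolerable error level.
# All names here are illustrative placeholders.

class SelfRegulatingSender:
    def __init__(self, target_failure_rate: float = 0.05):
        self.target = target_failure_rate
        self.rate = 10.0          # requests per second, initial guess
        self.failure_rate = 0.0   # smoothed observation of recent failures

    def observe(self, failed: bool) -> None:
        # Exponentially smoothed estimate of the recent failure rate.
        sample = 1.0 if failed else 0.0
        self.failure_rate = 0.9 * self.failure_rate + 0.1 * sample

    def regulate(self) -> None:
        # Negative feedback: above-target error slows us down,
        # below-target error lets us speed back up.
        error = self.failure_rate - self.target
        self.rate = max(1.0, self.rate * (1.0 - 0.5 * error))
```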

You might argue that regulation with feedback loops has been part of engineering systems for a long time.  But we still have a lot of brittleness in the interfaces.  It’s not easy to make systems out of components unless the interfaces match up perfectly.  In the software realm, things are pretty much the same.  Most of our technology behaves very differently from biology in terms of interfacing, adaptation, learning and growth.  Eventually we can do better than biology, but first we need to be as soft as biology.  This will help us not only for making machines that operate in the dynamic real world of humans, but will also help us make devices that directly attach to humans.

Do We Need Fuzzy Substrates?

photo of fuzzy thing

Computers are embedded in almost all of our devices, and most of them are digital. Information at the low levels is stored as binary. Biology, in contrast, often makes use of analog systems. But does that matter? Take fuzzy logic, for example. Fuzzy logic techniques typically involve the concept of intermediate values between true and false; it’s a way of dealing with vagueness. But you don’t need a special computer for fuzzy logic: it’s just a program running on a digital computer like any other program.
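To make that concrete, here is a minimal sketch of Zadeh-style fuzzy logic in ordinary Python, with an illustrative “warm” membership function (the numbers are arbitrary):

```python
# Toy fuzzy logic on ordinary floats: truth values live in [0.0, 1.0]
# instead of {False, True}. The classic connectives are just min, max,
# and complement -- no special analog hardware required.

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

def fuzzy_not(a: float) -> float:
    return 1.0 - a

def warm(temp_c: float) -> float:
    """Membership function: how 'warm' a temperature is, ramping
    from 0 at 10 C up to fully warm at 25 C."""
    return min(max((temp_c - 10.0) / 15.0, 0.0), 1.0)

t = warm(18.0)  # 18 C is 'somewhat warm' (about 0.53)
print(t, fuzzy_and(t, fuzzy_not(t)))  # a statement and its negation can overlap
```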

Fuzzy logic, probability and other soft-computing approaches could go a long way to cover the role of adaptive interfaces in the computer code of a cyborg.  But are adaptive layers running on digital substrates enough?

UCSD has been doing research with electronic neurons, which are built from analog circuitry. So unlike most computers, the substrate does not represent information with discrete values.

Joseph Ayers and his lab members at Northeastern University were at one point attempting to use these electronic neurons in biomimetic lobster robots. The electronic nervous system (ENS) would generate the behaviors of the robot, such as the pattern of signals that causes useful motion of the legs. The legs are powered by nitinol (an alloy of titanium and nickel) wires, which contract when heated and relax when cooled, thus causing movement.
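To give a flavor of what “generating the pattern of signals” means, here is a toy central pattern generator in Python: two phase oscillators coupled so that they settle into antiphase, like alternating legs. This is only an illustration of rhythmic pattern generation; the actual ENS uses analog electronic neurons, not code like this.

```python
import math

# Toy central pattern generator: two phase oscillators coupled so they
# lock into antiphase, like alternating leg pairs. Purely illustrative;
# Ayers' ENS is built from analog electronic neurons, not a digital loop.

dt, coupling, freq = 0.01, 2.0, 1.0   # step (s), coupling gain, Hz
phase = [0.0, 0.3]                    # start slightly out of sync

for step in range(500):
    # Each oscillator runs at its natural frequency and is nudged
    # toward a half-cycle (pi) offset from its partner.
    d0 = 2 * math.pi * freq + coupling * math.sin(phase[1] - phase[0] - math.pi)
    d1 = 2 * math.pi * freq + coupling * math.sin(phase[0] - phase[1] - math.pi)
    phase[0] += d0 * dt
    phase[1] += d1 * dt
    # A hypothetical motor layer would heat a nitinol wire while its
    # oscillator's output is positive, contracting it to swing the leg.
    drive = [max(0.0, math.sin(p)) for p in phase]

print((phase[1] - phase[0]) % (2 * math.pi))  # settles near pi (antiphase)
```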

photo of biomimetic lobster robot from Northeastern University

biomimetic lobster robot from Northeastern University

The robots already had a digital control system, so the main point of moving to the ENS was to gain chaotic dynamics. As Ayers described the situation:

The present controller is inherently deterministic, i.e., the robot does what we program it to do. A biological nervous system however is self-organizing in a stimulus-dependent manner and can use neurons with chaotic dynamics to make the behavior both robust and adaptive. It is in fact this capability that differentiates robotic from biological movements and the goal of ENS-based controllers.

Besides the dynamic chaos in nervous systems, the aforementioned UCSD group also researches synchronized chaos. It sounds paradoxical, but it actually happens, and it could potentially be used for certain kinds of adaptable interfaces. For instance, synchronized chaos can achieve “asymptotic stability,” which means that two systems can quickly recover synchronization after an external force messes up their sync.
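A classic demonstration of synchronized chaos (in the Pecora-Carroll style, not the UCSD work specifically) fits in a few lines: a “response” Lorenz system that receives only the x signal of a “drive” Lorenz system nevertheless converges onto the drive’s trajectory.

```python
# Sketch of synchronized chaos, Pecora-Carroll style: a 'response' Lorenz
# system sees only the x signal of a 'drive' Lorenz system, yet its other
# variables converge to the drive's. Plain Euler integration, standard
# Lorenz parameters.

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt = 0.001

x, y, z = 1.0, 1.0, 1.0   # drive system
yr, zr = -5.0, 20.0       # response system, deliberately started far away

for step in range(100_000):
    # Drive evolves on its own (chaotically).
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    # Response reuses the drive's x in place of its own x.
    dyr = x * (rho - zr) - yr
    dzr = x * yr - beta * zr
    x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    yr, zr = yr + dyr * dt, zr + dzr * dt

print(abs(y - yr), abs(z - zr))  # both differences shrink toward zero
```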

I have given you a mere taste of soft cybernetics.  Its usage may have to increase, although it is not clear yet whether we need new information substrates such as analog computers.

Image Credits:

  1. DRosenbach at en.wikipedia
  2. Elizabeth Banuelos-Totman, University of Utah
  3. Marieke IJsendoorn-Kuijpers
  4. ABIOMED
  5. Crank: High Voltage (2009), Lionsgate
  6. Mostaque Chowdhury
  7. Jan Witting

What Bruce Campbell Taught Me About Robotics

Posted in artificial intelligence, robotics on March 16th, 2010 by Samuel Kenyon

One of the films which inspired me as a kid was Moontrap, the plot of which has something to do with Bruce Campbell and his comrade Walter Koenig bringing an alien seed back to earth.

moontrap

Nothing ever happens on the moon

This alien (re)builds itself out of various biological and electromechanical parts.

The Moontrap robot

At one point the robot had a skillsaw end effector, not unlike the robot in this exquisite depiction of saw-hand prowess:

Cyborg Justice (Sega Genesis, 1993)

In that game—which I also played as a child—you could mix-and-match legs, torsos, and arms to create robots.

The later movie Virus had a creature similar to the one in Moontrap, and if I remember correctly, the alien robots in the movie *Batteries Not Included could modify and reproduce themselves from random household junk.

The ability of a creature to compose and extend itself is quite fascinating. Not only can it figure out what to do with the objects it happens to encounter, but it can adjust its mental models in order to control these new extensions.

I think that building yourself out of parts is only a difference in degree from tool use.

Tools

During the long watches of the night the solitary sailor begins to feel that the boat is an extension of himself, moving to the same rhythms toward a common goal.  The violinist, wrapped in the stream of sound she helps to create, feels as if she is part of the “harmony of the spheres.”  The climber, focusing all her attention on the small irregularities of the rock wall that will have to support her weight safely, speaks of the sense of kinship that develops between fingers and rock, between the frail body and the context of stone, sky, and wind. —Csikszentmihalyi [1]

Human tool use

Humans are perhaps the most adaptable of animals on earth (leave a comment if you know of a more adaptable organism).

Our action-perception system may have morphology-specific programming. But it’s not so specific that we cannot add or subtract from it. For instance, anything you hold in your hand becomes essentially an extension of your arm. Likewise, you can adapt to a modification in which you completely replace your hand with a different type of end effector.

Alternate human end effector

You might argue that holding something does not really extend your arm. After all, you aren’t hooking it directly to your nervous system. But the brain-environment system does treat external objects as part of the body.

We have always been coupled with technology. We have always been prosthetic bodies.
-Stelarc

Something unique about hands is that they may have evolved due to tool use. Bipedalism allowed this to happen. About 5 million years after bipedalism, tool use and a brain expansion appeared [2]. It’s possible that the Homo sapiens brain was the result of co-evolution with tools.

Oldowan Handaxe

Oldowan Handaxe (credit: University of Missouri)

The body itself is part of the environment, albeit a special one as far as the brain is concerned. The brain has no choice but to allow this willy-nilly freedom of body size changes; how else would you be able to grow from a tiny baby to the full-size lad/gal/transgender you are today?

An example of body-environment overlap is the cutaneous rabbit hopping out of the body experiment [3].

rabbit tattoo

The white cutaneous rabbit

The original cutaneous (meaning “of the skin”) rabbit experiment demonstrated a somatosensory illusion: your body map (in the primary somatosensory cortex) will cause you to report tapping (the “rabbit” hopping) on your skin in between the places where the stimulus was actually applied. The out-of-the-body version extends this illusion onto an external object held by your body (click on the figure below for more info).

Hopping out of the body

Hopping out of the body (credit: Miyazaki, et al)

Some other relevant body map illusions are the extending nose illusion, the rubber hand illusion, and the face illusion.

Get Your Embody Beat

Metzinger’s self-model theory of subjectivity [4] defines three levels of embodiment:

First-order: Purely reflexive, with no self-representation. Most uses of subsumption architecture would be categorized as such (a minimal sketch appears after the figure below).

Second-order: Uses self-representation, which affects its behavior.

Third-order: In addition to self-representation, “you consciously experience yourself as embodied, that you possess [a] phenomenal self-model (PSM)”. Humans, when awake, fall into this category.

Introspection
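To make first-order embodiment concrete, here is a minimal subsumption-style sketch: layered stimulus-response reflexes where a higher-priority layer suppresses lower ones, and no layer consults a model of the robot’s own body. The sensor and motor names are hypothetical.

```python
# Minimal subsumption-style arbitration: pure stimulus -> response layers,
# where a higher-priority layer subsumes lower ones. No layer consults a
# model of the robot's own body, so this is first-order embodiment.
# Sensor fields and motor commands are hypothetical.

from dataclasses import dataclass

@dataclass
class Sensors:
    bumper_hit: bool
    light_left: float
    light_right: float

def avoid(s: Sensors):
    # Highest-priority reflex: back away from any contact.
    return ("reverse", 1.0) if s.bumper_hit else None

def seek_light(s: Sensors):
    # Lower-priority reflex: steer toward the brighter side.
    return ("left", 0.5) if s.light_left > s.light_right else ("right", 0.5)

def arbitrate(s: Sensors):
    # The first layer to produce a command wins; the rest are subsumed.
    for layer in (avoid, seek_light):
        command = layer(s)
        if command is not None:
            return command

print(arbitrate(Sensors(bumper_hit=False, light_left=0.8, light_right=0.2)))
```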

Metzinger refers to the famous starfish robot as an example of a “second-order embodiment” self-model implementation. The starfish robot develops its walk with a dynamic internal self model, and can also adapt to body subtractions (e.g. via damage).

I don’t see why we can’t develop robots that learn how to use tools and even adapt them into their bodies. The natural way may not be the only way, but it’s at least a place to start when making artificial intelligence. AI has an advantage though, even when using the naturally inspired methods, which is that the researchers can speed up phylogenetic development.

What I mean is that I could adapt a robot to a range of environments through evolution in simulations running much faster than real time. Then I could deploy that robot in real life, where it continues its learning, but it has already learned via evolution the important and general stuff that keeps it alive.
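A toy version of that pipeline might look like the following sketch, where the “genome” is just a list of controller gains and the fitness function stands in for a hypothetical physics simulator scoring how well the robot survives:

```python
import random

# Toy evolutionary loop of the kind that can run far faster than real time.
# The genome is a list of controller gains; fitness() is a stand-in for a
# hypothetical simulator scoring survival. All numbers are arbitrary.

def fitness(genome):
    # Placeholder objective: prefer gains near an arbitrary target profile.
    target = [0.5, -0.2, 0.8]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0.0, sigma) for g in genome]

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                     # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]  # refill with mutants

best = max(population, key=fitness)
print(best, fitness(best))  # the evolved controller then deploys on the real robot
```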

Body Mods

The ancient art of cyborg hands

This natural adaptability that you have as part of your interaction with the world could also help you modify yourself with far stranger extensions than chainsaws and cyborg hands.

Well-designed cyborg parts will exploit this natural adaptability to modify your morphology, if you so desire. Perhaps the same scheme could work even with a complete body replacement, or a mind-in-computer scenario in which you may have multiple physical bodies to choose from.

————

References

[1] M. Csikszentmihalyi, Flow: The Psychology of Optimal Experience. New York: Harper Perennial, 1990.

[2] R. Leakey, The Origin of Humankind. New York: BasicBooks, 1994.

[3] M. Miyazaki, M. Hirashima, D. Nozaki, “The ‘Cutaneous Rabbit’ Hopping out of the Body.” The Journal of Neuroscience, February 3, 2010, 30(5):1856-1860; doi:10.1523/JNEUROSCI.3887-09.2010. http://www.jneurosci.org/cgi/content/full/30/5/1856

[4] T. Metzinger, “Self models.” Scholarpedia, 2007, 2(10):4174. http://www.scholarpedia.org/article/Self_models
