Enactive Interface Perception and Affordances

Posted in artificial intelligence, interfaces on November 14th, 2011 by Samuel Kenyon

I just published version 2 of my Enactive Interface Perception essay over on Science 2.0.

It’s now called “Enactive Interface Perception and Affordances”.


GUI Prototyping at a BostonCHI Seminar

Posted in interaction design on March 28th, 2011 by Samuel Kenyon

On Friday I attended BostonCHI’s seminar Tools of the Trade: User Experience Research and Design Skills. Since all courses were day-long, I had to choose only one.  My choice was Prototyping Tips and Tools for Effective UX Design.

Here is an embedding of the instructor’s Prezi used during the course.  It’s pretty good, except for the right-brain/left-brain crap, which is a myth.

A few things I learned:

  • (meta) Prezi seems to be an extremely useful tool for presentations and/or videos
  • CaseComplete appears to be a very useful tool for managing use cases, requirements, and traceability between everything, even down to test plans.
  • FlairBuilder is a pretty good tool for prototyping. You can also use it for wireframing.  Unfortunately, it still has some major bugs (it crashed several times for everybody in the class).  The file format is XML; it’s simple enough to read manually, and I messed around with it a bit and reloaded the modified file (the program didn’t choke on my hacked files, even when I purposely did weird things).  Anyway, I like that somebody else could easily write a program or script to reuse one’s Flair files, for instance to create visualizations of how all the elements are connected, or activity diagrams (see the sketch after this list).
  • FlairBuilder card stacks are very useful.  For some, it’s a new concept; for me, it brought back fond memories of card-stack apps I used in the distant past, such as HyperStudio and one I made myself in high school with QuickBasic.
  • Prototyping tools like FlairBuilder and Axure are worth using if you need to demonstrate a GUI with lots of transitions, etc. that would take a long time to actually code.
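
Since the Flair project format is plain XML, a small script can mine it. Here’s a minimal sketch of the kind of reuse I mean; the tag and attribute names (“screen”, “link”, “target”) and the file name are hypothetical stand-ins, since I’m not documenting the actual Flair schema here.

```python
# Minimal sketch: walk a FlairBuilder project file and print which screens
# link to which. Tag/attribute names ("screen", "link", "target") are
# hypothetical placeholders -- the real Flair schema may differ.
import xml.etree.ElementTree as ET

def list_links(path):
    root = ET.parse(path).getroot()
    for screen in root.iter("screen"):
        name = screen.get("name", "<unnamed>")
        for link in screen.iter("link"):
            print(f"{name} -> {link.get('target', '?')}")

if __name__ == "__main__":
    list_links("prototype.flair")  # hypothetical file name
```

From a dump like that it would be straightforward to emit, say, Graphviz input and get a picture of how the screens connect.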

Prototyping may be more associated with web design but I have found it to be useful for other kinds of GUIs.  Indeed, I am not a web designer at all.  But I have no problem stealing good ideas from web design.  In my experience, a working demo is better, but if you can’t do that in time (or if it would be a waste of effort) then prototype or at least make static mockups.


Fiber Optic Neural Interfaces: Tests to Begin Soon

Posted in interfaces, transhumanism on March 2nd, 2011 by Samuel Kenyon

Popular Science [1] has reported a tidbit of information: Marc Christensen’s team at SMU is supposed to start testing whether they can stimulate a rat’s leg with optical fibers.

Fiber optic to nervous system interface

This is the same DARPA-funded project I mentioned last September in my article “Softer, Better, Faster, Stronger” [2].  DARPA held a related “Reliable Neural Interface Technology (RE-NET)” workshop back in 2009 [3]:

A well-meaning motor prosthesis with even 90% reliability, such as a prosthetic leg that fails once every 10 steps, would quickly be traded for a less capable but more reliable alternative (e.g., a wheelchair). The functionality of any viable prostheses using recorded neural signals must be maintained while the patient is engaged in or has their attention directed to unrelated activities (e.g., moving, talking, eating, etc.). Since the neural-prosthesis-research community has yet to demonstrate the control of even a simple 1-bit switch with a long-term high level of speed and reliability, the success of more ambitious goals (e.g., artificial limbs) are placed in doubt.

DARPA is interested in identifying the specific fundamental challenges preventing clinical deployment of Reliable Neural Technology (RE-NET), where new agency funding might be able to advance neural-interface technology, thus facilitating its great potential to enhance the recovery of our injured servicemembers and assist them in returning to active duty.
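
That 90% figure is worth doing the arithmetic on (my own back-of-envelope illustration, not from the workshop): if each step independently succeeds with probability 0.9, a walk of any real length becomes essentially hopeless.

```python
# Back-of-envelope: if each step succeeds independently with probability p,
# the chance of taking n consecutive steps without a single failure is p**n.
p = 0.90                      # per-step reliability from the quoted example
for n in (10, 50, 100):
    print(f"{n} steps without failure: {p**n:.5f}")
# 10 -> ~0.349, 50 -> ~0.005, 100 -> ~0.00003
```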

Neurophotonics

Technology comparison

Some of the challenges listed for the optical (neurophotonic sensing) approach are [4][5]:

  • Transduce action potential into optically measurable quantity
  • Modes: ionic concentration / flux vs. electromagnetic field
  • Field Overlap
  • Can’t go straight from voltage (indirect detection)
  • Sensitivity, Parallelism
  • Packaging, Size
  • Untested
  • “What is the minimum level of control-signal information required to recover a range of activities of daily living in both military and civilian situations?”
  • “Need a method for characterizing tissue near implant to better understand long term degradation.”

Some of those challenges probably apply to all forms of neuro sensing.  Likewise, the metrics for neurophotonic interfaces—resolution, signal-to-noise ratio, and density—probably apply to other methods as well.
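
As a concrete example of one of those metrics, here is a minimal sketch of how signal-to-noise ratio is commonly estimated for a recorded trace. The data is entirely synthetic and the separation into “signal” and “noise” segments is assumed; this only pins down what the metric measures, not how the workshop computed it.

```python
# Minimal sketch: estimate signal-to-noise ratio (in dB) for a recorded
# trace, assuming we can separate "signal" epochs (e.g., around detected
# events) from "noise" epochs. Entirely synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 10_000)                        # baseline noise
signal = 5.0 * np.sin(np.linspace(0, 20 * np.pi, 10_000))   # stand-in "signal"

snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
print(f"SNR ~ {snr_db:.1f} dB")   # roughly 11 dB for these numbers
```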

The Need for Better Neural Interfaces

Future prosthetics

Maybe the neurophotonic approach won’t work in the end, or it will only work in combination with another method.  Whatever the case, a lot of money should be put into this kind of project.  We are in desperate need of more advanced neural interfaces.  As Dr. Principe of the University of Florida writes [6]:

Just Picture yourself being blindfolded in a noisy and cluttered night club that you need to navigate by receiving a voice command once a second…And you will understand the problem faced by engineers designing a BMI [Brain Machine Interface].

Present systems are signal translators and will not be the blue print for clinical applications.  Current decoding methods use kinematic training signals – not available in the paralyzed. I/O models cannot contend with new environments without retraining.  BMIs should not be simply a passive decoder – incorporate cognitive abilities of the user.
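
To make the “signal translator” criticism concrete, here is a minimal sketch of that style of decoder: a plain linear regression from binned firing rates to hand velocity. Everything is synthetic and simplified (it is not Principe’s architecture or any real BMI), but it shows the dependence he objects to: the fit needs measured kinematics, which a paralyzed user cannot provide.

```python
# Minimal sketch of a "signal translator" decoder: linear regression from
# binned firing rates to hand velocity. Synthetic data for illustration;
# real decoders are trained on recorded spikes plus *measured* kinematics,
# which is exactly the training signal a paralyzed user cannot supply.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_units = 2000, 30
rates = rng.poisson(5.0, size=(n_samples, n_units)).astype(float)    # firing rates
true_w = rng.normal(0, 0.1, size=(n_units, 2))                        # hidden mapping
velocity = rates @ true_w + rng.normal(0, 0.5, size=(n_samples, 2))   # x/y hand velocity

# Least-squares fit: decoder weights W such that rates @ W approximates velocity
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
pred = rates @ W
print("mean abs error:", np.abs(pred - velocity).mean())
```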

Interfaces to the nervous system are the key enablers for all future prosthetics—and of course other exotic devices that don’t even exist yet.  Without overcoming this interface hurdle, we’ll be stuck in the stone age of prosthetics and nervous system repair.

References:
[1] M. Peck, “Talk To The Hand: A New Interface For Bionic Limbs,” Popular Science, Feb 24, 2011.
[2] S. Kenyon, “Softer, Better, Faster, Stronger: The Coming of Soft Cybernetics,” H+ Magazine, Sept 21, 2010.
[3] J.W. Judy & M.B. Wolfson, RE-NET website.
[4] M.P. Christensen, “Neuro-photonic Sensing: Possibilities & Directions,” DARPA RE-NET Workshop, Nov 19, 2009.
[5] Optical Breakout Session Report, DARPA RE-NET Workshop, Nov 20, 2009.
[6] J.C. Principe, “Architectures for Brain-Machine Interfaces,” DARPA RE-NET Workshop, Nov 19, 2009.

Image Credits:
[1] Rajeev Doshi, PopSci
[2] DARPA / CIPhER via Physorg
[3] Scan of book cover, art by John Berkey


Metaphysics of Interfaces

Posted in interfaces, philosophy on December 22nd, 2010 by Samuel Kenyon

We have an everyday sense of interfaces.  The computers we use all have interfaces, both in software and hardware.  If they didn’t, we wouldn’t be able to use them (of course, some interfaces are clearly better than others).  But interfaces aren’t just for computers—every tool or entertainment device has interfaces.  For instance, the size and shape of a hammer or a pistol afford a certain usage by human hands that is very effective, and even comfortable.

But is there a more fundamental, general concept of interface?

First, we can enumerate a few of the more important roles that our common human interfaces can take:

  • Interfaces can be thought of as translators: human-computer interfaces translate a machine language into something humans can deal with, such as text and/or graphics (see the toy sketch after this list).
  • Interfaces can be masks: avatars and augmented reality insert a layer of reality modification between users and worlds.
  • Interfaces can connect different types of substrates, for instance biological to electronic.
  • Interfaces can connect objects of different scales: the interfaces of heavy machinery allow a single human to move massive quantities of material (or, in a somewhat less common example, a human can manipulate specific atoms with the interfaces provided by a scanning tunneling microscope).
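
As a toy sketch of the translator role only (all names here are invented for illustration, not from any real system): the user never touches the machine-level representation, only the translated view.

```python
# Toy illustration of the "interface as translator" role: the machine-level
# representation is raw flags; the interface renders it as human-readable text.
# Class and method names are invented for this example.

class MachineState:
    """Low-level representation a person wouldn't want to read directly."""
    def __init__(self, status_flags: int):
        self.status_flags = status_flags

class HumanInterface:
    """Translates machine state into something a person can deal with."""
    def describe(self, state: MachineState) -> str:
        return "battery low" if state.status_flags & 0b01 else "battery ok"

print(HumanInterface().describe(MachineState(0b01)))  # -> "battery low"
```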

There are other types of interfaces, such as chemical surface boundaries between two phases.  Biology has various kinds of interfaces; computer science has its own kinds of interfaces; and so on.  Basically, whenever two or more objects interact, there is an interface at that interaction.  Some interfaces are natural, and some are designed to make the interaction between the objects effective.  But there doesn’t need to be a third thing that is the interface.  The interface can simply be the transient place at which two or more things intersect.

What is the metaphysical situation for interfaces?  Do interfaces exist as universals?  Are they abstract?  Are they objective or subjective?  Let’s say that I am ontologically committed to the existence of objective interfaces.  These could be concrete, but can an interface in its simplest form be concrete, or must it be abstract?  Perhaps there is a universal interface—a class of which all interfaces are instances.  This would posit that the phenomenon of interfacing is the same at all scales, regardless of the particulars involved in the interfacing.

Now, let’s say that we thought there were real-world instances of the universal interface everywhere.  At what scale would that stop?  Is there some underlying level at which entities no longer interface?

Now, why would I even bother to think about objective abstract interfaces?  Because it’s possible that interfaces, at the simplest conception, are the basic connectors of objects.  If that premise is true, then without the existence of objective interfaces there would not objectively exist anything separate from anything else—or there could be, but those things would effectively be in their own universes because they would never be able to interact.

If objective interfaces do not actually exist in this world, then we have to deal with the concept of interface just as a metaphor.

At the human scale, discussing interfaces seems to embrace an object-oriented point of view, which is basically the natural human point of view.  Humans operate largely by perceiving the world in terms of objects, with agents being a special class of object that operates autonomously.  Other humans are agents, other animals are agents, and anything that appears to move by its own volition is suspect and given at least temporary status as an agent.  But are objects, i.e. particular entities, necessary for the concept of interface?  Perhaps an objective theory of interface would not require objects.  Maybe objects are just slices of the world that are convenient for our minds to process.  Although it seems like we interface with objects, it’s possible that all interfaces operate between folds of the same cloth—some continuity that is not composed of objects (or the world itself is the only object).

Cross-posted with Science 2.0.
