Evaluating Webots

Posted in artificial intelligence, robotics on February 4th, 2014 by Samuel Kenyon

I’m trying to find a better simulator than Breve for robots and 3D physics world creation.

I have been examining the Webots simulation environment. It seems pretty useful, since I can write controllers in C++ and it comes with several robot models out of the box. I also like the scene graph (or “scene tree,” as they call it) approach to building environments; they use VRML97, which is obsolete but at least a well-known standard. I only played with the interface enough to add a sphere and a box, but so far it seems good enough, and a lot easier than building worlds completely programmatically and/or completely from scratch with raw data files. I have made 3D models from scratch with data files in the past, and compared to even a mediocre GUI it was inefficient for everything except tweaking exact numbers.
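For the curious, the scene tree is just a text file under the hood. A minimal sketch of what a world file with a box and a sphere looks like (the header version string and exact fields vary by Webots release, so treat this as illustrative rather than copy-paste ready):

```
#VRML_SIM V7.4 utf8
WorldInfo {
}
Viewpoint {
  position 0 0.5 2
}
Solid {
  translation -0.3 0.1 0
  children [
    Shape { geometry Box { size 0.2 0.2 0.2 } }
  ]
}
Solid {
  translation 0.3 0.1 0
  children [
    Shape { geometry Sphere { radius 0.1 } }
  ]
}
```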

They have some Nao (aka “NAO”) robot models, and it would be awesome to use one for some mental development research. I’m thinking about affordances, and certainly the Nao with its 25 DOF and lots of sensors is more than sufficient for exploring interesting affordances in real-world (or simulated 3D) environments. It may actually be overkill…

simulated Nao looks at a brick box

Not that I have access to a real Nao, although last year I wrote two little test scripts on an actual Nao using Choregraphe, a primarily visual programming tool. Webots can run a Nao server, so you can actually hook Choregraphe up to the Webots simulation (Choregraphe has a sim of its own, but just of the robot model, not of environmental interactions). Unfortunately I couldn’t try this out, as it’s blocked for free users (the screenshot below shows an attempt to use the robot soccer world).

naoqisim denied!

And I just realized that if I write a Webots Nao controller, there’s no documentation or other obvious way that I can see to find out the exact actuator names to pass to wb_robot_get_device(), and the demo doesn’t show all the motors (or maybe they haven’t implemented all the motors?). [Update: I have been informed that you can get the device tags from the Robot Window, which can be made visible via the menu (Robot -> Show Robot Window) or by double-clicking on the robot in the sim view. Not as good as a text list, but at least the info is there.] Making motion files manually would be a pain as well. Maybe I will end up making a simpler robot model from scratch.

The lack of access to the Supervisor is also making me question using this for my non-funded research. I might try running some experiments in it and just see how far I can go without a Supervisor.


Recursion and the Human Mind

Posted in artificial intelligence on December 5th, 2011 by Samuel Kenyon

It’s certainly not new to propose recursion as a key element of the human mind—for instance Douglas Hofstadter has been writing about that since the 1970s.

nested recursion

Michael C. Corballis, a former professor of psychology, came out with a new book this year called The Recursive Mind. It presents his specific theory, which I will attempt to outline here.

The Recursive Mind

As I understand it, his theory is composed of these parts:

  1. The ability of the human mind to generate concepts recursively is what causes the main differences between Homo sapiens and other animals.
  2. A Chomskian internal language is the basis for all external languages and other recursive abilities. (See this blog post by Corballis for a summary of an internal language as a universal grammar).
  3. External languages evolved on top of the recursive abilities primarily for storytelling and social cohesion.
  4. External languages started with gestures, and were most likely followed by mouth-click languages before vocal languages emerged.
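As a toy illustration of the kind of generativity recursion buys (my example, not Corballis’s), a single self-embedding rule can nest a clause of the same type inside itself indefinitely, as in center-embedded sentences:

```python
def center_embed(nouns, verbs):
    """Recursively build a center-embedded sentence: each level nests a
    relative clause of the same shape inside the previous one."""
    if len(nouns) == 1:
        return f"the {nouns[0]} {verbs[0]}"
    inner = center_embed(nouns[1:], verbs[1:])
    return f"the {nouns[0]} that {inner} {verbs[0]}"

# One level of embedding:
print(center_embed(["rat", "cat"], ["died", "bit"]))
# the rat that the cat bit died

# Two levels -- still grammatical, though humans already strain to parse it:
print(center_embed(["rat", "cat", "dog"], ["died", "bit", "chased"]))
# the rat that the cat that the dog chased bit died
```

The rule calls itself, so the grammar imposes no depth limit; the limit lives in the mind (or stack) doing the parsing.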



Hating Technology is Hating Yourself

Posted in artificial intelligence, culture, robotics, transhumanism on November 6th, 2010 by Samuel Kenyon

Kevin Kelly concluded a chapter in his new book What Technology Wants with the declaration that if you hate technology, you basically hate yourself.

photo of art installation of human statue shooting another humanoid statue with the head of a CRT

The rationale is twofold:

1. As many have observed before, technology (and Kelly’s superset, the “technium”) is in many ways the natural successor to biological evolution.  In other words, human change now happens primarily through the various symbiotic, feedback-looped systems that comprise human culture.

2. It all started with biology, but humans throughout their entire history have defined and been defined by their tools and information technologies.  I wrote an essay a few months ago called “What Bruce Campbell Taught Me About Robotics” concerning human co-evolution with tools and the mind’s plastic self-models.  And of course there’s the whole co-evolution with or transition to language-based societies.

So if the premise that human culture is the result of taking the path of technologies is true, then to reject technology as a whole would be to reject human culture as it has always been.  If the premise that our biological framework is the result of a back-and-forth relationship with tools and/or information is also true, then you have another reason to say that hating technology is hating yourself (assuming you are human).

In his book, Kelly argues against the noble savage concept.  Even though there are many useless implementations of technology, the tech that is good is extremely good and all humans adopt them when they can.  Some examples Kelly provides are telephones, antibiotics and other medicines, and…chainsaws.  Low-tech villagers continue to swarm to slums of higher-tech cities, not because they are forced, but because they want their children to have better opportunities.

So is the person who actually hates technology just a straw man?  Certainly people hate certain implementations of technology.  Certainly it is ok, and perhaps needed more than ever, to reject useless technology artifacts.  I think one place where you can definitely find some technology haters is among those afraid of obviously transformative technologies, in other words the technologies that purposely and radically alter humans.  And they are only “transformative” in an anachronistic sense: e.g., if you compare two different time periods in history, you can see drastic differences.

Also, although it is perhaps not outright hate in most cases, many people have been infected by the meme that artificial creatures such as robots and/or super-smart computers (and/or super-smart networks of computers) pose competition to humans as they exist now.  This meme is perhaps more dangerous than any computer could be, because it tries to divorce humans from the technium.

Image credit: whokilledbambi

Cross-posted with Science 2.0.


What Bruce Campbell Taught Me About Robotics

Posted in artificial intelligence, robotics on March 16th, 2010 by Samuel Kenyon

One of the films which inspired me as a kid was Moontrap, the plot of which has something to do with Bruce Campbell and his comrade Walter Koenig bringing an alien seed back to earth.


Nothing ever happens on the moon

This alien (re)builds itself out of various biological and electromechanical parts.

The Moontrap robot

At one point the robot had a skillsaw end effector, not unlike the robot in this exquisite depiction of saw-hand prowess:

Cyborg Justice (Sega Genesis, 1993)

In that game—which I also played as a child—you could mix-and-match legs, torsos, and arms to create robots.

The later movie Virus had a similar creature to the one in Moontrap, and if I remember correctly, the alien robots in the movie *Batteries Not Included could modify and reproduce themselves from random household junk.

The ability for a creature to compose and extend itself is quite fascinating. Not only can it figure out what to do with the objects it happens to encounter, but it can adjust its mental models in order to control these new extensions.

I think that building yourself out of parts is only a difference in degree from tool use.


During the long watches of the night the solitary sailor begins to feel that the boat is an extension of himself, moving to the same rhythms toward a common goal.  The violinist, wrapped in the stream of sound she helps to create, feels as if she is part of the “harmony of the spheres.”  The climber, focusing all her attention on the small irregularities of the rock wall that will have to support her weight safely, speaks of the sense of kinship that develops between fingers and rock, between the frail body and the context of stone, sky, and wind. —Csikszentmihalyi [1]

Human tool use

Humans are perhaps the most adaptable of animals on earth (leave a comment if you know of a more adaptable organism).

Our action-perception system may have morphology-specific programming. But it’s not so specific that we cannot add or subtract from it. For instance, anything you hold in your hand becomes essentially an extension of your arm. Likewise, you can adapt to a modification in which you completely replace your hand with a different type of end effector.
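One way to make that concrete: in forward kinematics, an arm is just a chain of links, and a rigidly gripped tool is simply one more link appended to the chain; the same bookkeeping handles both. A toy planar sketch, with made-up link lengths rather than a model of any real arm:

```python
import math

def endpoint(link_lengths, joint_angles):
    """Planar forward kinematics: walk down the chain, accumulating each
    joint angle and extending the tip by each link. A held tool is just
    one more (length, angle) entry -- nothing else changes."""
    x = y = theta = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

# Bare arm: upper arm (0.30 m) + forearm (0.25 m), held straight out.
print(endpoint([0.30, 0.25], [0.0, 0.0]))             # approx (0.55, 0.0)

# Same arm gripping a 0.20 m stick: append one link, reuse the same code.
print(endpoint([0.30, 0.25, 0.20], [0.0, 0.0, 0.0]))  # approx (0.75, 0.0)
```

From the controller’s point of view, re-planning with the stick only means updating the last link, which is one way to read the claim that held objects become part of the body schema.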

Alternate human end effector

You might argue that holding something does not really extend your arm. After all, you aren’t hooking it directly to your nervous system. But the brain-environment system does treat external objects as part of the body.

We have always been coupled with technology. We have always been prosthetic bodies.

Something unique about hands is that they may have evolved due to tool use. Bipedalism allowed this to happen: about 5 million years after bipedalism, tool use and a brain expansion appeared [2]. It’s possible that the Homo sapiens brain was the result of co-evolution with tools.

Oldowan Handaxe (credit: University of Missouri)

The body itself is part of the environment, albeit a special one as far as the brain is concerned. The brain has no choice but to have this willy-nilly freedom of body size changes—or else how would you be able to grow from a tiny baby to the full size lad/gal/transgender you are today?

An example of body-environment overlap is the cutaneous rabbit hopping out of the body experiment [3].

The white cutaneous rabbit

The original cutaneous (==”of the skin”) rabbit experiment demonstrated a somatosensory illusion: your body map (in the primary somatosensory cortex) will cause you to report tapping (the “rabbit” hopping) on your skin in between the places where the stimulus was actually applied. The out of the body version extends this illusion onto an external object held by your body (click on figure below for more info).

Hopping out of the body (credit: Miyazaki et al.)

Some other relevant body map illusions are the extending nose illusion, the rubber hand illusion, and the face illusion.

Get Your Embody Beat

Metzinger’s self-model theory of subjectivity [4] defines three levels of embodiment:

First-order: Purely reflexive with no self-representation. Most uses of subsumption architecture would be categorized as such.

Second-order: Uses self-representation, which affects its behavior.

Third-order: In addition to self-representation, “you consciously experience yourself as embodied, [and] that you possess [a] phenomenal self-model (PSM)”. Humans, when awake, fall into this category.



Metzinger refers to the famous starfish robot as an example of a “second-order embodiment” self-model implementation. The starfish robot develops its walk with a dynamic internal self model, and can also adapt to body subtractions (e.g. via damage).
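To make the first-order case concrete, here is a toy subsumption-style control step (my sketch, not Metzinger’s or Brooks’s actual code): prioritized condition-action layers, with no self-representation anywhere in the loop:

```python
def subsumption_step(sensors, layers):
    """First-order embodiment in miniature: purely reflexive control.
    The highest-priority layer whose condition fires wins; there is no
    model of the robot's own body anywhere."""
    for condition, action in layers:  # ordered highest priority first
        if condition(sensors):
            return action(sensors)
    return "idle"

layers = [
    (lambda s: s["bumper"], lambda s: "back_up"),    # collision reflex
    (lambda s: s["range"] < 0.3, lambda s: "turn"),  # obstacle avoidance
    (lambda s: True, lambda s: "cruise"),            # default wander
]

print(subsumption_step({"bumper": False, "range": 1.0}, layers))  # cruise
print(subsumption_step({"bumper": False, "range": 0.1}, layers))  # turn
```

A second-order system would, by contrast, consult and update an internal body model (as the starfish robot does) before choosing an action.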

I don’t see why we can’t develop robots that learn how to use tools and even adapt them into their bodies. The natural way may not be the only way, but it’s at least a place to start when making artificial intelligence. AI has an advantage, though, even when using naturally inspired methods: researchers can speed up phylogenetic development.

What I mean is that I could adapt a robot to a range of environments through evolution in simulations running much faster than real time. Then I could deploy that robot in real life, where it continues its learning, having already learned via evolution the important, general stuff that keeps it alive.
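A minimal sketch of that speed-up, with a one-parameter “controller” and a toy fitness function standing in for the simulated environment (everything here is made up for illustration; a real setup would evaluate fitness by running the physics simulator for each candidate):

```python
import random

def evolve(fitness, pop_size=30, generations=50, sigma=0.1, seed=0):
    """Tiny evolutionary loop: keep the fitter half each generation and
    refill the population with mutated copies. In simulation, each
    'generation' of 30 lifetimes can run far faster than real time."""
    rng = random.Random(seed)
    pop = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # fittest first
        parents = pop[: pop_size // 2]        # truncation selection
        pop = parents + [p + rng.gauss(0.0, sigma) for p in parents]
    return max(pop, key=fitness)

# Toy 'environment': the controller gain that survives best is 0.6.
best = evolve(lambda gain: -(gain - 0.6) ** 2)
```

The result of the fast offline phase then seeds the slower online learning in the real world.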

Body Mods

The ancient art of cyborg hands

This natural adaptability that you have as part of your interaction with the world could also help you modify yourself with far stranger extensions than chainsaws and cyborg hands.

Well-designed cyborg parts will exploit this natural adaptability to modify your morphology, if you so desire. Perhaps the same scheme could work even with a complete body replacement, or a mind-in-computer scenario in which you may have multiple physical bodies to choose from.



[1] M. Csikszentmihalyi, Flow: The Psychology of Optimal Experience. New York: Harper Perennial, 1990.

[2] R. Leakey, The Origin of Humankind. New York: BasicBooks, 1994.

[3] M. Miyazaki, M. Hirashima, D. Nozaki, “The ‘Cutaneous Rabbit’ Hopping out of the Body.” The Journal of Neuroscience, February 3, 2010, 30(5):1856-1860; doi:10.1523/JNEUROSCI.3887-09.2010. http://www.jneurosci.org/cgi/content/full/30/5/1856

[4] T. Metzinger, “Self models.” Scholarpedia, 2007, 2(10):4174. http://www.scholarpedia.org/article/Self_models
