Robot Marathon Blazes New Paths on the Linoleum

Posted in humor, robotics on February 28th, 2011 by Samuel Kenyon

Whereas in America we’ve been wasting our robot races on autonomous cars that can drive on real roads without causing Michael Bay levels of collateral damage, Japan has taken a more subtle approach.

Their “history making” bipedal robot race involves expensive toy robots stumbling through 422 laps of a 100 meter course, which they follow by visually tracking colored tape on the ground (I’m making an assumption there; the robots may actually be even less capable than that).  This is surely one of the most ingenious ways to turn old technology into a major new PR event.

the finish line

I assume they weren't remote controlling these "autonomous" robots during the race.

Unfortunately, I don’t have any video or photos of the 100 meter course; instead I have this:

And the winner is…Robovie PC!  Through some uncanny coincidence, that was operated by the Vstone team, the creators of the race.

Robovie PC

Robovie PC sprints through the miniature finish line. The robots on the sides are merely finish line holder slaves.

The practical uses of this technology are numerous.  For instance, if you happen to have 42.2 km of perfectly level and flat hallways with no obstructions, one of these robots can follow colored tape around all day without a break(down), defending the premises from vile insects and dust bunnies.

photoshop of a cat carrying a small robot

There's no doubt that in the next competition, they will continue to improve their survival capabilities.


Mel Hunter’s Lonely Robot

Posted in culture on February 27th, 2011 by Samuel Kenyon

During my adventures through the mysterious evo-devo circus freakshow known as childhood, I found myself encountering a lot of science fiction stories and art from the 1950s–1970s. Old issues of The Magazine of Fantasy & Science Fiction that I recovered from the dump were just as interesting to my larval mind as pornography.

The one cover that I remember the most was Mel Hunter’s depiction of a retro-futuristic vacuum tube powered robot, sitting alone in a post-apocalyptic world, listening to a vinyl record.  This was one of several covers by Hunter featuring the lonely robot.

May 1960 issue of F&SF

Recently, I saw the painting in real life (unless it was a reproduction?) at Boskone, a science fiction literature convention in Boston.

Photo of Mel Hunter painting at Boskone 48

Some people might assume that the lonely robot had something to do with the apocalypse. However, I interpret it to show the sad fate of a robot more rugged than biological life.

The image reminds me of Ray Bradbury’s short story, “There Will Come Soft Rains.” In that story, a home automation system continues working day after day even though all the humans are gone, like an artificial mega-Jeeves, except without the kind of common sense that would make it realize its owners were dead. One day the house is destroyed by a fire.

Among the ruins, one wall stood alone. Within the wall, a last voice said, over and over again and again, even as the sun rose to shine upon the heaped rubble and steam:

“Today is August 5, 2026, today is August 5, 2026, today is…”


I Will Not Be Told: Stephen Fry’s Speech at Harvard

Posted in culture, transhumanism on February 22nd, 2011 by Samuel Kenyon

I just attended Stephen Fry’s acceptance of the Outstanding Lifetime Achievement Award in Cultural Humanism, given by the Humanist Chaplaincy at Harvard.

Stephen Fry

His speech was quite different from the one he gave for the Intelligence² Debate. The main theme tonight was “I will not be told.”

To be told is to wallow in revealed truths. Bibles and similar religious texts are all about revealed truths which cannot be questioned, and the origins of which require the readers to make many assumptions. And it was even worse in the dark times of religious book control and illiteracy in which you might not even be allowed to read the book—you have to get the mediated verbal account from someone supposedly holier than you.

Discovered truths, on the other hand, are not told. Of course somebody could tell you a discovered truth, but if you don’t trust them you can question it. Discovered truths can be discussed. They are questioned and tested.

Fry suggests humility before facts—reason or sounding reasonable is not enough. Back to question and test. And so on.

Stephen Fry then fumbled through a quick version of history to describe how the Greeks had some free inquiry and attempts to discover truth around 2000 years ago, but that was almost extinguished for 1500 years by the Christians. But not all hope was lost, and then the Enlightenment brought discovered truths back into action. Science kicked into gear, the United States was born, and so on.

Later on, Fry was discussing Oscar Wilde’s adventures—somewhat like the Beatles, Wilde was not well known in England and then burst onto the scene in America in large part just by being an interesting character. When somebody asked him how America, born from the greatest ideals of freedom and reason, could have disintegrated into the Civil War, Wilde responded that it’s because American wallpaper is ugly.

The concept is that violence breaks out when people have no self-worth, which in turn is fostered by ugly artificial habitats.

Stephen Fry described how he used to see posters of Che and Marx in college dorms mixed in with pictures of John Lennon and Jimi Hendrix. People thought that revolutionary politics and rock music could change the world for the better. But they can’t. The posters he prefers to see on people’s dorm walls are of Einstein and Oscar Wilde—the life of the mind instead. Fry then said something about the Oxford way to “play gracefully with ideas.”

Stephen Fry tells us (note this is not verbatim): “It’s not humanists’ job to tell religious people they are wrong. However,” and there is a pause as the crowd laughs, “it is none of their fucking business to impose their revealed truth on the wonderful world of doubt.”

Amidst anecdotes of Oscar Wilde, Fry repeatedly asserted his second theme, which is that humanists should not tell other people how to live. In fact, he accepted the award on the condition that the Humanist Chaplaincy would not try to convert religious people and smugly tell them they are wrong. It’s all about showing vs. telling.

As Fry so splendidly puts it: “You can tickle the minds of others, you can seduce the minds of others, but don’t try to own the minds of others.”


The high point of the question and answer session, which Fry compared to a KGB interrogation, was a serenade by a young lady with a ukulele, in which she offered the homosexual actor her baby-making apparatus in no uncertain terms.  “I have all the tools that you require to breed / So send along your seed.”

Molly the Ukulele Girl

Update: I found the name of the ukulele girl: Molly.  She was also playing humorous songs about Wikipedia and Facebook in the beginning, before the introductions.  Looks like the Stephen Fry song was premeditated:

Cross-posted with Science 2.0.


What are Symbols in AI?

Posted in artificial intelligence on February 22nd, 2011 by Samuel Kenyon

A main underlying philosophy of artificial intelligence and cognitive science is that cognition is computation.  This leads to the notion of symbols within the mind.

There are many paths to explore how the mind works.  One might start from the bottom, as is the case with neuroscience or connectionist AI.  So you can avoid symbols at first.  But once you start poking around the middle and top, symbols abound.

Besides the metaphor of top-down vs. bottom-up, there is also the crude summary of Logical vs. Probabilistic.  Some people have made theories that they think could work at all levels, starting with the connectionist basement and moving all the way up to the tower of human language, for instance Optimality Theory.   I will quote one of the Optimality Theory creators, not because I like the theory (I don’t, at least not yet), but because it’s a good summary of the general problem [1]:

Precise theories of higher cognitive domains like language and reasoning rely crucially on complex symbolic rule systems like those of grammar and logic. According to traditional cognitive science and artificial intelligence, such symbolic systems are the very essence of higher intelligence. Yet intelligence resides in the brain, where computation appears to be numerical, not symbolic; parallel, not serial; quite distributed, not as highly localized as in symbolic systems. Furthermore, when observed carefully, much of human behavior is remarkably sensitive to the detailed statistical properties of experience; hard-edged rule systems seem ill-equipped to handle these subtleties.

Now, when it comes to theorizing, I’m not interested in getting stuck in the wild goose chase for the One True Primitive or Formula.  I’m interested in cognitive architectures that may include any number of different methodologies.  And those different approaches don’t necessarily result in different components or layers.  It’s quite possible that within an architecture like the human mind, one type of structure can emerge from a totally different structure.  But depending on your point of view—or level of detail—you might see one or the other.

At the moment I’m not convinced of any particular definition of mental symbol.  I think that a symbol could in fact be an arbitrary structure, for example an object in a semantic network which has certain attributes.  The sort of symbols one uses in everyday living come into play when one structure is used to represent another structure.  Or, perhaps instead of limiting ourselves to “represent” I should just say “provides an interface.”  One would expect a good symbol-producing interface to be a simplifying one.  As an analogy, you use symbols on computer systems all the time.  One touch of a button on a cell phone activates thousands of lines of code, which may in turn activate other programs and so on.  You don’t need to understand how any of the code works, or how any of the hardware running the code works.  The symbols provide a simple way to access something complex.
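To make the analogy concrete, here is a minimal sketch in Python. All of the class and attribute names here are my own hypothetical choices, not an established design: a symbol is a simple token that provides a simplifying interface to a more complex structure, such as an object in a semantic network.

```python
class Node:
    """An object in a toy semantic network, with attributes and named links."""
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes
        self.links = {}  # relation name -> another Node

    def link(self, relation, other):
        self.links[relation] = other


class Symbol:
    """A simple token that stands for (interfaces to) a complex structure."""
    def __init__(self, label, referent):
        self.label = label        # the simple, easily manipulated part
        self.referent = referent  # the complex structure it gives access to

    def lookup(self, attribute):
        # The user of the symbol never needs to know how the underlying
        # structure works, just as a phone button hides thousands of
        # lines of code.
        return self.referent.attributes.get(attribute)


# Build a small structure...
cat = Node("cat", legs=4, covering="fur")
mammal = Node("mammal", warm_blooded=True)
cat.link("is_a", mammal)

# ...and a symbol that stands for it.
cat_symbol = Symbol("CAT", cat)
print(cat_symbol.lookup("legs"))  # -> 4
```

The point of the sketch is only that the token is cheap to pass around and combine, while the structure behind it can be arbitrarily complex.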

A system of simple symbols that can be easily combined into new forms also enables wonderful things like language.  And the ability to set up signs for representation (semiosis) is perhaps a partial window into how the mind works.

One of my many influences is Society of Mind by Marvin Minsky [2], which is full of theories of these structures that might exist in the information flows of the mind.  However, Society of Mind attempts to describe most structures as agents.  An agent isn’t merely a structure being passed around, but is also actively processing information itself.

Symbols are also important when one is considering if there is a language of thought, and what that might be.  As Minsky wrote:

Language builds things in our minds.  Yet words themselves can’t be the substance of our thoughts.  They have no meanings by themselves; they’re only special sorts of marks or sounds…we must discard the usual view that words denote, or represent, or designate; instead, their function is control: each word makes various agents change what various other agents do.

Or, as Douglas Hofstadter puts it [3]:

Formal tokens such as ‘I’ or “hamburger” are in themselves empty. They do not denote.  Nor can they be made to denote in the full, rich, intuitive sense of the term by having them obey some rules.

Throughout the history of AI, I suspect, people have made intelligent programs and chosen some atomic object type to use for symbols, sometimes even something intrinsic to the programming language they were using.  But simple symbol manipulation doesn’t result in human-like understanding.  Hofstadter, at least in the 1970s and 80s, said that symbols have to be “active” in order to be useful for real understanding.  “Active symbols” are actually agencies which have the emergent property of symbols.  They are decomposable, and their constituent agents are quite stupid compared to the type of cognitive information the symbols are taking part in.  Hofstadter compares these symbols to teams of ants that pass information between teams which no single ant is aware of.  And then there can be hyperteams and hyperhyperteams.
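The ant-team image can be caricatured in code. The sketch below is purely illustrative, and the agents, teams, features, and thresholds are my invention, not Hofstadter’s actual model: no single agent knows anything about “cat,” yet the team’s aggregate activity behaves like a symbol that is either active or not.

```python
import random

class Agent:
    """A stupid worker: it knows one tiny feature and fires unreliably."""
    def __init__(self, feature):
        self.feature = feature

    def fires(self, stimulus):
        # Each individual agent is noisy and unreliable on its own.
        return self.feature in stimulus and random.random() > 0.2


class Team:
    """An agency whose aggregate activity behaves like a symbol."""
    def __init__(self, name, features, threshold=0.5):
        self.name = name
        # Many redundant agents per feature, like many ants per task.
        self.agents = [Agent(f) for f in features for _ in range(20)]
        self.threshold = threshold

    def active(self, stimulus):
        # The "symbol" is on when enough dumb agents fire together;
        # no single agent knows what the team as a whole stands for.
        votes = sum(a.fires(stimulus) for a in self.agents)
        return votes / len(self.agents) >= self.threshold


cat_team = Team("CAT", ["whiskers", "purrs", "fur"])
print(cat_team.active({"whiskers", "purrs", "fur", "tail"}))  # almost always True
print(cat_team.active({"wheels", "metal"}))                   # False
```

In this caricature a hyperteam would simply be a Team whose members are themselves Teams, which is roughly the recursive flavor of Hofstadter’s hyperteams and hyperhyperteams.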

[1] P. Smolensky
[2] M. Minsky, Society of Mind, Simon & Schuster, 1986.
[3] D. Hofstadter, Metamagical Themas, Basic Books, 1985.
