Robot Marathon Blazes New Paths on the Linoleum

Posted in humor, robotics on February 28th, 2011 by Samuel Kenyon

Whereas in America we’ve been wasting our robot races on autonomous cars that can drive on real roads without causing Michael Bay levels of collateral damage, Japan has taken a more subtle approach.

Their “history making” bipedal robot race involves expensive toy robots stumbling through 422 laps of a 100-meter course, which they follow by visually tracking colored tape on the ground (I’m making an assumption there–the robots may actually be even less capable than that).  This is surely one of the most ingenious ways to turn old technology into a major new PR event.
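
If my tape-tracking guess is right, the perception side of this “marathon” is about as simple as computer vision gets. Here is a minimal sketch of that kind of line follower (my speculation, not Vstone’s actual code), using OpenCV to find the tape and produce a steering offset:

import cv2
import numpy as np

def tape_steering_offset(frame_bgr, lower_hsv=(40, 80, 80), upper_hsv=(80, 255, 255)):
    """Find colored tape near the robot's feet and return a steering offset
    in [-1, 1] (negative means the tape is off to the left). The HSV bounds
    are a guess for green tape; real lighting would need calibration."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    h, w = mask.shape
    strip = mask[3 * h // 4:, :]           # only look at the bottom quarter of the image
    m = cv2.moments(strip)
    if m["m00"] == 0:
        return None                        # lost the tape; commence stumbling
    cx = m["m10"] / m["m00"]               # centroid column of the tape pixels
    return (cx - w / 2) / (w / 2)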

the finish line

I assume they weren't remote controlling these "autonomous" robots during the race.

Unfortunately, I don’t have any video or photos of the 100 meter course; instead I have this:

And the winner is…Robovie PC!  Through some uncanny coincidence, it was operated by the Vstone team, the creators of the race.

Robovie PC sprints through the miniature finish line. The robots on the sides are merely finish line holder slaves.

The practical uses of this technology are numerous.  For instance, if you happen to have 42.2 km of perfectly level and flat hallways with no obstructions, one of these robots can follow colored tape around all day without a break(down), defending the premises from vile insects and dust bunnies.

photoshop of a cat carrying a small robot

There's no doubt that in the next competition, they will continue to improve their survival capabilities.


Mel Hunter’s Lonely Robot

Posted in culture on February 27th, 2011 by Samuel Kenyon

During my adventures through the mysterious evo-devo circus freakshow known as childhood, I found myself encountering a lot of science fiction stories and art from the 1950s through the 1970s. Old issues of The Magazine of Fantasy & Science Fiction that I recovered from the dump were just as interesting to my larval mind as pornography.

The one cover that I remember the most was Mel Hunter’s depiction of a retro-futuristic, vacuum-tube-powered robot, sitting alone in a post-apocalyptic world, listening to a vinyl record.  This was one of several covers by Hunter featuring the lonely robot.

May 1960 issue of F&SF

Recently, I saw the painting in real life (unless it was a reproduction?) at Boskone, a science fiction literature convention in Boston.

Photo of Mel Hunter painting at Boskone 48

Some people might assume that the lonely robot had something to do with the apocalypse. However, I interpret it to show the sad fate of a robot more rugged than biological life.

The image reminds me of Ray Bradbury’s short story, “There Will Come Soft Rains.” In that story, a home automation system continues working day after day even though all the humans are gone, like an artificial mega-Jeeves without the kind of common sense that would make it realize its owners were dead. One day the house is destroyed by a fire.

Among the ruins, one wall stood alone. Within the wall, a last voice said, over and over again and again, even as the sun rose to shine upon the heaped rubble and steam:

“Today is August 5, 2026, today is August 5, 2026, today is…”


I Will Not Be Told: Stephen Fry’s Speech at Harvard

Posted in culture on February 22nd, 2011 by Samuel Kenyon

I just attended Stephen Fry’s acceptance of the Outstanding Lifetime Achievement Award in Cultural Humanism, given by the Humanist Chaplaincy at Harvard.

Stephen Fry

His speech was quite different from the one he gave for the Intelligence² Debate. The main theme tonight was “I will not be told.”

To be told is to wallow in revealed truths. Bibles and similar religious texts are all about revealed truths which cannot be questioned, and whose origins require the readers to make many assumptions. And it was even worse in the dark times of religious book control and illiteracy, when you might not even be allowed to read the book—you had to get the mediated verbal account from someone supposedly holier than you.

Discovered truths, on the other hand, are not told. Of course somebody could tell you a discovered truth, but if you don’t trust them you can question it. Discovered truths can be discussed. They are questioned and tested.

Fry suggests humility before facts: reason, or merely sounding reasonable, is not enough. Back to question and test. And so on.

Stephen Fry then fumbled through a quick version of history to describe how the Greeks had some free inquiry and attempts to discover truth around 2000 years ago, but that was almost extinguished for 1500 years by the Christians. But not all hope was lost, and then the Enlightenment brought discovered truths back into action. Science kicked into gear, the United States was born, and so on.

***

Later on, Fry was discussing Oscar Wilde’s adventures—somewhat like the Beatles, Wilde was not well known in England and then burst onto the scene in America in large part just by being an interesting character. When somebody asked him how America, born from the greatest ideals of freedom and reason, could have disintegrated into the Civil War, Wilde responded that it’s because American wallpaper is ugly.

The concept is that violence breaks out when people have no self-worth, which in turn is fostered by ugly artificial habitats.

***

Stephen Fry says he used to see posters of Che and Marx in college dorms, mixed in with pictures of John Lennon and Jimi Hendrix. People thought that revolutionary politics and rock music could change the world for the better. But they can’t. The posters he prefers to see on people’s dorm walls are of Einstein and Oscar Wilde—the life of the mind instead. Fry then said something about the Oxford way to “play gracefully with ideas.”

***

Stephen Fry tells us (note this is not verbatim): “It’s not humanists’ job to tell religious people they are wrong. However,” and there is a pause as the crowd laughs, “it is none of their fucking business to impose their revealed truth on the wonderful world of doubt.”

Amidst anecdotes of Oscar Wilde, Fry repeatedly asserted his second theme, which is that humanists should not tell other people how to live. In fact, he accepted the award on the condition that the Humanist Chaplaincy would not try to convert religious people and smugly tell them they are wrong. It’s all about showing vs. telling.

As Fry so splendidly puts it: “You can tickle the minds of others, you can seduce the minds of others, but don’t try to own the minds of others.”

Epilogue

The high point of the question and answer session, which Fry compared to a KGB interrogation, was a serenade by a young lady with a ukulele, in which she offered the homosexual actor her baby-making apparatus in no uncertain terms.  “I have all the tools that you require to breed / So send along your seed.”

Molly the Ukulele Girl

Update: I found the name of the ukulele girl: Molly.  She was also playing humorous songs about Wikipedia and Facebook before the introductions.  Looks like the Stephen Fry song was premeditated.

Cross-posted with Science 2.0.


What are Symbols in AI?

Posted in artificial intelligence on February 22nd, 2011 by Samuel Kenyon

A main underlying philosophy of artificial intelligence and cognitive science is that cognition is computation.  This leads to the notion of symbols within the mind.

There are many paths to explore how the mind works.  One might start from the bottom, as is the case with neuroscience or connectionist AI.  So you can avoid symbols at first.  But once you start poking around the middle and top, symbols abound.

Besides the metaphor of top-down vs. bottom-up, there is also the crude summary of Logical vs. Probabilistic.  Some people have made theories that they think could work at all levels, starting with the connectionist basement and moving all the way up to the tower of human language, for instance Optimality Theory.   I will quote one of the Optimality Theory creators, not because I like the theory (I don’t, at least not yet), but because it’s a good summary of the general problem [1]:

Precise theories of higher cognitive domains like language and reasoning rely crucially on complex symbolic rule systems like those of grammar and logic. According to traditional cognitive science and artificial intelligence, such symbolic systems are the very essence of higher intelligence. Yet intelligence resides in the brain, where computation appears to be numerical, not symbolic; parallel, not serial; quite distributed, not as highly localized as in symbolic systems. Furthermore, when observed carefully, much of human behavior is remarkably sensitive to the detailed statistical properties of experience; hard-edged rule systems seem ill-equipped to handle these subtleties.
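
To make the “complex symbolic rule systems” side of that quote concrete: in Optimality Theory, candidate outputs compete against a strict ranking of violable constraints, and the winner is the candidate whose violation profile is lexicographically smallest. Here is a toy sketch of that evaluation (my illustration of the idea, not Smolensky’s formalism, and the constraints are hypothetical):

def ot_winner(candidates, ranked_constraints):
    """candidates: list of candidate surface forms.
    ranked_constraints: list of functions, highest-ranked first, each
    returning a violation count for a candidate."""
    def profile(cand):
        return tuple(c(cand) for c in ranked_constraints)
    return min(candidates, key=profile)

# Hypothetical grammar: "no coda consonant" outranks "don't delete segments."
VOWELS = tuple("aeiou")
no_coda = lambda form: 0 if form.endswith(VOWELS) else 1
faithfulness = lambda form: abs(len(form) - len("pat"))  # deletions from input /pat/

print(ot_winner(["pat", "pa"], [no_coda, faithfulness]))  # -> "pa"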

Now, when it comes to theorizing, I’m not interested in getting stuck in the wild goose chase for the One True Primitive or Formula.  I’m interested in cognitive architectures that may include any number of different methodologies.  And those different approaches don’t necessarily result in different components or layers.  It’s quite possible that within an architecture like the human mind, one type of structure can emerge from a totally different structure.  But depending on your point of view—or level of detail—you might see one or the other.

At the moment I’m not convinced of any particular definition of mental symbol.  I think that a symbol could in fact be an arbitrary structure, for example an object in a semantic network which has certain attributes.  The sort of symbols one uses in everyday living come into play when one structure is used to represent another structure.  Or, perhaps instead of limiting ourselves to “represent,” I should just say “provides an interface.”  One would expect a good symbol-producing interface to be a simplifying one.  As an analogy, you use symbols on computer systems all the time.  One touch of a button on a cell phone activates thousands of lines of code, which may in turn activate other programs and so on.  You don’t need to understand how any of the code works, or how any of the hardware running the code works.  The symbols provide a simple way to access something complex.
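
That “simplifying interface” notion is easy to sketch in code. In this toy version (all the names are my inventions for illustration), a symbol is just a node with attributes, and binding it to a complex structure gives you the one-touch phone button from the analogy above:

class Symbol:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes   # arbitrary structure, semantic-network style
        self._referent = None          # the complex thing this symbol fronts

    def bind(self, referent):
        """Point the symbol at some complex structure or process."""
        self._referent = referent

    def activate(self, *args, **kwargs):
        """The simple interface: one call, all the complexity hidden behind it."""
        if callable(self._referent):
            return self._referent(*args, **kwargs)
        return self._referent

# The symbol "call_home" hides the whole telephony stack behind one button.
def telephony_stack(number):
    return "dialing %s via thousands of lines of hidden code" % number

call_home = Symbol("call_home", kind="action")
call_home.bind(telephony_stack)
print(call_home.activate("555-0100"))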

A system of simple symbols that can be easily combined into new forms also enables wonderful things like language.  And the ability to set up signs for representation (semiosis) is perhaps a partial window into how the mind works.

One of my many influences is Society of Mind by Marvin Minsky [2], which is full of theories of these structures that might exist in the information flows of the mind.  However, Society of Mind attempts to describe most structures as agents.  An agent isn’t merely a structure being passed around, but is also actively processing information itself.

Symbols are also important when one is considering if there is a language of thought, and what that might be.  As Minsky wrote:

Language builds things in our minds.  Yet words themselves can’t be the substance of our thoughts.  They have no meanings by themselves; they’re only special sorts of marks or sounds…we must discard the usual view that words denote, or represent, or designate; instead, their function is control: each word makes various agents change what various other agents do.

Or, as Douglas Hofstadter puts it [3]:

Formal tokens such as ‘I’ or “hamburger” are in themselves empty. They do not denote.  Nor can they be made to denote in the full, rich, intuitive sense of the term by having them obey some rules.

Throughout the history of AI, I suspect, people have made intelligent programs and chosen some atomic object type to use for symbols, sometimes even something intrinsic to the programming language they were using.  But simple symbol manipulation doesn’t result in human-like understanding.  Hofstadter, at least in the 1970s and 80s, said that symbols have to be “active” in order to be useful for real understanding.  “Active symbols” are actually agencies which have the emergent property of symbols.  They are decomposable, and their constituent agents are quite stupid compared to the type of cognitive information the symbols are taking part in.  Hofstadter compares these symbols to teams of ants that pass information between teams which no single ant is aware of.  And then there can be hyperteams and hyperhyperteams.
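
Here is one way to picture an active symbol in code. This is my own toy rendering of the ant-team metaphor, not anything from Hofstadter: the “symbol” has no token at all, only an activation level that emerges from a team of agents, each of which knows just one dumb feature:

import random

class Agent:
    def __init__(self, feature):
        self.feature = feature  # the one tiny thing this agent can detect

    def vote(self, percept):
        # Noisy and local, unaware of any larger "symbol" it takes part in.
        return self.feature in percept and random.random() > 0.1

class ActiveSymbol:
    def __init__(self, name, features):
        self.name = name
        self.team = [Agent(f) for f in features]

    def activation(self, percept):
        """The symbol 'lights up' only as an emergent property of the team."""
        votes = sum(agent.vote(percept) for agent in self.team)
        return votes / float(len(self.team))

hamburger = ActiveSymbol("hamburger", ["bun", "patty", "cheese", "pickle"])
print(hamburger.activation({"bun", "patty", "cheese"}))  # roughly 0.75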

References
[1] P. Smolensky, http://web.jhu.edu/cogsci/people/faculty/Smolensky/
[2] M. Minsky, Society of Mind, Simon & Schuster, 1986.
[3] D. Hofstadter, Metamagical Themas, Basic Books, 1985.


5 Important Themes in Science Fiction Cinema, or Cultivating Bad Taste

Posted in culture, humor on February 21st, 2011 by Samuel Kenyon

Yesterday I attended the 48th episode of Boskone, a science fiction literature convention held in Boston.  I found that Boskone was not just about books, however; it illuminated me with discussion panels such as “The Five Definitive Criteria By Which SF Cinema Is to Be Judged.”

Robot Monster

Warning: This blog post is about to get silly.

The panel consisted of Esther Friesner, Craig Shaw Gardner (lord of obscure SF movies), Ginjer Buchanan, and Bruce Coville.

"Five Definitive Criteria..." panel @ Boskone 48

They considered science fiction writer John C. Wright’s criteria:

1. Is there a hot babe in a skintight and/or revealing future-suit?

Barbarella

2. Is there a gorilla?

Bride of the Gorilla

3. Is there a robot?

The Gunslinger (Yul Brynner) from Westworld

4. Does any character have Way Cool mind powers?

Big Trouble in Little China

5. Does a planet get blown up?

exploding planet

This shit just got real.

The gorilla requirement forced a large part of the discussion into the realm of cheap B movies such as Rock ‘N’ Roll Wrestling Women Vs. the Aztec Ape. In fact, the only two non-B movies featuring gorillas I can think of off the top of my head are Congo and Mighty Joe Young.

Of course, if you stretch the definition of gorilla to include other kinds of apes, you can start considering the Planet of the Apes movies and 2001: A Space Odyssey. But even those don’t meet all the criteria. Many movies get 4/5, for instance Star Trek (2009) and Forbidden Planet.

Forbidden Planet

The most obvious movie that can meet all five criteria is Star Wars: Episode IV, if “gorillas” is stretched to include Wookiees.

I pointed out to the panel that if we include TV series, then Aqua Teen Hunger Force has definitely met all five criteria. I didn’t mention the ATHF movie, Aqua Teen Hunger Force Colon Movie Film for Theaters, because I don’t think it featured any planets being blown up or a gorilla.

Aqua Teen Hunger Force Colon Movie Film for Theaters

Hopefully they will rectify that in the ATHF sequel Death Fighter, planned for release in summer 2012. And in case you were wondering, rumors have it that Bruce Campbell will return to voice Chicken Bittle. Thus, once again I have an excuse to end a blog post with that grand sci-fi thespian.

Bruce Campbells


When Mainstream Attacks: Robot Tropes That Never Die

Posted in culture, humor, transhumanism on February 17th, 2011 by Samuel Kenyon

Science comedian Brian Malow has made a video containing neither comedy nor science:


When Robots Attack! Should We Fear a Singularity?

And yes, I realize I shouldn’t have even bothered to watch it once I realized it was for a mainstream news outlet, but several people in my Twitter lists were tweeting it.

Unfortunately, this video turned out not to be for nerds or anyone who has ever thought about future robots or the Singularity.  This video is for mainstream sheep.  The only glimmer of hope was when he started pursuing the thread of asking why humans have this tendency to punish themselves in robot stories with a father figure or in the manner of Frankenstein.  After a couple seconds of that we’re dropped back into cliché city with “robot uprisings.”

The Roomba is mentioned—and then—holy shit, iRobot makes military robots too!  OMG!  Wait…everybody knows that already.  Big deal.  I guess Time readers/watchers are really behind the…times.  And sure, I’m not being fair—Time readers may not have heard of every robot company, after all.  Thank goodness this video shows Big Dog and Robonaut, two unrelated robots made by other companies, wedged in between the iRobot clips while Malow lobs the old joke at us that the cleaning robots will decide to kill humans.

Sure, it’s supposed to be funny.  But it’s not, because it’s unoriginal and out of date and/or not real enough (some humor is effective because it’s so close to the truth).  As William Zinsser said of humor writers:

They’re not just fooling around.  They are as serious in purpose as Hemingway or Faulkner—in fact, a national asset in forcing the country to see itself clearly.

Occasionally I do see a humor piece on the web that achieves this, sometimes even from big places like Cracked.com or The Onion.

Partly, it’s just a matter of taste.  Surely some people found Malow’s robot/singularity video funny; after all, millions of people out there paid money to see Meet the Fockers and Little Fockers.  Millions of people…laughing when they’re told to at tired jokes and clichés.

Of course, maybe it’s too difficult to be funny with robots—you have to be creative and you’re not sure what your target audience will grok.  But, please, if you’re going to make yet another joke about the “robot uprising,” at least make it a new joke.

If you think I’m biased against people making fun of robots or my company, think again: The Daily Show beat Malow to the punch and made fun of iRobot in 2009 (“Roombas of Doom”), and it was much funnier than Malow’s attempt, although still very far removed from reality:

So why do I even bother ranting about mainstream tropes and lack of creativity?  Well, the problem is it’s infecting even those not in the mainstream.  Almost every person, even if they are scientists or engineers, seems to be obligated to mention AI overlords and robot uprisings as if there are no possible other hooks available.  Every single military robot related article I have seen on the Internet mentions Terminator.  It’s as if the bulk of our culture has been reduced to a mere handful of common concepts, and more and more people are being sucked into this pit of mental inbreeding.


Cross-posted with Science 2.0.


Music and Machines: Highlights from the MIT Media Lab

Posted in culture, robotics on February 15th, 2011 by Samuel Kenyon

I recently attended “Music | Machines: 50 Years of Music and Technology @ MIT,” part of MIT’s ongoing Festival of Art + Science + Technology (FAST).

One of the most interesting demonstrations was the iPhone Guitar by Rob Morris of the Media Lab. Basically, he makes use of the iPhone’s accelerometer as an input for special effects.

iPhone Guitar by Rob Morris

The iPhone is attached to the guitar, so that certain gestural movements of the guitar in space–especially those that happen during an emotional performance–are detected and used to modulate the sound. The touch screen of the iPhone also comes in handy as an accessible on-guitar interface for selecting effects and inputting other variables.
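
Here is a guess at the plumbing behind that kind of effect (the parameter names and ranges are mine, not Rob Morris’s): map one axis of acceleration to an effect parameter, with smoothing so that ordinary hand tremor doesn’t wobble the sound.

def make_tilt_to_wah(smoothing=0.9, min_freq=400.0, max_freq=2000.0):
    """Return a function mapping accelerometer readings to a wah filter
    center frequency in Hz, with exponential smoothing."""
    state = {"level": 0.0}

    def update(accel_y):
        # accel_y: one accelerometer axis in g, roughly -1..1 as the neck tips.
        target = max(0.0, min(1.0, (accel_y + 1.0) / 2.0))  # normalize to 0..1
        state["level"] = smoothing * state["level"] + (1 - smoothing) * target
        return min_freq + state["level"] * (max_freq - min_freq)

    return update

tilt_to_wah = make_tilt_to_wah()
for reading in (0.0, 0.5, 1.0, 1.0):
    print(round(tilt_to_wah(reading)))  # the frequency creeps up as the neck lifts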

The Muse

Digital music is no longer a new phenomenon; in fact, it’s downright ancient when you consider that one of the first digital music contraptions was made in 1972. The Triadex Muse is an algorithmic music generator using digital logic, and was designed by Edward Fredkin and Marvin Minsky at MIT Lincoln Laboratory.
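
For flavor, here is a sketch in the Muse’s spirit (not its actual circuit): pure digital logic deterministically spits out a note sequence, and a tiny re-patching of the feedback taps yields a completely different “theme.”

def muse_like_sequence(seed=0b101101, taps=(5, 2), length=16):
    """Clock a 6-bit linear feedback shift register and map the low
    3 bits of its state onto a major scale."""
    scale = ["C", "D", "E", "F", "G", "A", "B", "C'"]
    state, notes = seed, []
    for _ in range(length):
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1   # XOR the tapped bits
        state = ((state << 1) | feedback) & 0b111111
        notes.append(scale[state & 0b111])
    return notes

print(muse_like_sequence())              # one deterministic "melody"
print(muse_like_sequence(taps=(4, 0)))   # re-patch the logic, get a new one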

Music, Mind and Meaning

Marvin Minsky at MIT Media Lab, Feb 5, 2011

Speaking of Minsky, he discussed “Music, Mind and Meaning” with Teresa Marrin, Mary Farbood and Mike Hawley. Amongst the anecdotes Minsky mentioned an old concept of goals.

One of the ways a human mind might achieve goals is by reducing the difference between what it has and what it wants. Music may utilize some of the same mental components–most music chops time into equal intervals with equal substructures. These chopped experience windows can be compared, possibly in the same way that you can compare what you have with what you want.
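
A literal-minded sketch of that windows idea (my own illustration of the anecdote, not Minsky’s model): slice a melody into equal-length windows and measure the difference between each window and the previous one, the way a difference-reducing goal system compares what it has with what it wants.

def window_differences(pitches, window=4):
    """Chop a pitch sequence (MIDI note numbers) into equal windows and
    return the mean absolute difference between adjacent windows."""
    windows = [pitches[i:i + window]
               for i in range(0, len(pitches) - window + 1, window)]
    return [sum(abs(a - b) for a, b in zip(prev, cur)) / float(window)
            for prev, cur in zip(windows, windows[1:])]

# Two repeated phrases, then a variation: the difference spikes at the change.
melody = [60, 62, 64, 65, 60, 62, 64, 65, 67, 69, 71, 72]
print(window_differences(melody))  # -> [0.0, 7.0]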

Excerpts from the Concert

Computer-based production is normal nowadays. So how would a computer- and electronics-oriented concert be special? Well, Todd Machover of the Media Lab was able to make it special by assembling musicians that make some very unusual sounds and abnormal compositions. They all involve computers and/or electronics, but in innovative ways…and through live performances.

The concert began with a 1976 composition by Barry Vercoe called Synapse for Viola and Computer, an early work from MIT’s Experimental Music Studio. As a restaging of the 1970s performance, the digital accompaniment is inflexible, so it was up to the human soloist, Marcus Thompson, to maintain sync and “express himself within the confines.”

Vercoe – Synapse

Synapse was followed by Synaptogenesis, in which Richard Boulanger performed by triggering sound clips and transformations using a Nintendo WiiMote and a Novation Launchpad.

Boulanger – Synaptogenesis

Programmable drum machines have been around since 1972, but what is rare is to see the machine actuate physical percussion hardware. One such robotic instrument is the Heliphon, originally made by Leila Hasan and Giles Hall, and later redesigned by Bill Tremblay and Andy Cavatorta.

Todd Reynolds, Heliphon the robot, and Evan Ziporyn performing at the MIT Media Lab

The sound from this double helix metallophone is produced via solenoids hammering the metal keys. It also has lights hooked in to give a visual indication of which keys are active.

Heliphon and humans Todd Reynolds (violin) and Evan Ziporyn (clarinet) performed Ziporyn’s Belle Labs – Parts 1 & 3.

Ziporyn, Reynolds, Heliphon – Belle Labs Parts 1 and 3

Heliphon is one of various robotic instruments commissioned by Ensemble Robot, a nonprofit corporation based in Boston, MA. Ensemble Robot also made WhirlyBot, which looks like a turnstile but sounds like a chorus of human-like voices, and Bot(i)Cello, which appears to be a cross between a construction tool and a stringed instrument.

The Future of the Underground

If you’re interested in hearing more electronic music, there is always new stuff (or remixes of old stuff) being made, far below the radar of the mainstream.  You can hear some of it on the web, but being at a live performance or DJ set is a different experience, especially when the DJ modifies the music on the fly.  There are some new tools to enable this, for example, here is DJ/producer Encati demonstrating a Kinect wobble controller for dubstep mutations:

What I would like to see more of are environmental actuations triggered by music, beyond just flashing lights. We have autogenerated visualizers, and we can use MIDI to control lights (and fire cannons), but what about having a room really transform automatically based on the music? I’m talking about dynamic 2D and 3D displays everywhere, autonomous mobile furniture, materials changing shape and color, and so on.
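
The MIDI-to-lights part is already routine. Here is a minimal sketch using the mido library, assuming a lighting rig (or MIDI-to-DMX bridge) that listens for control-change messages; the port name and controller number are placeholders for whatever your gear actually expects.

import mido

def pulse_lights_on_beats(beat_strengths, port_name="Lighting Bridge"):
    """Send one control-change message per beat, scaling 0.0-1.0
    beat strength to the 0-127 MIDI value range."""
    with mido.open_output(port_name) as out:
        for strength in beat_strengths:
            value = int(max(0.0, min(1.0, strength)) * 127)
            # CC 7 stands in here for "master brightness" on channel 1.
            out.send(mido.Message("control_change", channel=0, control=7, value=value))

pulse_lights_on_beats([1.0, 0.2, 0.6, 0.2])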


Image credits:

4. MIT
Others by the author.


Cross-posted with H+ Magazine.


Fake Love with Robots

Posted in interaction design, robotics on February 7th, 2011 by Samuel Kenyon

I noticed today that Kyle Munkittrick posted about Sherry Turkle’s concerns about people having emotional attachments to machines (The Turkle Test).

Love at first sight?

Turkle, who’s been at MIT for a long time, is not against machines or emotional machines. She’s skeptical of taking advantage of the human tendency to be social and form emotional attachments by using machines which merely pretend to be social or pretend to have other emotional capabilities.

As Kyle says:

Yet these lovable mechanoids are not what Turkle is critiquing. Turkle is no Luddite, and does not strike me as a speciesist. What Turkle is critiquing is contentless performed emotion. Robots like Kisemet and Cog are representative of a group of robots where the brains are second to bonding. Humans have evolved to react to subtle emotional cues that allow us to recognize other minds, other persons. Kisemet and Cog have rather rudimentary A.I., but very advanced mimicking and response abilities. The result is they seem to understand us. Part of what makes HAL-9000 terrifying is that we cannot see it emote. HAL simply processes and acts.

Kyle’s post was apparently triggered by this recent article: Programmed for Love (The Chronicle of Higher Education). Turkle has a new book out called Alone Together: Why We Expect More From Technology and Less From Each Other.

I haven’t read it yet, but it supposedly expands her ideas into the modern world of social technologies. As for the robots such as the aforementioned Kismet and Cog, Turkle’s been talking about them since at least 2006 if not earlier, and Kismet and Cog are ancient history (from the 90s). The Programmed for Love article says Turkle was using Kismet in 2001; it wouldn’t surprise me if that was Kismet’s last experiment before being put in the MIT museum.

Kismet

I mentioned Turkle’s point of view in my article “Would You Still Love Me If I Was A Robot?” that was published in the Journal of Evolution and Technology (it was originally written in 2006 but didn’t get published until 2008).

Image credits:
1. Contra Costa Times
2. Jared C. Benedict
