Hype Alert: Robot Learns Self-Awareness

What the Hyped News Article Implies

An anonymous—aka spineless—person wrote a Kurzweilai.net article a few days ago titled “Robot learns ‘self-awareness.’” Self-awareness is in scare quotes. I’m not sure what that’s supposed to imply…maybe it was just a huge joke? Various other news sources are claiming similar things (e.g., BBC’s “Robot learns to recognise itself in mirror”).

The Kurzweilai article begins audaciously:

“Only humans can be self-aware.”
Another myth bites the dust.

The prose whips us into a frenzy by explaining the Mirror Test:

The classic mirror test has previously been done with animals to determine whether they understand that their reflections are actually images of themselves. The subject animals are allowed to familiarize themselves with a mirror. They are then sedated and a spot of dye is put on their faces. When they awaken, if they notice the new spot of color in their reflection and then touch the place on their face where the dye was put, they “pass” the mirror test.

The article does not let up on the AI-success fervor until the very last sentence, when the punchline is revealed for those who made it to the end without foaming at the mouth in technolust glee:

So far, no robot has successfully met this challenge. Jason and the Social Robotics Lab are working on it.

[Image: WTF meme face]

What the Researchers Actually Did

Justin W. Hart and Brian Scassellati at the Department of Computer Science, Yale University, added a Mirror Perspective Model to their robot architecture. This model is a particular approach to figuring out the position of an object based on its 2D representation in a mirror. In this case it’s not just any object, but the robot’s own arm.

This new software produces an estimate of the arm's position in space, taking into account the nature of the mirror. The experiment compared the mirror-based estimates with both direct (non-mirror) visual estimates and the predictions of a forward-kinematics model.
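
To make that concrete, here is a minimal sketch (mine, not the authors') of the underlying geometry: treat the mirror as a plane, reflect the arm's apparent position in the mirror back across that plane, and compare the result with a forward-kinematics prediction. Every number below is invented purely for illustration; the real system works from camera observations and learned models rather than a hand-given 3D point.

    import numpy as np

    def reflect_across_plane(point, plane_point, plane_normal):
        """Mirror a 3D point across the plane defined by a point on it and its normal."""
        n = plane_normal / np.linalg.norm(plane_normal)
        signed_dist = np.dot(point - plane_point, n)
        return point - 2.0 * signed_dist * n

    # Hypothetical mirror: a vertical plane one meter in front of the robot.
    mirror_point = np.array([1.0, 0.0, 0.0])
    mirror_normal = np.array([1.0, 0.0, 0.0])

    # Apparent (virtual) position of the arm as seen "inside" the mirror.
    virtual_arm = np.array([1.6, 0.2, 0.3])

    # Undo the reflection to get the mirror-based estimate of the real arm position.
    mirror_estimate = reflect_across_plane(virtual_arm, mirror_point, mirror_normal)

    # Compare with a (hypothetical) forward-kinematics prediction of the same point.
    fk_estimate = np.array([0.45, 0.18, 0.31])
    print("mirror estimate:", mirror_estimate)
    print("error vs. forward kinematics (m):", np.linalg.norm(mirror_estimate - fk_estimate))
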

The authors claim it does well, where "well" means more than twice the error of the non-mirror visual estimate. I certainly haven't built a better mirror model, so I will leave it at that.

The authors also propose a future architecture, built on top of the current one, whose goal is a robot that actually passes the Mirror Test. In other words, this research is a precursor to something that might one day pass it.

Six components that the proposed architecture requires are listed (a rough sketch of how they might fit together follows below):

  1. End-effector Model
  2. Perceptual Model
  3. Perspective-Taking Model
  4. Structural Model
  5. Appearance Model
  6. Functional Model

They have not actually implemented the last three models.
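
To make the list less abstract, here is a hypothetical sketch of how those six pieces might be laid out as interfaces. The names and signatures are my own guesses at what each component would need to expose, not code from the paper, and the last three are deliberately left as stubs, since the authors have not implemented them either.

    from typing import Protocol
    import numpy as np

    class EndEffectorModel(Protocol):
        """Where the hand should be, given the joint angles (forward kinematics)."""
        def predict_position(self, joint_angles: np.ndarray) -> np.ndarray: ...

    class PerceptualModel(Protocol):
        """Where the end effector appears to be, from camera input."""
        def locate(self, image: np.ndarray) -> np.ndarray: ...

    class PerspectiveTakingModel(Protocol):
        """Map an observation seen in the mirror back into the robot's own frame."""
        def to_self_frame(self, mirror_observation: np.ndarray) -> np.ndarray: ...

    # The remaining three are the unimplemented ones; placeholders only.
    class StructuralModel(Protocol):
        """How the body parts connect to one another."""

    class AppearanceModel(Protocol):
        """What the body normally looks like."""

    class FunctionalModel(Protocol):
        """What the body is expected to look like and do, so that an anomaly
        (say, a dye spot) could be noticed in the mirror."""

Whether these interfaces match the authors' intent is anyone's guess; the point is just that the mirror-test punchline lives almost entirely in the three unimplemented pieces.
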

So the paper could be viewed as a bit misleading, but I will give the authors the benefit of the doubt and assume the intermingling of future work was not intended to mislead or to distract from their contribution. It's perfectly fine to talk about ideas for future architectures; indeed, it may inspire others to investigate along the same lines. And knowing their goal is relevant.

I will also assume that the robot's cartoonish eyeballs and “lips” were not added as distracting anthropomorphism for the press releases, but for some other project, or simply because that's been the fad since Kismet.

This is what the researchers say in the section clearly labeled “Discussion” (i.e., no-holds-barred speculation) following the Results section:

To our knowledge, this is the first robotic system to attempt to use a mirror in this way, representing a significant step towards a cohesive architecture that allows robots to learn about their bodies and appearances through self-observation, and an important capability required in order to pass the Mirror Test.

Is it a significant step toward a cohesive architecture? Maybe.

Is it Possible?

Don’t get me wrong, I would be happy if a robot passed the Mirror Test. I would also be happy if a robot could recognize mirrors in general. I have no doubt such robots can be built, and probably with many different methods. I also have no doubt robots can have various kinds of self-awareness, with or without mirror handling. It could have already been done. I suspect it hasn't happened simply because of the typical reasons projects don't happen: some combination of motivation, management, team(s), time, and money.

References
J.W. Hart and B. Scassellati, “Mirror Perspective-Taking with a Humanoid Robot.” Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
