Flashback: The Mini-Me Robot

Posted in robotics on March 13th, 2011 by Samuel Kenyon

I just rediscovered some photos of a robot I threw together in about an hour back in 2003.

Mini-Me v.1

This was made out of an Innovation First educational robot kit which came with the official FIRST robotics kit (which also had parts made by Innovation First). The small edu kit later evolved into the VEX robotics kit. They also make a cool little toy called Hexbug.

Hexbug

I had been spending a lot of time in a basement laboratory at Northeastern University, primarily to advise the FIRST team hosted there (the NU-Trons). This little robot had the same computer as the real competition robot, so it was useful as a programming testbed. Eventually it was dubbed “Mini-Me.”

Mini-Me (Verne Troyer) from Austin Powers 2

The photos of the Mini-Me robot only show the original configuration.  Later on, the infrared sensors on the front were turned downward and I programmed it to be a simple line follower, as we were thinking about having the big FIRST robot do that as well.

Simple linetracking finite state machine diagram

Illustrating a line tracking robot's potentially zig-zag path during a competition
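The line-tracking state machine in the diagram above can be sketched in code. This is a hypothetical reconstruction, assuming two downward-facing IR sensors; the state names and motor speeds are invented for illustration (the original ran on the Innovation First controller, not Python):

```python
# Hypothetical sketch of a two-sensor line-tracking state machine.
# Sensor booleans are True when that sensor sees the line.

def next_state(state, left_on_line, right_on_line):
    """Return the next drive state from two downward IR sensors."""
    if left_on_line and right_on_line:
        return "FORWARD"        # centered on the line
    if left_on_line:
        return "TURN_LEFT"      # line drifting left; steer back toward it
    if right_on_line:
        return "TURN_RIGHT"     # line drifting right
    return state                # line lost: keep the last action to reacquire it

# Differential-drive commands per state: (left wheel, right wheel) speeds.
MOTOR_COMMANDS = {
    "FORWARD":    (1.0, 1.0),
    "TURN_LEFT":  (0.2, 1.0),
    "TURN_RIGHT": (1.0, 0.2),
}

state = "FORWARD"
for left, right in [(True, True), (True, False), (False, False)]:
    state = next_state(state, left, right)
    print(state, MOTOR_COMMANDS[state])
```

The "keep the last action when the line is lost" default is what produces the zig-zag path in the second diagram: the robot overshoots one edge of the tape, swings back, and overshoots the other.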

Of course, being an optimistic college student, I designed a more complicated program of which the line tracker was one component.

Subsumption architecture diagram for a FIRST robot

Program flowchart
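The core idea of a subsumption architecture like the one diagrammed above is a priority-ordered stack of behaviors, where a higher-priority behavior that produces output suppresses everything below it. A minimal sketch of that arbitration scheme (the behavior names and sensor keys here are invented, not taken from the original diagram):

```python
# Hypothetical sketch of subsumption-style arbitration: behaviors are
# checked highest-priority first; the first one that produces a command
# suppresses all lower layers.

def avoid_obstacle(sensors):
    if sensors.get("bumper"):
        return ("REVERSE", 0.5)
    return None  # no output: defer to lower layers

def track_line(sensors):
    if sensors.get("line_seen"):
        return ("FOLLOW_LINE", 1.0)
    return None

def wander(sensors):
    return ("FORWARD", 0.5)  # default layer, always produces output

# Ordered highest priority first.
BEHAVIORS = [avoid_obstacle, track_line, wander]

def arbitrate(sensors):
    """Return the command of the highest-priority behavior that fires."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

print(arbitrate({"bumper": True, "line_seen": True}))  # avoidance wins
print(arbitrate({"line_seen": True}))                  # line tracker runs
print(arbitrate({}))                                   # fall through to wander
```

In this scheme the line tracker is just one layer; safety behaviors above it can interrupt it at any time, which is what made it attractive for a competition robot.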

But we never finished that on the final system (the big robot) as we spent most of our time on less glamorous tasks like soldering.

The NU-Trons robot (#125) from 2003 (here it is being teleoperated)

It was a good lesson in systems though—the amount of time for testing and integration is massive. With robots, most people never get to the interesting programming because it takes so long to make anything work at all. These robot kits help though, at least for programmers, because you don’t have to waste as much time reinventing the wheel.

Later on I used some of those Innovation First edu kit mechanical parts as part of my MicroMouse robot. Unfortunately I don’t think any photos were ever taken of that. Just imagine something awesome.


Robot Marathon Blazes New Paths on the Linoleum

Posted in humor, robotics on February 28th, 2011 by Samuel Kenyon

Whereas in America we’ve been wasting our robot races on autonomous cars that can drive on real roads without causing Michael Bay levels of collateral damage, Japan has taken a more subtle approach.

Their “history making” bipedal robot race involves expensive toy robots stumbling through 422 laps of a 100 meter course, which they navigate by visually tracking colored tape on the ground (I’m making an assumption there–the robots may actually be even less capable than that).  This is surely one of the most ingenious ways to turn old technology into a major new PR event.

I assume they weren't remote controlling these "autonomous" robots during the race.

Unfortunately, I don’t have any video or photos of the 100 meter course; instead I have this:

And the winner is…Robovie PC!  Through some uncanny coincidence, that was operated by the Vstone team, the creators of the race.

Robovie PC sprints through the miniature finish line. The robots on the sides are merely finish line holder slaves.

The practical uses of this technology are numerous.  For instance, if you happen to have 42.2 km of perfectly level and flat hallways with no obstructions, one of these robots can follow colored tape around all day without a break(down), defending the premises from vile insects and dust bunnies.

photoshop of a cat carrying a small robot

There's no doubt that in the next competition, they will continue to improve their survival capabilities.


Daniel Dennett’s Super-Snopes and the Future of Religion

Posted in philosophy on October 12th, 2010 by Samuel Kenyon

“We’re all alone, no chaperone”
—Cole Porter

Despite his resemblance to Santa Claus, Daniel Dennett wants to disillusion the believers.  If we’re all adults, why can’t we reveal the truth that God(s), like Santa, are childish fantasies?

Earlier tonight I attended Dennett’s talk “What should replace religion?” at Tufts University, which was kindly hosted by the Tufts Freethought Society as part of their Freethought Week.

Atheist groups will have to compete with religion in the realm of social activities such as church services.  People won’t leave churches if they don’t have something else to give them the excitement, the music, the ecstasy, the group affiliation, the team building, the moral community, etc. that churches provide.  Many churches already contain atheists who go for all the other stuff besides the doctrine.  In fact, some of the preachers themselves do not believe the doctrine.

Daniel Dennett @ Tufts, 11 Oct 2010

I won’t go over the entire talk, but I’d like to discuss the truth segment.  Dennett pointed out the various citizen science projects going on (although he didn’t use the term citizen science), in which random people voluntarily collect or analyze data—such as for bird watching and galaxy classification—and report it to central repositories.  But certain other data collection activities have declined—the mundane types of things such as goings-on in a town.  Town newspapers are dying, and nobody is there to take notes on local affairs (such as education, politics, etc.).  And this lost data might be important, because it provided oversight.

The Internet has democratized evidence gathering while also promoting the abuse of misinformation.  So, Dennett proposes, some organizations could start projects as preservers of truth—or perhaps a church replacement could convert lovers of God into lovers of truth.  But it wouldn’t be unconditional love of truth.  The privacy of your own thoughts, for instance, may contain truthful information, but it doesn’t necessarily have to become public.  A scientific (in a broad sense of the word) organization that loves truth would compete with religion’s typically “imperfect” handling of truth.

A serious project of truth preservation could become a sort of Super Snopes.  Snopes is the famous website which debunks and/or confirms urban legends and the like.  When you get one of those emails claiming, say, that certain bananas will eat your flesh, check it out on Snopes first before continuing the hoax chain.  Dennett doesn’t define Super Snopes in detail, just that it is the kind of project that would be like Snopes or Wikipedia on an even more massive scale.  And there could be similar or overlapping projects that operate on local scales—perhaps reinstating the town/neighborhood oversight that is now missing.

Of course, something this vague has a chance of happening in the future.  But how it happens could be, as usual, an imperfect evolution from what we have now.  Hopefully secular groups, as Dennett makes the call for, will try to architect and create these projects as soon as possible.

I speculate that the truth-preservation projects that end up working in the future will make use of software agents (autonomous programs).  For instance, if people are not interested in taking notes on every little issue in your town/city, especially the mundane ones, then a computer can do that.

Of course, one person’s boring task is another’s hobby.  Some people enjoy collecting the data that they contribute to a central database.  But some will be able to use software agents to act as their minions—the citizen truth gatherer becomes a node, in which they are a small local central repository, which then sends data to the next biggest node, and so on.

The truth needs to be available to people whenever they want.  So the other major part of the technical aspect will be the interfaces and filters that allow humans to digest information, and to choose what streams to digest.  Of course, various web technologies have been increasing this capability (of filtering and choosing streams) for the entire life of the Internet.

Here is my question: could a (or perhaps several) Super Snopes ever evolve beyond truth preservation into actual civilization preservation, for instance like Asimov’s fictional Foundations?

(Cross-posted with Science 2.0.)


Five Ways Machines Could Fix Themselves

Posted in interaction design, robotics on September 30th, 2010 by Samuel Kenyon

Now published on h+ magazine: my article “Five Ways Machines Could Fix Themselves.” Check it out!

As I see cooling fans die and chips fry, as I see half the machines in a laundry room decay into despondent malfunctioning relics, as my car invents new threats every day along the theme of catastrophic failure, and as I hear the horrific clunk of a “smart” phone diving into the sidewalk with a wonderful chance of breakage, I wonder why we put up with it. And why can’t this junk fix itself?

Design guru and psychologist Donald A. Norman has pointed out how most modern machines hide their internal workings from users. Any natural indicators, such as mechanical sounds, and certainly the view of mechanical parts, are muffled and covered. As much machinery as possible has been replaced by electronics which are silent except for the sound of fans whirring. And electronics are even more mysterious to most users than mechanical systems are.

Our interfaces to machines are primarily composed of various kinds of transducers (like buttons), LEDs (those little glowing lights), and display screens. We are, at the very least, one—if not a dozen—degrees removed from the implementation model. As someone who listens to user feedback, I can assure you that a user’s imagining of how a system works is often radically different than how it really works.

Yet with all this hiding away of the dirty reality of machinery, we have not had a proportional increase in machine self support.

Argument: Software, in some cases, does fix itself. Specifically I am thinking about automatic or pushed software updates. And, because that software runs on a box, it is by default also fixing a machine. For instance, console game platforms like XBox 360 and Playstation 3 receive numerous updates for bug fixes, enhancements, and game specific updates. Likewise, with some manual effort from the user, smart phones and even cars can have their firmware updated to get bug fixes and new features (or third-party hacks).

Counterargument: Most machines don’t update their software anywhere close to “automatically.” And none of those software updates actually fix physical problems. Software updates also require a minimal subset of the system to be operational, which is not always the case. The famous Red Ring of Death on the early XBox 360 units could not be fixed except via replacement of hardware. You might be able to flash your car’s engine control unit with new software, but that won’t fix mechanical parts that are already broken. And so on.

Another argument: Many programs and machines can “fail gracefully.” This phrase comforts a user like the phrase “controlled descent into the terrain” comforts the passenger of an airplane. However, it’s certainly the minimum bar that our contraptions should aim for. For example, if the software fails in your car, it should not default to maximum throttle, and preferably it would be able to limp to the nearest garage just in case your cell phone is dead. Another example: I expect my laptop to warn me, and then shutdown, if the internal temperature is too hot, as opposed to igniting the battery into a fireball.

The extreme solution to our modern mechatronic woes is to turn everything into software. If we made our machines out of programmable matter or nanobots that might be possible. Or we could all move into virtual realities, in which we have hooks for the meta—so a software update would actually update the code and data used to generate the representation of a machine (or any object) in our virtual world.

However, even if those technologies become mature, there won’t necessarily be one that is a monopoly or ubiquitous. A solution that is closer and could be integrated into current culture would be a drop-in replacement that utilizes existing infrastructures.

Some ideas that come close:

1. The device fixes itself without any external help. This has the shortcoming that it might be too broken to fix itself, or might not realize it’s broken. In some cases, we already have this in the form of redundant systems as used in aircraft, the Segway, etc.

2. Software updating (via the Internet) combined with 3D printing machines: the 3D printers would produce replacement parts. However, the printer of course needs the raw material but that could be as easy as putting paper in a printer. Perhaps in the future, that raw printer material will become some kind of basic utility, like water and Internet access.

3. Telepresence combined with built-in repair arms (aka “waldoes”). Many companies are currently trying to productize office-compatible telepresence robots. Doctors already use teleoperated robots such as Da Vinci to do remote, minimally-invasive surgery. Why not operate on machines? How to embed this into a room and/or within a machine is another—quite major—problem. Fortunately, with miniaturization of electronics, there might be room for new repair devices embedded in some products. And certainly not all products need general purpose manipulator arms. They could be machine specific devices, designed to repair the highest probability failures.

4. Autonomous telepresence combined with built-in repair arms: A remote server connects to the local machine via the Internet, using the built-in repair arms or device-specific repair mechanism. However, we also might need an automatic meta-repair mechanism. In other words, the fixer itself might break, or the remote server might crash. Now we enter endless recursions. However, this need not go on infinitely. It’s just a matter of having enough self-repair capacity to achieve some threshold of reliability.

5. Nothing is ever repaired, just installed. A FedEx robot appears within fifteen minutes with a replacement device and for an extra fee will set it up for you.
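The "threshold of reliability" point in idea 4 can be made quantitative: if each repair layer independently succeeds with probability p, then all k layers fail together with probability (1 - p)^k, so the recursion of fixers fixing fixers can stop after only a few layers. A back-of-the-envelope sketch (the numbers are invented for illustration):

```python
# Back-of-the-envelope: with k independent repair mechanisms, each
# succeeding with probability p_success, the chance that every one
# of them fails is (1 - p_success) ** k.

def all_layers_fail(p_success, k):
    """Probability that all k independent repair layers fail."""
    return (1.0 - p_success) ** k

# With p_success = 0.9, each added layer cuts the residual
# failure probability by another factor of ten.
for k in range(1, 5):
    print(k, all_layers_fail(0.9, k))
```

The independence assumption is the catch, of course: a power outage or a crashed remote server can take out every layer at once, which is why the meta-repair problem doesn't vanish entirely.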


Good Riddance to Human Drivers?

Posted in culture, interaction design, robotics on August 31st, 2010 by Samuel Kenyon

The Sept. 2010 issue of Scientific American is all about The End (at first I thought it meant SciAm’s end). In a provocative article “Good Riddance: Human Creations the World Would Be Better Off Without,” SciAm writers roast some technologies they don’t like.

The End.

The comments for that article express some displeasure, e.g. (Telrunya at 04:36 PM on 08/23/10):

Sa sinks to a new low with malthusian luddite philosophy.

It doesn’t help that they have a paywall for that article either. I read the print version while I drank coffee at the Harvard Co-op, so it was free for me.

However, the article is not at all luddite. The selections may be arbitrary, but there is value in pointing out that technologies that are chosen or popular are not necessarily the best or safest. Upcoming alternatives are provided.

For instance, the article doesn’t try to convince us that space travel is bad, in fact quite the opposite–the point they attempted to make was that the space shuttle is not a vehicle that can take us to the moon or another planet, and ideally its retirement will help the fervor to make new spacecraft.

Why Get Rid of Human Drivers?

Although not an invention, SciAm mentions “human drivers” as something we would be better off without. I agree with them partially.

Typical human driver...

According to WolframAlpha, there are 1.189 million deaths worldwide per year due to road traffic accidents (as of 2002).

However, those deaths are not just because the drivers are humans. Partially the problem might be the infrastructure–the design of the roads, the environment, and the very concept of roads and vehicles.

Why I Think We Can’t Get Rid of Human Drivers

First of all, it’s far more than a technology, it’s a part of our culture. To get rid of human drivers is not just to ban human-operated vehicles, it’s to ban a freedom that we have.

Freedom!

There are also social issues such as status. If humans didn’t care about freedom or status we could just switch to trains completely. But that’s not going to happen.

Status(?)

The best compromise, which could be enabled by technology, is that most if not all vehicles have an automatic driver mode.

Science fiction has shown this user experience from time to time–the car drives itself on the freeway, or when the person is busy. But on old fashioned local roads, or for fun, the user enables manual driver mode.

We have made great strides in technology in the past few decades for autonomous cars. It would, of course, be much easier if we could change the roads to be machine friendly.

Image credits: Dark Roasted Blend and Plan59.

Crosspost with my other blog, In the Eye of the Brainstorm


Following Myself With Robots

Posted in interaction design, interfaces, robotics on July 10th, 2010 by Samuel Kenyon

With teleoperated robots it is relatively easy to experience telepresence–just put a wireless camera on a radio controlled truck and you can try it. Basically you feel like you are viewing the world from the point of view of the radio-controlled vehicle.

This clip from a James Bond movie is realistic in that he is totally focused on the telepresence, using his cell phone to remotely drive a car, with only a few brief local interruptions.

It’s also interesting that the local and remote physical spaces intersected, but he was still telepresenced to the car’s point of view.

Humans cannot process more than one task simultaneously–but they can quickly switch between tasks (although context switching can be very tiresome in my experience). Humans can also execute a learned script in the background while focusing on a task–for instance driving (the script) while texting (the focus). Unfortunately, the script cannot handle unexpected problems like a large ladder falling off of a van in front of you in the highway (which happened to me a month ago). You have to immediately drop the focused task of texting and focus on avoiding a collision.

In the military, historically, one or more people would be dedicated to operating a single robot. The robot operator would be in a control station, a Hummer, or have a suitcase-style control system set up near a Hummer with somebody guarding them. You can’t operate the robot and effectively observe your own situation at the same time. If somebody shoots you, it might be too late to task switch. Also people under stress can’t handle as much cognitive load. When under fire, just like when giving a public presentation, you are often dumber than normal.

But what if you want to operate a robot while being dismounted (not in a Hummer) and mobile (walking/running around)? Well, my robot interface (for a Small Unmanned Ground Vehicle) enables that. The human constraints are still there, of course, so the user will never have complete awareness of their immediate surroundings while simultaneously operating the robot–but the user can switch between those situations almost instantly. However, this essay is not about the interface itself, but about an interesting usage in which you can see yourself from the point of view of the robot. So all you need to know about this robot interface is that it is a wearable computer system with a monocular head-mounted display.

An Army warfighter using one of our wearable robot control systems

One effective method I noticed while operating the robot at the Pentagon a few years ago is to follow myself. This allows me to be in telepresence and still walk relatively safely and quickly. Since I can see myself from the point of view of the robot, I will see any obvious dangers near my body. It was quite easy to get into this out-of-body mode of monitoring myself.

Unfortunately, this usage is not appropriate for many scenarios. Oftentimes you want the robot to be ahead of you, hopefully keeping you out of peril. In many cases you and the robot will not even be in each other's line of sight.

As interaction design and autonomy improve for robots, they will more often than not autonomously follow their leaders, so a human will not have to manually drive them. However, keeping yourself in the view of cameras (or other sensors) could still be useful–you might be cognitively loaded with other tasks such as controlling arms attached to the robot, high level planning of robots, viewing information, etc., while being mobile yourself.

This is just one of many strange new interaction territories brought about by mobile robots. Intelligent software and new interfaces will make some of the interactions easier/better, but they will be constrained by human factors.

Video: http://www.youtube.com/v/meY1R43fJIQ
Crosspost with my other blog, In the Eye of the Brainstorm.
