A PIC-Based Scripted Robot System

The last time I used the aforementioned scripting framework was in the robot system described here. It was intended for a spherical robot, but I also used an old RC truck chassis for testing. The design was fairly generic; nothing in the board design or programming was specific to spherical robots, except that the size and shape of the board were made to fit inside the sphere shell.

The Board

My embedded robot control board.

This robot used an 8-bit microcontroller (uC) based board that I hacked together. All of the robot code, including comms, the script engine, sensor interaction, and motor control, ran on the uC. There was no in-circuit debugging or programming; I used a separate device programmer (specifically, the EPIC Plus Pocket PIC Programmer with the 40/28-pin ZIF adapter). The uC I used was a Microchip PIC18LF458.
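
Everything ran in one big loop on the uC. As a rough sketch of what that kind of firmware structure looks like (the function names here are hypothetical placeholders, not the original code):

```c
/* A minimal superloop sketch (not the original firmware); all names are hypothetical. */

static void init_hardware(void)     { /* configure I/O pins, timers, UART, ADC, PWM */ }
static void poll_comms(void)        { /* read and write serial packets */ }
static void read_sensors(void)      { /* sample ADC channels, debounce digital inputs */ }
static void run_script_engine(void) { /* advance the active script by one step */ }
static void update_motors(void)     { /* write PWM duty cycles to the motor drivers */ }

int main(void)
{
    init_hardware();
    for (;;) {            /* cooperative loop: each task runs briefly and returns */
        poll_comms();
        read_sensors();
        run_script_engine();
        update_motors();
    }
}
```

With no operating system underneath, each task has to return quickly so the others keep getting serviced.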


Robot Scripting

During 2003 and 2004, I worked on FIRST robots. I was a college student, but Northeastern University hosted a team that supported multiple high schools. FIRST competition robots are radio controlled; however, autonomous routines activated by the operator are allowed and would be hugely advantageous. Most teams never got to that point, though, and were lucky to have much beyond the default code.

I also started making my own robots. I had hacked at robots before, but I hadn’t made one of my own that actually worked until 2003. First, I used parts from the IFI Edukit that came with one of the FIRST robotics kits to make some little experimental robots, plus one that was intended for a Micromouse competition (but wasn’t finished in time). Eventually I made my own microcontroller-based board, which I attached to an RC truck chassis.

my robot truck

In all of these cases, I started to realize that a lot of basic things that we wanted these robots to do could be represented with simple scripts.
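
To make that concrete, here is a rough sketch of what such a simple script might look like as data: a list of timed steps that a tiny interpreter walks through. The action names and structure below are hypothetical illustrations, not the actual framework.

```c
/* Hypothetical script representation: a list of timed steps.
 * Action names and structure are illustrative only. */
#include <stdio.h>

typedef enum { DRIVE_FORWARD, TURN_LEFT, TURN_RIGHT, STOP } action_t;

typedef struct {
    action_t action;       /* what to do */
    unsigned duration_ms;  /* how long to do it */
} step_t;

/* "Drive out, turn around, drive back" expressed as a script. */
static const step_t script[] = {
    { DRIVE_FORWARD, 2000 },
    { TURN_LEFT,      800 },
    { TURN_LEFT,      800 },
    { DRIVE_FORWARD, 2000 },
    { STOP,             0 },
};

int main(void)
{
    /* A trivial interpreter; on the robot this would set motor outputs
       and wait on a timer instead of printing. */
    for (unsigned i = 0; i < sizeof(script) / sizeof(script[0]); i++) {
        printf("step %u: action %d for %u ms\n",
               i, (int)script[i].action, script[i].duration_ms);
    }
    return 0;
}
```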


Flashback: The Mini-Me Robot

I just rediscovered some photos of a robot I threw together in about an hour back in 2003.

Mini-Me v.1

This was made out of the Innovation First educational robot kit that came with the official FIRST robotics kit (which also included parts made by Innovation First). The small edu kit later evolved into the VEX robotics kit. Innovation First also makes a cool little toy called the Hexbug.

Hexbug

I had been spending a lot of time in a basement laboratory at Northeastern University, primarily to advise the FIRST team hosted there (the NU-Trons). This little robot had the same computer as the real competition robot, so it was useful as a programming testbed. Eventually it was dubbed “Mini-Me.”

Mini-Me (Verne Troyer) from Austin Powers 2

The photos of the Mini-Me robot show only the original configuration. Later on, the infrared sensors on the front were turned downward and I programmed it as a simple line follower, since we were thinking about having the big FIRST robot do that as well.

Simple linetracking finite state machine diagram

Illustrating a line tracking robot's potentially zig-zag path during a competition
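
The state machine itself is simple enough to capture in a few lines. The sketch below is only an after-the-fact illustration of a two-sensor line tracker, not the original program; the sensor and motor helpers are hypothetical stand-ins for the real I/O:

```c
/* Sketch of a two-sensor line-tracking state machine.
 * The sensor and motor helpers are hypothetical stand-ins for real I/O. */

typedef enum { FORWARD, VEER_LEFT, VEER_RIGHT } track_state_t;

static int left_on_tape(void)  { return 0; }  /* 1 when the left IR sensor sees the tape */
static int right_on_tape(void) { return 0; }  /* 1 when the right IR sensor sees the tape */
static void set_motors(int left_speed, int right_speed) { (void)left_speed; (void)right_speed; }

static track_state_t next_state(track_state_t s)
{
    if (left_on_tape() && !right_on_tape())  return VEER_LEFT;   /* drifted right of the line */
    if (right_on_tape() && !left_on_tape())  return VEER_RIGHT;  /* drifted left of the line */
    if (!left_on_tape() && !right_on_tape()) return FORWARD;     /* straddling the line */
    return s;  /* both sensors on tape (e.g., a crossing): keep the previous state */
}

void line_track_step(track_state_t *s)
{
    *s = next_state(*s);
    switch (*s) {
    case VEER_LEFT:  set_motors(40, 100);  break;  /* slow the left side to steer left */
    case VEER_RIGHT: set_motors(100, 40);  break;  /* slow the right side to steer right */
    default:         set_motors(100, 100); break;  /* drive straight ahead */
    }
}
```

Alternating between the two veer states is what produces the zig-zag path illustrated above.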


Of course, being an optimistic college student, I designed a more complicated program of which the line tracker was one component.

Subsumption architecture diagram for a FIRST robot

Program flowchart
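
As a rough illustration of what the subsumption-style arbitration in the diagrams above boils down to: higher-priority behaviors suppress lower-priority ones, and the line tracker is just one layer. The behavior names and outputs in this sketch are hypothetical, not the actual program from the flowchart.

```c
/* Sketch of subsumption-style arbitration: the highest-priority behavior
 * that wants control suppresses everything below it.
 * Behavior names and outputs are hypothetical. */

typedef struct {
    int active;  /* does this behavior want control right now? */
    int left;    /* requested left motor speed  */
    int right;   /* requested right motor speed */
} command_t;

/* Hypothetical behavior layers. */
static command_t operator_override(void) { return (command_t){ 0, 0, 0 }; }    /* joystick input always wins */
static command_t avoid_obstacle(void)    { return (command_t){ 0, 0, 0 }; }    /* back away when something is too close */
static command_t track_line(void)        { return (command_t){ 0, 0, 0 }; }    /* steer along the tape when it is seen */
static command_t cruise(void)            { return (command_t){ 1, 100, 100 }; }/* default: drive forward */

static void set_motors(int left, int right) { (void)left; (void)right; }

void arbitrate(void)
{
    /* Highest priority first; the first active behavior drives the motors. */
    command_t (*behaviors[])(void) = { operator_override, avoid_obstacle, track_line, cruise };

    for (unsigned i = 0; i < sizeof(behaviors) / sizeof(behaviors[0]); i++) {
        command_t c = behaviors[i]();
        if (c.active) {
            set_motors(c.left, c.right);
            return;
        }
    }
}
```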


But we never finished that on the final system (the big robot) as we spent most of our time on less glamorous tasks like soldering.

The NU-Trons robot (#125) from 2003 (here it is being teleoperated)

It was a good lesson in systems, though: the amount of time needed for testing and integration is massive. With robots, most people never get to the interesting programming because it takes so long to make anything work at all. Robot kits help with that, at least for programmers, because you don’t have to waste as much time reinventing the wheel.

Later on, I used some of those Innovation First edu kit mechanical parts in my Micromouse robot. Unfortunately, I don’t think any photos of it were ever taken. Just imagine something awesome.

Robot Marathon Blazes New Paths on the Linoleum

Whereas in America we’ve been wasting our robot races on autonomous cars that can drive on real roads without causing Michael Bay levels of collateral damage, Japan has taken a more subtle approach.

Their “history-making” bipedal robot race involves expensive toy robots stumbling through 422 laps of a 100-meter course, which they follow by visually tracking colored tape on the ground (I’m making an assumption there; the robots may actually be even less capable than that). This is surely one of the most ingenious ways to turn old technology into a major new PR event.

the finish line

I assume they weren't remote controlling these "autonomous" robots during the race.

Unfortunately, I don’t have any video or photos of the 100-meter course; instead, I have this:

And the winner is…Robovie PC!  Through some uncanny coincidence, that was operated by the Vstone team, the creators of the race.

Robovie PC sprints through the miniature finish line. The robots on the sides are merely finish line holder slaves.

The practical uses of this technology are numerous.  For instance, if you happen to have 42.2 km of perfectly level and flat hallways with no obstructions, one of these robots can follow colored tape around all day without a break(down), defending the premises from vile insects and dust bunnies.

photoshop of a cat carrying a small robot

There's no doubt that in the next competition, they will continue to improve their survival capabilities.

Following Myself With Robots

With teleoperated robots it is relatively easy to experience telepresence: just put a wireless camera on a radio-controlled truck and you can try it. Basically, you feel like you are viewing the world from the point of view of the radio-controlled vehicle.

This clip from a James Bond movie is realistic in that he is totally focused on telepresence, remotely driving a car via his cell phone, with only a few brief local interruptions.

It’s also interesting that the local and remote physical spaces intersected, but he still experienced telepresence from the car’s point of view.

Humans cannot focus on more than one demanding task at a time, but they can quickly switch between tasks (although context switching can be very tiresome, in my experience). Humans can also execute a learned script in the background while focusing on a task: for instance, driving (the script) while texting (the focus). Unfortunately, the script cannot handle unexpected problems like a large ladder falling off of a van in front of you on the highway (which happened to me a month ago). You have to immediately drop the focused task of texting and focus on avoiding a collision.

In the military, historically, one or more people would be dedicated to operating a single robot. The robot operator would sit in a control station or a Hummer, or would have a suitcase-style control system set up near a Hummer with somebody standing guard. You can’t operate the robot and effectively observe your own situation at the same time. If somebody shoots you, it might be too late to task-switch. People under stress also can’t handle as much cognitive load; when under fire, just like when giving a public presentation, you are often dumber than normal.

But what if you want to operate a robot while being dismounted (not in a Hummer) and mobile (walking or running around)? Well, my robot interface (for a Small Unmanned Ground Vehicle) enables that. The human constraints are still there, of course, so the user will never have complete awareness of their immediate surroundings while simultaneously operating the robot, but the user can switch between those situations almost instantly. However, this essay is not about the interface itself, but about an interesting usage in which you can see yourself from the point of view of the robot. So all you need to know about this robot interface is that it is a wearable computer system with a monocular head-mounted display.

An Army warfighter using one of our wearable robot control systems

One effective method I noticed while operating the robot at the Pentagon a few years ago is to follow myself. This allows me to stay in telepresence and still walk relatively safely and quickly. Since I can see myself from the point of view of the robot, I can spot any obvious dangers near my body. It was quite easy to get into this out-of-body mode of monitoring myself.

Unfortunately, this usage is not appropriate for many scenarios. Oftentimes you want the robot to be ahead of you, hopefully keeping you out of peril. In many cases, neither you nor the robot will be within line of sight of each other.

As interaction design and autonomy improve, robots will more often than not follow their leaders autonomously, so a human will not have to drive them manually. However, keeping yourself in view of the robot’s cameras (or other sensors) could still be useful: you might be cognitively loaded with other tasks, such as controlling arms attached to the robot, high-level planning for robots, or viewing information, while being mobile yourself.

This is just one of many strange new interaction territories brought about by mobile robots. Intelligent software and new interfaces will make some of the interactions easier/better, but they will be constrained by human factors.

(Video: http://www.youtube.com/watch?v=meY1R43fJIQ)
Crosspost with my other blog, In the Eye of the Brainstorm.