Five Ways Machines Could Fix Themselves

Posted in interaction design, robotics on September 30th, 2010 by Samuel Kenyon

Now published on h+ magazine: my article “Five Ways Machines Could Fix Themselves.” Check it out!

As I see cooling fans die and chips fry, as I see half the machines in a laundry room decay into despondent malfunctioning relics, as my car invents new threats every day along the theme of catastrophic failure, and as I hear the horrific clunk of a “smart” phone diving into the sidewalk with a wonderful chance of breakage, I wonder why we put up with it. And why can’t this junk fix itself?

Design guru and psychologist Donald A. Norman has pointed out how most modern machines hide their internal workings from users. Any natural indicators, such as mechanical sounds, and certainly the view of mechanical parts, are muffled and covered. As much machinery as possible has been replaced by electronics which are silent except for the sound of fans whirring. And electronics are even more mysterious to most users than mechanical systems are.

Our interfaces to machines are primarily composed of various kinds of transducers (like buttons), LEDs (those little glowing lights), and display screens. We are, at the very least, one degree removed from the implementation model, if not a dozen. As someone who listens to user feedback, I can assure you that a user’s mental model of how a system works is often radically different from how it really works.

Yet with all this hiding away of the dirty reality of machinery, we have not seen a proportional increase in machines’ ability to support and repair themselves.

Argument: Software, in some cases, does fix itself. Specifically, I am thinking about automatic or pushed software updates. And because that software runs on a box, it is by default also fixing a machine. For instance, console game platforms like the Xbox 360 and PlayStation 3 receive numerous updates for bug fixes, enhancements, and game-specific patches. Likewise, with some manual effort from the user, smart phones and even cars can have their firmware updated to get bug fixes and new features (or third-party hacks).
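The core of such an update mechanism is small. Here is a minimal sketch of the simplest pull-style version check; the manifest URL and apply_update() are hypothetical stand-ins for illustration, not any console vendor’s actual protocol:

```python
import json
import urllib.request

MANIFEST_URL = "https://example.com/firmware/latest.json"  # hypothetical endpoint
INSTALLED_VERSION = "1.0.3"

def apply_update(image: bytes) -> None:
    """Stand-in for the device-specific flash-and-reboot step."""
    print(f"Would flash {len(image)} bytes and reboot")

def check_for_update() -> None:
    # Ask the server for a manifest describing the newest version.
    with urllib.request.urlopen(MANIFEST_URL) as resp:
        manifest = json.load(resp)
    if manifest["version"] != INSTALLED_VERSION:
        # Download the new image and hand it to the installer.
        with urllib.request.urlopen(manifest["url"]) as resp:
            apply_update(resp.read())
```

Note that even this trivial loop assumes the network, the downloader, and the flasher are all healthy, which is exactly the counterargument below.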

Counterargument: Most machines don’t update their software anywhere close to “automatically.” And none of those software updates actually fix physical problems. Software updates also require a minimal subset of the system to be operational, which is not always the case. The famous Red Ring of Death on early Xbox 360 units could not be fixed except by replacing hardware. You might be able to flash your car’s engine control unit with new software, but that won’t fix mechanical parts that are already broken. And so on.

Another argument: Many programs and machines can “fail gracefully.” This phrase comforts a user about as much as the phrase “controlled descent into the terrain” comforts the passenger of an airplane. Still, graceful failure is the minimum bar our contraptions should aim for. For example, if the software fails in your car, it should not default to maximum throttle, and preferably the car would be able to limp to the nearest garage just in case your cell phone is dead. Another example: I expect my laptop to warn me, and then shut down, if the internal temperature is too high, as opposed to igniting the battery into a fireball.
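To make that concrete, here is a minimal sketch of the kind of thermal watchdog I mean: warn first, then do a controlled shutdown. The thresholds are made-up numbers, and the sensor path is the common Linux sysfs one; none of this is any particular laptop’s firmware:

```python
import subprocess
import time

WARN_C = 90.0       # made-up warning threshold
CRITICAL_C = 100.0  # made-up shutdown threshold

def read_cpu_temp() -> float:
    """Read the CPU temperature in Celsius (common Linux sysfs path)."""
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

def thermal_watchdog(poll_seconds: float = 5.0) -> None:
    while True:
        temp = read_cpu_temp()
        if temp >= CRITICAL_C:
            # Fail gracefully: a controlled shutdown,
            # not a battery fireball.
            print(f"CRITICAL: {temp:.1f} C, shutting down now")
            subprocess.run(["shutdown", "-h", "now"])
            return
        if temp >= WARN_C:
            print(f"Warning: CPU at {temp:.1f} C and climbing")
        time.sleep(poll_seconds)
```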

The extreme solution to our modern mechatronic woes is to turn everything into software. If we made our machines out of programmable matter or nanobots, that might be possible. Or we could all move into virtual realities, in which we have hooks for the meta: a software update would actually update the code and data used to generate the representation of a machine (or any object) in our virtual world.

However, even if those technologies become mature, there won’t necessarily be one that is a monopoly or ubiquitous. A solution that is closer and could be integrated into current culture would be a drop-in replacement that utilizes existing infrastructures.

Some ideas that come close:

1. The device fixes itself without any external help. This has the shortcoming that it might be too broken to fix itself, or might not realize it’s broken. In some cases, we already have this in the form of redundant systems as used in aircraft, the Segway, etc.

2. Software updating (via the Internet) combined with 3D printing: the 3D printers would produce replacement parts. The printer of course needs raw material, but supplying it could be as easy as loading paper into an ordinary printer. Perhaps in the future that raw printer material will become a basic utility, like water and Internet access.

3. Telepresence combined with built-in repair arms (aka “waldoes”). Many companies are currently trying to productize office-compatible telepresence robots. Doctors already use teleoperated robots such as the da Vinci system to perform remote, minimally invasive surgery. Why not operate on machines? How to embed this capability into a room and/or within a machine is another, quite major, problem. Fortunately, with the miniaturization of electronics, there might be room for new repair devices embedded in some products. And not all products need general-purpose manipulator arms; they could have machine-specific devices, designed to repair the highest-probability failures.

4. Autonomous telepresence combined with built-in repair arms: A remote server connects to the local machine via the Internet, using the built-in repair arms or device-specific repair mechanism. However, we also might need an automatic meta-repair mechanism. In other words, the fixer itself might break, or the remote server might crash. Now we enter endless recursion. But this need not go on infinitely: it’s just a matter of having enough self-repair capacity to achieve some threshold of reliability (see the back-of-the-envelope sketch after this list).

5. Nothing is ever repaired, just installed. A FedEx robot appears within fifteen minutes with a replacement device and for an extra fee will set it up for you.
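How much self-repair capacity is “enough”? Here is the back-of-the-envelope sketch promised in idea 4. The 90% figure and the one-in-a-million target are made-up numbers for illustration, and the big assumption is that the repair layers fail independently:

```python
def layers_needed(p_layer_fixes: float, tolerated_failure_rate: float) -> int:
    """Smallest number of stacked repair layers such that the chance
    of a fault slipping past ALL of them drops below the tolerated rate.
    Each layer multiplies the residual failure probability by
    (1 - p_layer_fixes), assuming the layers fail independently."""
    layers = 0
    residual = 1.0
    while residual > tolerated_failure_rate:
        residual *= 1.0 - p_layer_fixes
        layers += 1
    return layers

# If each repair layer (local self-repair, remote teleoperation, a
# meta-repair server...) fixes 90% of faults, six layers push the
# unrepaired-failure rate below one in a million:
print(layers_needed(0.9, 1e-6))  # -> 6
```

So the recursion of fixers fixing fixers terminates quickly in practice: each extra layer buys an order of magnitude, and you stop stacking once you cross the reliability threshold the product demands.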


Following Myself With Robots

Posted in interaction design, interfaces, robotics on July 10th, 2010 by Samuel Kenyon

With teleoperated robots it is relatively easy to experience telepresence: just put a wireless camera on a radio-controlled truck and you can try it. Basically, you feel like you are viewing the world from the point of view of the radio-controlled vehicle.

This clip from a James Bond movie is realistic in that he is totally focused on telepresence, using his cell phone to remotely drive a car, with only a few brief local interruptions.

It’s also interesting that the local and remote physical spaces intersected, but he was still telepresenced to the car’s point of view.

Humans cannot process more than one task simultaneously, but they can quickly switch between tasks (although context switching can be very tiresome, in my experience). Humans can also execute a learned script in the background while focusing on a task: for instance, driving (the script) while texting (the focus). Unfortunately, the script cannot handle unexpected problems, like a large ladder falling off of a van in front of you on the highway (which happened to me a month ago). You have to immediately drop the focused task of texting and focus on avoiding a collision.

In the military, historically, one or more people would be dedicated to operating a single robot. The robot operator would be in a control station, in a Hummer, or at a suitcase-style control system set up near a Hummer with somebody guarding them. You can’t operate the robot and effectively observe your own situation at the same time. If somebody shoots you, it might be too late to task switch. Also, people under stress can’t handle as much cognitive load. When under fire, just like when giving a public presentation, you are often dumber than normal.

But what if you want to operate a robot while dismounted (not in a Hummer) and mobile (walking or running around)? Well, my robot interface (for a Small Unmanned Ground Vehicle) enables exactly that. The human constraints are still there, of course, so the user will never have complete awareness of their immediate surroundings while simultaneously operating the robot, but the user can switch between those contexts almost instantly. However, this essay is not about the interface itself, but about an interesting usage in which you can see yourself from the point of view of the robot. So all you need to know about the interface is that it is a wearable computer system with a monocular head-mounted display.

[Photo: An Army warfighter using one of our wearable robot control systems]

One effective method I noticed while operating the robot at the Pentagon a few years ago is to follow myself. This allows me to be in telepresence and still walk relatively safely and quickly. Since I can see myself from the point of view of the robot, I will see any obvious dangers near my body. It was quite easy to get into this out-of-body mode of monitoring myself.

Unfortunately, this usage is not appropriate for many scenarios. Oftentimes you want the robot to be ahead of you, hopefully keeping you out of peril. In many cases neither you nor the robot will be in line of sight of each other.

As interaction design and autonomy improve for robots, they will more often than not autonomously follow their leaders, so a human will not have to manually drive them. However, keeping yourself in the view of the robot’s cameras (or other sensors) could still be useful: you might be cognitively loaded with other tasks, such as controlling arms attached to the robot, high-level planning for robots, or viewing information, all while being mobile yourself.

This is just one of many strange new interaction territories brought about by mobile robots. Intelligent software and new interfaces will make some of the interactions easier/better, but they will be constrained by human factors.

[Embedded video: http://www.youtube.com/watch?v=meY1R43fJIQ]
Cross-posted at my other blog, In the Eye of the Brainstorm.

Liberation of Tools

Posted in artificial intelligence, humor, interaction design, interfaces, robotics on February 25th, 2010 by Samuel Kenyon

Without the existence of parody, I would have far less hope for our society. Robert Brockway’s recent article on the Cracked website, “If The Internet Wins The Nobel: A Proposed Acceptance Speech,” makes fun of the effort to give the internet the Nobel Peace Prize.

Unfortunately, the Nobel committee hides the nominee list for 50 years in a secret volcano lair, so I’m not sure if the intertubes is actually a nominee right now.

Brockway points out the strangeness of recent awards and nominations going to abstract concepts, such as “You” for Time’s Person of the Year. Why don’t we nominate abstract concepts for President? Brockway questions the internet’s qualifications, concluding that this would really be a peace prize for pornography.

Despite his brilliance, Brockway misses one aspect of the internet that makes it somewhat different from other abstract concepts: it is also a tool. Even if you disagree with the usage, the acceptance of a tool for a major award may be a precursor to a future culture in which intelligence, personhood, and rights apply to myriad forms, not just humans. And not just in object-oriented forms.

“In the future, your clothes will be smarter than you.”
Scott Adams

The interfaces of the web allow us to interact with agents who may not be human. Would you care if other players in multiplayer games were bots, as long as they acted like humans? Would you follow software agents on Twitter? I certainly would.  Would you have sex with a sufficiently humanlike robot (or web agent + peripheral)?  I certainly…um…

“Smart” gadgets today are still relatively idiotic. But we’ll have more and better mobile assistants and home appliances in the future. Also, if we can work out the interfaces, automated systems and software agents will become better at doing online chores and information aggregation for us.

In the real world, augmented pets and socially adept robots may be among your friends. Telepresence robots will let you interact in the same physical space with remote humans, software agents, pets, corporations, etc.

This could be the era in which humans finally start accepting machines, distributed systems, and other non-humans as people. Or if not as people, then as new classes of rights-bearing entities.

Bonus points to anybody who makes an “I for one welcome…” comment.
