## Growing Robot Minds

Posted in artificial intelligence on January 11th, 2014 by Samuel Kenyon

One way to increase the intelligence of a robot is to train it with a series of missions, analogous to the missions (aka levels) in a video game.

missions (aka levels)

In a developmental robot, the training would not be simply learning—its brain structure would actually change. Biological development shows some extremes that a robot could go through, like starting with a small seed that constructs itself, or creating too many neural connections and then in a later phase deleting a whole bunch of them.

As another example of development vs. learning: a simple artificial neural network is trained by changing its weights over a series of training inputs (with error correction if it is supervised). A developmental system, by contrast, changes itself structurally. It would be like growing completely new nodes, network layers, or even entire networks during each training level.
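The contrast can be sketched in a few lines of toy code. This is a minimal illustration, not a real neural network: `learn()` only nudges existing weights, while `develop()` changes the structure itself (the class and method names are my own, purely for illustration).

```python
import random

class DevelopmentalNet:
    """Toy network contrasting learning with development."""

    def __init__(self, n_nodes=3):
        # One weight per node; the structure is fixed until development occurs.
        self.weights = [random.random() for _ in range(n_nodes)]

    def learn(self, rate=0.1):
        # Learning: adjust existing weights; the structure is unchanged.
        self.weights = [w + rate for w in self.weights]

    def develop(self, n_new=2):
        # Development: grow entirely new nodes; the structure changes.
        self.weights.extend(random.random() for _ in range(n_new))
```

After `learn()` the network has the same number of nodes; after `develop()` it is a structurally different network.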

Or you can imagine the difference between decorating a skyscraper (learning) and building a skyscraper (development).

construction (development) vs. interior design (learning)

### What Happens to the Robot Brain in these Missions?

Inside a robotic mental development mission or stage, almost anything could go on depending on the mad scientist who made it. It could be a series of timed, purely internal structural changes. For instance:

1. Grow set A[] of mental modules
2. Grow mental module B
3. Connect B to some of A[]
4. Activate dormant function f() in all modules
5. Add in pre-made module C and connect it to all other modules
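The five steps above can be sketched as code. This is a hedged toy sketch, assuming a "brain" is just a list of modules with symmetric links; the `Module` class and `run_mission` function are hypothetical names of my own.

```python
class Module:
    def __init__(self, name):
        self.name = name
        self.links = set()
        self.dormant_enabled = False

    def connect(self, other):
        # Symmetric link between two modules.
        self.links.add(other.name)
        other.links.add(self.name)

def run_mission(brain):
    # Step 1: grow set A[] of mental modules.
    a_modules = [Module(f"A{i}") for i in range(3)]
    brain.extend(a_modules)
    # Step 2: grow mental module B.
    b = Module("B")
    brain.append(b)
    # Step 3: connect B to some of A[].
    for a in a_modules[:2]:
        b.connect(a)
    # Step 4: activate dormant function f() in all modules.
    for m in brain:
        m.dormant_enabled = True
    # Step 5: add pre-made module C and connect it to all other modules.
    c = Module("C")
    for m in brain:
        c.connect(m)
    brain.append(c)
    return brain
```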

Instead of (or in addition to) pre-planned timed changes, the stages could be driven in part by environmental interactions. I think that is actually a potentially useful tactic for making a robot adjust to its own morphology and to the particular range of environments it must operate and survive in. It also makes the stages more like the aforementioned missions in computer games.

Note that learning is most likely going to be happening at the same time (unless learning abilities are turned off as part of a developmental level). In the space of all possible developmental robots, one would expect some mental changes to fall in a gray area between development and learning.

Given the input and triggering roles of the environment, each development level may require a special sandbox world. The body of the robot may also undergo changes during each level.

The ordering of the levels/sandboxes would depend on what mental structures are necessary going into each one.

### A Problem

One problem that I have been thinking about is how to prevent cross-contamination of mental changes. One mission might nullify a previous mission.

For example, let’s say that a robot can now survive in Sandbox A after making it through Mission A. Now the robot proceeds through Mission B in Sandbox B. You would expect the robot to be able to survive in a bigger new sandbox (e.g. the real world) that has elements of both Sandbox A and Sandbox B (or requires the mental structures developed during A and B). But B might have messed up A. And now you have a robot that’s good at B but not A, or even worse not good at anything.

Imagine some unstructured cookie dough. You can form a blob of it into a special shape with a cookie cutter.

But applying several cookie cutters in a row might result in an unrecognizable shape, maybe even no better than the original blob.

As a mathematical example, take a four-stage developmental sequence where each stage is a different function, numbered 1-4. This could be represented as:

$y = f_{4}(f_{3}(f_{2}(f_{1}(x))))$

where x is the starting cognitive system and y is the final resulting cognitive system.

This function composition is not commutative, e.g.

$f_{4}\circ f_{3}\circ f_{2}\circ f_{1} \neq f_{1}\circ f_{2}\circ f_{3}\circ f_{4}$
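A concrete toy demonstration of why order matters: here each "stage" is just a function on a number standing in for a whole cognitive system (the functions are invented for illustration).

```python
# Two toy "stage" functions acting on a stand-in for a cognitive system.
def f1(x):
    return x + 1   # e.g. grow one module

def f2(x):
    return x * 2   # e.g. duplicate the existing network

forward = f2(f1(3))   # f2 after f1: (3 + 1) * 2 = 8
reverse = f1(f2(3))   # f1 after f2: (3 * 2) + 1 = 7
```

The two orderings produce different systems, which is exactly the cross-contamination worry: a later stage can change what an earlier stage built.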

### A Commutative Approach

There is a way to make an architecture and transform function type that is commutative. You might think that would solve our problem; however, it only works with certain constraints that we might not want. To explain, I will show an example of a special commutative configuration.

We could require all the development stages to include a minimal required integration program. That is, f1(), f2(), etc. are all sub-types of m(), the master function. Or in object-oriented terms:
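A minimal sketch of that idea, assuming the class and module names below (they are hypothetical, not from any real system): every stage subclasses a master transform whose integration step is fixed, so each new module is wired to all existing modules by the same protocol.

```python
class MasterFunction:
    """m(): the master transform every stage must subclass."""

    def __call__(self, brain):
        module = self.grow_module()
        # Required integration program: connect the new module
        # to every existing module with the same protocol.
        for existing in brain:
            existing["links"].append(module["name"])
            module["links"].append(existing["name"])
        brain.append(module)
        return brain

    def grow_module(self):
        raise NotImplementedError

class F1(MasterFunction):
    def grow_module(self):
        return {"name": "M1", "links": []}

class F2(MasterFunction):
    def grow_module(self):
        return {"name": "M2", "links": []}
```

Because every stage integrates its module the same way, applying the stages in either order yields the same link structure.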

The example here would have each mission result in a new mental module. The required default program would automatically connect this module with the same protocol to all other modules.

So in this case:

$f_{4}\circ f_{3}\circ f_{2}\circ f_{1} = f_{1}\circ f_{2}\circ f_{3}\circ f_{4}$

I don’t think this is a good solution since it seriously limits the cognitive architecture. We would not even be able to build a simple layered control system where each higher layer depends on the lower layers. We cannot have arbitrary links and different types of links between modules. And it does not address how conflicts are arbitrated for outputs.

However, we could add dynamic adaptive interfaces in each module that apply special changes. For instance, module type B might send out feelers to sense the presence of module type A; even if A is added afterwards, B will eventually find it once all the modules have been added. But we will not be able to actually unleash the robot into any of the environments it should be able to handle until the end, and this is bad: it removes the power of iterative development. It also means that a mission associated with a module will be severely limited.
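The feeler idea could look something like the following sketch, assuming modules share a registry they can scan (the names and registry mechanism are my own invention for illustration):

```python
class Module:
    wants = ()  # module type names this module sends "feelers" for

    def __init__(self, registry):
        self.found = set()
        self.registry = registry
        registry.append(self)

    def sense(self):
        # Feeler pass: check the registry for wanted module types,
        # so dependencies are found no matter when they were added.
        for other in self.registry:
            if type(other).__name__ in self.wants:
                self.found.add(type(other).__name__)

class A(Module):
    pass

class B(Module):
    wants = ("A",)  # B depends on A, even if A is added later
```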

The most damning defect with this approach is that there’s still no guarantee that a recently added module won’t interfere with previous modules as the robot interacts in a dynamic world.

### A Pattern Solution

A non-commutative solution might reside in integration patterns. These would preserve the functionality from previous stages as the structure is changed.

a multipole switch to illustrate mental switching

For instance, one pattern might be to add a switching mechanism. The resulting robot mind would be partially modal: in a given context, it would activate the most appropriate developed part of its mind, but not all of its parts at the same time.

A similar pattern could be used for developing or learning new skills—a new skill doesn’t overwrite previous skills, it is instead added to the relevant set of skills which are triggered or selected by other mechanisms.
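As a minimal sketch of that switching pattern (the class and context names are hypothetical): skills accumulate under context keys, so a new skill never overwrites an old one, and the current context selects which part of the mind is active.

```python
class ModalMind:
    """Skills accumulate; the current context selects which ones run."""

    def __init__(self):
        self.skills = {}

    def add_skill(self, context, skill):
        # A new skill never overwrites old ones: it is filed
        # under its own context key instead.
        self.skills.setdefault(context, []).append(skill)

    def act(self, context):
        # The "multipole switch": activate only the part of the
        # mind appropriate to the current context.
        return [skill() for skill in self.skills.get(context, [])]

mind = ModalMind()
mind.add_skill("sandbox_a", lambda: "navigate maze")
mind.add_skill("sandbox_b", lambda: "avoid water")
```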

Image credits:

1. Nintendo
2. Georgia State University Library via Atlanta Time Machine
3. dezeen
4. crooked brains
5. diagram by the author
6. MyDukkan


## A PIC-Based Scripted Robot System

Posted in making and hacking, robotics on January 7th, 2013 by Samuel Kenyon

The last time I used the aforementioned scripting framework was in the robot system described here. It was intended for a spherical robot; however, I also used an old RC truck chassis for testing. It was fairly generic—there wasn’t anything specific to spherical robots in the board design or programming, with the exception of the size and shape of the board, which was made to fit in the sphere shell.

### The Board

My embedded robot control board.

This robot used an 8-bit microcontroller (uC) based board that I hacked together. All of the robot code, including comms, the script engine, sensor interaction, and motor control, ran on the uC. There was no in-circuit debugging/programming; I used a separate device programmer (specifically, the EPIC Plus Pocket PIC Programmer with the 40/28 pin ZIF adapter). The uC I used was a Microchip PIC18LF458.


## Robot Scripting

Posted in programming, robotics on January 2nd, 2013 by Samuel Kenyon

During 2003 and 2004, I worked on FIRST robots. I was a college student, but Northeastern University hosted a team supporting multiple high schools. FIRST competition robots are radio controlled, however autonomous routines activated by the operator are allowed and would be hugely advantageous. But most teams never got to that point, and were lucky to have much beyond the default code.

I also started making my own robots. I had hacked at robots before, but I hadn’t made my own that actually worked until 2003. First I used parts from the IFI Edukit that came with one of the FIRST robotics kits to make some little experimental robots and one that was intended for a Micromouse competition (but wasn’t finished in time). Eventually I made my own microcontroller-based board, which I attached to an RC truck chassis.

my robot truck

In all of these cases, I started to realize that a lot of basic things that we wanted these robots to do could be represented with simple scripts.
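To illustrate the idea (not the actual framework, whose format isn't shown in this excerpt): a hypothetical script of one `command argument` pair per line, and a tiny interpreter that parses it. On a real robot, each command would drive motors instead of logging.

```python
# Hypothetical script format: one "command argument" per line.
SCRIPT = """
forward 2
turn 90
forward 1
stop 0
"""

def run_script(text):
    """Parse and 'execute' a simple robot script."""
    log = []
    for line in text.split("\n"):
        line = line.strip()
        if not line:
            continue
        command, arg = line.split()
        # A real robot would act on the command here (drive motors, etc.).
        log.append((command, int(arg)))
    return log
```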


## Hype Alert: Robot Learns Self-Awareness

Posted in artificial intelligence, robotics on August 28th, 2012 by Samuel Kenyon

### What the Hyped News Article Implies

An anonymous—aka spineless—person wrote a Kurzweilai.net article a few days ago titled “Robot learns ‘self-awareness.’” Self-awareness is in scare quotes. I’m not sure what that’s supposed to imply…maybe it was just a huge joke? Various other news sources are claiming similar things (e.g., BBC’s “Robot learns to recognise itself in mirror”).

The Kurzweilai article begins audaciously:

> “Only humans can be self-aware.”
>
> Another myth bites the dust.

The prose whips us into a frenzy by explaining the Mirror Test:

> The classic mirror test has previously been done with animals to determine whether they understand that their reflections are actually images of themselves. The subject animals are allowed to familiarize themselves with a mirror. They are then sedated and a spot of dye is put on their faces. When they awaken, if they notice the new spot of color in their reflection and then touch the place on their face where the dye was put, they “pass” the mirror test.

The article does not let up on the AI success fervor until the very last sentence, when the punchline is revealed for those who made it to the end of the article without foaming at the mouth with technolust glee:

> So far, no robot has successfully met this challenge. Jason and the Social Robotics Lab are working on it.
