Nature-Inspired Development as an AI Abstraction

I’m working on some ideas, and a paper, to present my version of biologically-inspired development: not just as a single project or a technique, but as a level of abstraction.

It’s hard to explain, so let me first digress: the agent approach became a mainstream part of AI in the 1990s, and one of the most popular AI textbooks of the past decade tries to frame all of AI in the context of agents. Certainly within a given project one can refer to the agent layer of abstraction. But I wonder how much the agent as an abstraction actually matters.

An abstraction in computer science hides the details that are “below” or “inside”. We encapsulate and black box things. This is straightforward with computers, where the abstractions rise from transistors in the electronics domain up to machine language (and then optionally up to assembly language), up to languages like C, and then on top of that an application which in turn has an interpreted scripting language. Each layer saves the user/programmer from having to deal with the nitty-gritty of lower levels on a daily basis (although they sometimes rear their ugly heads), and it promotes modularity–presumably lower layers have proven, working parts that are then reused for many purposes as directed by the higher layers.

And yet when we get to the concept of an agent, I feel like the abstraction stack is in new territory. And this feeling gets even stranger with development, at least my version of development.

Mentioning agents was not just a way to talk about abstractions getting fuzzy; the agent is also one of those AI abstractions that never seemed to reach the glorious potential some may have hoped for. There are many abstractions and techniques in cognitive science and AI, and a lot of people swear by certain specific ones–for instance, neuropsychologists and similarly-minded folks think that neurons and their networks are the layer at which we should understand the human mind. AI people have their own obsessions with abstractions and/or models (such as ANNs and HMMs).

My Version of Abstract Development

So now let me ease you into my version of development. First, consider the agent as an artificial organism, possibly virtual, possibly existing in the real world as a robot. Now imagine that it has an embryo stage (embryogeny) in which it actually grows from some small version (analogous to a biological zygote) into its first child-stage form. For a robot that would be hard to build, but it’s not impossible, and it’s easy in a virtual world. Also, we are recording all of this data, including what goes on inside the artificial organism’s mind (or whatever prototypical information patterns it has that will eventually become a mind).
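As a concrete (and heavily simplified) illustration, here is what an embryogeny stage plus always-on recording might look like for a virtual organism. Everything here–the Organism class, the doubling body, the “proto” mental structures–is a hypothetical sketch, not a proposed implementation:

```python
import copy

class Organism:
    """A toy virtual organism that grows from a zygote-like seed.

    The 'body' is just a size number and the 'mind' a list of
    proto-structures; the point is that every growth step is
    recorded for later analysis or replay.
    """

    def __init__(self):
        self.body_size = 1   # zygote-scale starting point
        self.mind = []       # proto-mental structures, empty at first
        self.log = []        # full developmental record

    def grow_step(self):
        # Embryogeny: the body enlarges and seeds a new proto-structure.
        self.body_size *= 2
        self.mind.append(f"proto-{len(self.mind)}")
        # Record a snapshot of both body and mind at every step.
        self.log.append({"body_size": self.body_size,
                         "mind": copy.deepcopy(self.mind)})

    def embryogeny(self, steps=3):
        for _ in range(steps):
            self.grow_step()
        return self  # now at the first child-stage form

org = Organism().embryogeny()
```

After three growth steps the log holds three snapshots, each capturing both the body state and the mind state at that moment–the “special access” discussed below depends on exactly this kind of record.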

Next, this organism begins various phases of “childhood”. Again, physical body changes may occur. If we want to be like biology, we also keep the mind in synchronization with the body changes. That, of course, is one of the interesting experimental areas when using this development abstraction–how the mind changes its structures and content in a feedback loop with the body. And we are still recording all this data.
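The body-mind feedback loop could be sketched, under very crude assumptions, like this. The update rules are arbitrary placeholders; only the two-way coupling (body change influences mind, mind state biases the next body change) is the point:

```python
def childhood_phase(body, mind, steps=5):
    """Toy body-mind feedback loop for one childhood phase.

    Each step the body changes, the mind re-synchronizes to the
    new body, and the mind's state nudges the next body change.
    """
    history = []
    for _ in range(steps):
        body = body + 1 + 0.1 * mind       # growth, nudged by the mind
        mind = mind + 0.5 * (body - mind)  # mind tracks the new body
        history.append((body, mind))       # still recording everything
    return history
```

Even in this cartoon version, the trajectory of (body, mind) pairs is itself recorded data, which is what makes later replay experiments possible.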

[Figure: general ontogeny]

We also include another special agent in the mix by default (here this abstraction starts to overlap with a default framework). The special agent is another artificial organism, the analogue of a caregiver, and it acts as a special training mechanism. The role of the parent in the early years of animal babies, especially humans, involves not just keeping the baby safe and teaching it some things, but also establishing basic mental structures via interaction.
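A toy sketch of the caregiver idea, with invented data shapes: each interaction both transfers content and installs a basic structure the infant mind could not have formed on its own:

```python
def caregiver_session(infant_mind, lessons):
    """Toy caregiver interaction loop.

    The caregiver does more than transfer content: each interaction
    also installs a basic mental structure ('slot') that later
    learning can build on. The data shapes are purely illustrative.
    """
    for lesson in lessons:
        infant_mind["structures"].add(f"slot:{lesson}")  # scaffolding
        infant_mind["content"].append(lesson)            # taught material
    return infant_mind

infant = {"structures": set(), "content": []}
caregiver_session(infant, ["faces", "turn-taking", "object permanence"])
```

The design choice worth noting is that "structures" and "content" are separate: the caregiver's lasting contribution is the scaffolding, not just the facts.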

Also, during these phases, the default framework for this development abstraction would include the notion of environment changes. A typical pattern would be to start with a small relatively safe environment, and then gradually increase the complexity and/or danger as the artificial organism develops and learns.
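The environment-staging pattern might look like this in miniature; the stage names, demand numbers, and the single “skill” counter are all stand-ins for whatever the real learning process would be:

```python
def environment_curriculum(skill, environments):
    """Toy staged-environment schedule.

    'environments' is an ordered list of (name, demand) pairs, from
    a small safe setting to more complex/dangerous ones. The organism
    trains in each until its skill meets the demand, then moves on.
    """
    completed = []
    for name, demand in environments:
        while skill < demand:
            skill += 1  # stand-in for learning within this setting
        completed.append(name)
    return skill, completed

stages = [("nursery", 2), ("playground", 5), ("open world", 9)]
final_skill, completed = environment_curriculum(0, stages)
```

The gating condition (don't advance until the current environment is mastered) is the essence of the safe-to-dangerous progression described above.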

So these phases go on until adulthood, at which point, depending on the experiment, the adult artificial organism is unleashed onto the world, where “world” is whatever adult environment the researchers have chosen. Data was recorded the whole time–both of the environment and of whatever we can capture of the mental experiences of the organisms. Unlike in biology, we researchers literally have special access into the minds of our digital creatures. All that recorded data can be used for cool and weird experiments, like going “backwards in time” to replay a phase, and even replaying it with some variables changed.
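The replay idea can be sketched as re-running a recorded log with some variables overridden. The log format here (a list of per-step dicts) is invented for illustration; a real system would replay against the actual recorded environment and mind state:

```python
def replay(log, overrides=None):
    """Toy 'backwards in time' replay.

    Re-runs a recorded phase from its log, optionally overriding
    some recorded variables to ask counterfactual questions.
    """
    overrides = overrides or {}
    state, trace = {}, []
    for step in log:
        state.update({**step, **overrides})  # recorded step + edits
        trace.append(dict(state))
    return trace

log = [{"t": 0, "food": 1}, {"t": 1, "food": 3}]
# Replay the same phase, but pretend food was scarce the whole time:
counterfactual = replay(log, overrides={"food": 0})
```

With no overrides, replay reproduces the recorded phase exactly; with overrides, it becomes a counterfactual experiment on the same developmental history.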

So that’s generally it–for the lifetime part of it (aka ontogeny). I haven’t even mentioned anything about the evolutionary timeline. I’m still struggling with the best way to explain how development, namely my version of it, is an abstraction. That question is tangled up with the abstract default “framework” concept I sketched above. Any criticisms and suggestions are welcome.