Biomimetic Emotional Learning Agents

Since I didn’t blog back in 2004, you get to suffer—I mean, enjoy—another breathtaking misadventure down memory lane.

In 2004 I started designing and coding (in C++) a cognitive architecture called Biomimetic Emotional Learning Agents (BELA).

The Grand Plan

This antique diagram reveals my old plans:

[Image: BELA Meta-Dev Diagram]

The diagram indicates the general flow of time from left to right, but the arrows primarily show development paths. Useful contributions to the primary tree could come from any vertical position.

The top line is so-called “blue sky” research, such as figuring out ways to make a human-level AI and testing combinations of existing AI architectures, for instance the merger of commonsense reasoning with emotional learning. That research could of course have many useful results, which is why it can contribute to the main tree and hence to the lower lines of applied R&D and practical applications. Both of those could be targeted at applied-research agency contracts, and possibly, eventually, at marketable/industrial software and robotics. Since all of this is (ideally) happening in parallel, the research towards the top may find that the applied research has come up with useful versions of, for instance, the biomimetic emotional learning agents, and exploit those.

The Framework

Although focused on low-level emotional architectures (those involved in homeostasis, knowledge acquisition, reactions, and learning), the framework was designed at all points to be extensible.

In my documentation I wrote:

Another issue to keep in mind is that although now the system is simple enough that the programmed class family is almost isomorphic to the abstract system design itself, that won’t always be the case, especially since this is a framework with many possible systems and the more complex systems later on will probably be reusing the same components in different ways.

What this means is that the design, or model (or system of models)…whatever…may look very close to the C++ class hierarchy. That may give someone a warm and fuzzy feeling of object-oriented modeling-to-code done right. But as my former self warned, systems later built with this framework may break that nice model-to-code mapping.
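
To make that concrete, here is a minimal sketch of what such a nearly isomorphic class family might look like. This is a modern-C++ reconstruction for illustration, not the original 2004 code; all class names (Module, Homeostat, Agent) are hypothetical:

    #include <memory>
    #include <vector>

    // One abstract component per concept in the abstract design.
    struct Module {
        virtual ~Module() = default;
        virtual void update(double dt) = 0;  // advance this module by dt seconds
    };

    // A homeostatic variable that is pulled back toward its set point.
    class Homeostat : public Module {
    public:
        Homeostat(double setPoint, double gain)
            : level_(setPoint), setPoint_(setPoint), gain_(gain) {}
        void update(double dt) override {
            level_ += gain_ * (setPoint_ - level_) * dt;  // decay toward set point
        }
        double level() const { return level_; }
    private:
        double level_, setPoint_, gain_;
    };

    // An agent is just a container of modules, stepped in sequence.
    class Agent {
    public:
        void add(std::unique_ptr<Module> m) { modules_.push_back(std::move(m)); }
        void step(double dt) { for (auto& m : modules_) m->update(dt); }
    private:
        std::vector<std::unique_ptr<Module>> modules_;
    };

In a picture like this the code reads straight off the design, which is exactly the mapping my former self warned would not survive once components started being reused in different ways across systems.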

The List of Concepts

This is the list of mental concepts that I wanted my system to implement or experiment with:

  • reactions
  • instincts
    • “gut feelings”
  • homeostasis (some have suggested “homeodynamics”)
  • emotions
    • “background” emotions
    • “primary” emotions
    • “secondary” emotions
  • pain
  • fear
  • phobias
  • pleasure
    • excitement/appetitive/dopamine vs. tranquility/consummatory/opioid
    • overlap with agency-affiliation cycle system?
    • rewards
  • fixing the negative side-effects of biological emotional learning/behavior
  • learning in real-time
    • associative learning
    • emotional learning
    • learning new methods of learning
    • change/growth in the learning mechanism itself
    • concepts without experience? null issue?
  • conditioning, predispositions
  • phylogenetic vs. ontogenetic learning/training
    • self-preservation configuration from nurture vs. nature
    • “growing pains” training
  • emotional-associative knowledge bases, emotional maps
  • ontology KBs
  • operating domains
  • fuzzy logic—many comparisons will be of ranges not single values
  • knowledge through association
  • configuration parameters for every aspect of the framework—no hard-coded constants or formulas; they should all be loaded from data config files (this will also make brute-force enumerations and evolutionary development easier; see the config-loading sketch at the end of this section)
  • parallel processes/modules/layers
  • ENS-style [semi]-autonomous agencies
  • arbitration
  • goal definition/descriptions
    • soft goals—emergent behavior from configuration
    • hard goals—minimize the difference between the current state and a stored described state (see the sketch just after this list)
  • feelings of emotions
  • moods
  • modes of thinking / “frames of mind” / alternate problem-solving approaches
  • knowledge-lines and switching to previous configurations
  • self-history and forgetfulness
  • fall-back configurations for input overload or lack of input
    • tricking with form instead of content
  • priming
  • plasticity / adaptability of agent when it is damaged
  • probability of survival of a particular class of agent in a particular domain
  • immune system
  • stress responses, “biological modes”–arbitration rules for action subsumption between modes?
  • autonomic nervous system
    • Sympathetic Nervous System–“fight or flight” capabilities: instant reconfiguration of agent viscera + stock survival behaviors
    • Parasympathetic Nervous System–“rest and digest”
    • exhaustion, sleep—even robots have to recharge their batteries and possibly correlate/cull/compress memories
    • crisis situations that require suppression of SNS stress responses?
  • noise, false memories, misinformation, probability of wrong inferences (+ theory-of-mind == capability to lie ?)
  • intentionality, guessing intentions of objects/agents
    • theory-of-mind, mindreading, mindblindness
    • self-model, self-awareness, “proto-self”
    • critic/skeptic analyzers, checks-and-balances
    • attention, salient objects/agents
    • exploiting the environment/situation (with constraints, e.g., nondestructive)
    • utilizing the environment-technology situation to extend computational/problem-solving abilities
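
Of those, the hard-goal item is concrete enough to pin down in code. A minimal sketch, assuming fixed-size vector states and Euclidean distance (both my assumptions, not anything specified in the original design):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // A "hard goal" is a stored target state; progress is the distance
    // between it and the agent's current state, to be driven toward zero.
    using State = std::vector<double>;

    double goalDistance(const State& current, const State& goal) {
        double sum = 0.0;
        for (std::size_t i = 0; i < current.size(); ++i) {
            double d = current[i] - goal[i];
            sum += d * d;
        }
        return std::sqrt(sum);
    }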

I described this enumeration as:

not a complete list, nor am I trying to test all of those concepts at once in the first phases of this project. Some may be removed as irrelevant, and new ones will be added. Of course, there is no shake-n-bake implementation—one doesn’t just make a module or sub-agent for each one of those concepts, throw them in a bag, and get something that works as expected or at all.

Some of these concepts are the same thing, some overlap, and most are connected somehow; some may turn out to be null concepts, or unneeded in an artificial framework.

The intent of the experimental research stemming from this engineering-oriented project was to discover which concepts are needed in a minimal agent framework: one that is both useful in itself and allows the other needed concepts (e.g., for a robot that is more intelligent by the metrics of its application domain) to be incorporated without a total redesign from scratch.

The first agents to be tested would have been based on only some of the concepts, but with the design goal of supporting most of the rest (for research and/or practical robots) through configuration, versions, and extensions of an evolving agent framework (evolving in human design space, though certain parts might actually be generated with evolutionary algorithms).
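
As flagged in the concept list above, here is what the no-hard-coded-constants rule might look like in practice: every numeric parameter read from a plain key=value config file, so agent variants can be enumerated by brute force or evolved simply by generating new files. The file format and key names below are illustrative, not recovered from the original project:

    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <unordered_map>

    using Config = std::unordered_map<std::string, double>;

    // Parse lines of the form "name=value"; '#' starts a comment line.
    Config loadConfig(const std::string& path) {
        Config cfg;
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line)) {
            if (line.empty() || line[0] == '#') continue;
            std::istringstream ss(line);
            std::string key;
            double value;
            if (std::getline(ss, key, '=') && (ss >> value)) cfg[key] = value;
        }
        return cfg;
    }

    int main() {
        // e.g. a file agent.cfg containing a line such as:  fear.gain=0.8
        Config cfg = loadConfig("agent.cfg");
        double fearGain = cfg.count("fear.gain") ? cfg["fear.gain"] : 1.0;
        std::cout << "fear.gain = " << fearGain << "\n";
    }

With every constant externalized like this, a brute-force or evolutionary run only has to generate and mutate config files, never recompile the agent.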

Phylo vs. Onto

Phylogenetic (the evolutionary space) versus ontogenetic (the individual lifetime space) development/learning, together with a “growing pains” training scenario, was a major part of the framework. “Growing pains” is a scheme that keeps an artificial organism in increasingly dangerous/complex sandboxes until it reaches adult level. These concepts, and how artificial organisms can exploit them, are still of significant importance for my thinking about cognitive architectures today.
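
The training loop itself is simple to state. A minimal sketch, assuming a scalar sandbox difficulty, a per-episode training step, and a competence test before each promotion; all names and thresholds are hypothetical stand-ins:

    #include <iostream>

    // Toy stand-ins: a real version would run full simulated episodes and
    // measure survival. Here "skill" is the only learned state.
    static double skill = 0.0;

    void trainOneEpisode(int /*difficulty*/) { skill += 0.1; }       // agent improves
    double competence(int difficulty) { return skill - difficulty; } // margin over the sandbox

    int main() {
        const double kRequiredMargin = 0.0;  // competence needed before promotion
        const int kAdultLevel = 5;           // difficulty at which training ends
        int difficulty = 0;
        while (difficulty < kAdultLevel) {
            trainOneEpisode(difficulty);
            if (competence(difficulty) >= kRequiredMargin) {
                ++difficulty;  // promote to a more dangerous/complex sandbox
                std::cout << "promoted to level " << difficulty << "\n";
            }
        }
    }

The promotion test is the whole point of the scheme: the organism never faces a sandbox it has not earned.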

The End?

I didn’t get very far with BELA, and pretty much abandoned the codebase in February 2005. However, I didn’t abandon cognitive architectures at all, and many of the ideas in BELA are still good in my opinion. The overreaching nature of the project was noted by a reviewer of my rejected extended abstract, who told me my ideas were θ-baked, where θ <= 0.5.

The project failure was, as I have indicated can happen quite often, just that—the project breaking down, not the concepts. A large part of this project’s brief history was my declining motivation and focus. I didn’t abandon AI or cognitive architectures, however. On the contrary, in 2005 I took two AI-related grad classes at MIT (Society of Mind/Emotion Machine and Commonsense Reasoning for Interactive Applications), started working at a mobile robot company (iRobot), and joined an autonomous underwater vehicle competition team (MIT ORCA).

If the me of today went back to circa 2004 to work on BELA, I would change the name. I still think emotions and learning are important, and necessary if one is in fact mimicking biology. But they are only two aspects of many.
