Polymorphism in Mental Development

Posted in artificial intelligence on March 9th, 2014 by Samuel Kenyon

Adaptability and uninterrupted operation are important features of mental development.

An organism that can’t adapt enough is too rigid and brittle—and dies. The environment will never be exactly as expected (or exactly the same as any other previous time during evolution). Sure, in broad strokes the environment has to be the same (e.g. gravity), and the process of reproduction has limits, but many details change.

During its lifetime (ontogeny), starting from conception, through embryogeny, childhood, and into adulthood, an organism always has to be “on call.” At the very least, homeostasis is keeping it alive. I’d say that for the most part all other informational control/feedback systems, including the nervous system (which in turn includes the brain), have to be operational at all times. There are brief gaps; for instance, highly altricial animals depend on their parents for protection as children. And of course there’s the risk while asleep. But even in those vulnerable states, organisms do not suddenly change their cognitive architectures or memories in a radically broken way. Otherwise they’d die.

There are a couple of computer-ish concepts that might be useful for analyzing and synthesizing these aspects of mental development:

  1. Parameter ranges
  2. Polymorphism

These aren’t the only relevant concepts; I just happen to be thinking about them lately.

Parameter Ranges

Although it may seem obvious, I want to point out that animals such as humans have bodies and skills that do not require exact environmental matches. For example, almost any human can hammer a nail regardless of the exact size and shape of the hammer, or the nail for that matter.

Crows have affordances with sticks and string-like objects. Actually, crows aren’t alone—other bird species have been seen pulling strings for centuries [1]. Anyway, crows use their beaks and feet to accomplish tasks with sticks and strings.

a crow with a stick

a crow incrementally pull-stepping a string to get food

Presumably there is some range of dimensions and appearances of sticks and strings that a crow can:

  1. Physically manipulate
  2. Recognize as an affordance

Note that both of those capabilities are required regardless of whether crows have any insight or are just behaving through reinforcement feedback loops [1].

Organisms fit into their niches, but there is flexibility in the fit. These flexibilities are not unlimited, however; there are ranges. For example, a person’s hand physically cannot grip beyond a certain width. The discipline of human factors (and ergonomics) studies these ranges of the human body as well as cognitive limits.

For an AI agent, there are myriad places in which there could be a function f with a parameter x and some range R of acceptable values for x. Perhaps the levels of abstraction at which this is useful are exactly those involved with the perception of affordances and related skill knowledge.
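As a minimal sketch of that idea (the names Range and grip, and the numeric limits, are hypothetical and just for illustration), a skill function f could simply check that its parameter x falls inside the acceptable range R before applying itself:

#include <iostream>

// Hypothetical parameter range R for a skill, e.g. the widths a hand can grip.
struct Range {
    double min, max;
    bool contains(double x) const { return x >= min && x <= max; }
};

// The skill f(x): only applicable when x is inside the range.
bool grip(double widthCm, const Range& r) {
    if (!r.contains(widthCm)) {
        std::cout << "cannot grip an object " << widthCm << " cm wide\n";
        return false;
    }
    std::cout << "gripping an object " << widthCm << " cm wide\n";
    return true;
}

int main() {
    Range handGrip{1.0, 12.0};  // made-up limits of a human grip
    grip(5.0, handGrip);        // within range: the affordance exists
    grip(30.0, handGrip);       // outside range: no affordance
}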

Polymorphism

Polymorphic entities share the same interface. Typically one ends up with a family of types of objects (e.g., classes). At any given time, if you are handed an object from that family, you can interface with it in the same way.

In object-oriented programming this is exemplified by calling functions (a.k.a. methods) through a pointer to a base class without knowing which particular child class the object is an instance of. It works conceptually because all the child classes inherit the same interface.

For example, in C++:

#include <iostream>
#include <memory>

class A {
public:
    virtual ~A() = default;  // virtual destructor so deleting via a base pointer is safe
    virtual void foo() = 0;
};

class B : public A
{
public:
    void foo() override { std::cout << "B foo!\n"; }
};

class C : public A
{
public:
    void foo() override { std::cout << "C foo!\n"; }
};

int main()
{
    std::unique_ptr<A> obj(new B);
    obj->foo(); // does the B variation of foo()

    std::unique_ptr<A> obj2(new C);
    obj2->foo(); // does the C variation of foo()
}

That example is of course contrived for simplicity. It’s practical if you have one piece of code which should not need to know the details of what particular kind of object it has—it just needs to do operations defined by an interface. E.g., the main rendering loop for a computer game could have a list of objects to call draw() on, and it doesn’t care what specific type each object is—as long as it can call draw() on each object it’s happy.
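Here is a minimal sketch of that rendering-loop idea (Drawable, Player, and Tree are invented names, not taken from any real engine): the loop iterates over base-class pointers and never inspects concrete types.

#include <iostream>
#include <memory>
#include <vector>

// The loop below only knows the Drawable interface, not the concrete types.
class Drawable {
public:
    virtual ~Drawable() = default;
    virtual void draw() const = 0;
};

class Player : public Drawable {
public:
    void draw() const override { std::cout << "drawing player\n"; }
};

class Tree : public Drawable {
public:
    void draw() const override { std::cout << "drawing tree\n"; }
};

int main() {
    std::vector<std::unique_ptr<Drawable>> scene;
    scene.push_back(std::make_unique<Player>());
    scene.push_back(std::make_unique<Tree>());

    // The "main rendering loop": happy as long as draw() can be called.
    for (const auto& obj : scene)
        obj->draw();
}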

So what the hell does this have to do with mental development? Well, it could be a way for a mind to add new functions to existing concepts without blowing away the existing functions. For instance, at a high level, a new skill S4 could be learned and connected to concept C, while the existing skills S1, S2, and S3 remain usable for C. In a crisp analogy to programming polymorphism, the particular variant of S that is used in a given context would be based on the sub-type of C that is perceived.
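In code, that crisp version might look like the following sketch (the class names are placeholders for the C and S of the paragraph above, not a claim about a real cognitive architecture):

#include <iostream>
#include <memory>
#include <vector>

// Concept C exposes one skill interface; each perceived sub-type binds its own variant.
class ConceptC {
public:
    virtual ~ConceptC() = default;
    virtual void applySkill() const = 0;
};

class SubTypeWithS1 : public ConceptC {
public:
    void applySkill() const override { std::cout << "using skill S1\n"; }
};

class SubTypeWithS2 : public ConceptC {
public:
    void applySkill() const override { std::cout << "using skill S2\n"; }
};

// A newly learned skill S4 is attached by adding a sub-type,
// without disturbing S1 or S2.
class SubTypeWithS4 : public ConceptC {
public:
    void applySkill() const override { std::cout << "using skill S4\n"; }
};

int main() {
    std::vector<std::unique_ptr<ConceptC>> percepts;
    percepts.push_back(std::make_unique<SubTypeWithS1>());
    percepts.push_back(std::make_unique<SubTypeWithS4>());

    // The caller never needs to know which sub-type was perceived.
    for (const auto& p : percepts)
        p->applySkill();
}

The point is only structural: attaching the new skill S4 requires no change to the code that calls applySkill(), just as learning a new skill need not disturb the old ones.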

On the other hand, polymorphism could be fuzzier and more like the switch concept I mentioned in a previous post. In other words, the mechanism of the switch could be anything. E.g. a switch could be influenced by global emotional states instead of the active sub-type of C.
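A sketch of that fuzzier variant (Emotion, the skill lambdas, and the global mood variable are all invented for illustration), where the switch is driven by a global emotional state rather than by the perceived sub-type:

#include <functional>
#include <iostream>
#include <map>

// The same concept has several skill variants; a global emotional state picks one.
enum class Emotion { Calm, Fearful };

int main() {
    Emotion globalMood = Emotion::Fearful;  // hypothetical global state

    std::map<Emotion, std::function<void()>> skillsForConcept = {
        {Emotion::Calm,    [] { std::cout << "inspect the object slowly\n"; }},
        {Emotion::Fearful, [] { std::cout << "back away from the object\n"; }},
    };

    skillsForConcept.at(globalMood)();  // the "switch" selects the variant
}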

One can imagine polymorphism both at reflexive levels and at conceptual, symbol-based levels. Reflexive (or “behavioral”) networks could be designed to effectively arbitrate based on the “type” of an external situation via the mapping of inputs (sensor data) to outputs (actuators). Concept-based mental polymorphism would presumably categorize a perception (or perhaps an input from some other mental module) and map it, by subcategory, to the appropriate version of an action. And maybe it’s not an action, but just the next mental node to visit.


References
[1] Taylor, A., Medina, F., Holzhaider, J., Hearne, L., Hunt, G., & Gray, R. (2010) An Investigation into the Cognition Behind Spontaneous String Pulling in New Caledonian Crows. PLoS ONE, 5(2). DOI: 10.1371/journal.pone.0009345



Image credits:

  1. University of Oxford
  2. David Schulz


Mechanisms of Meaning

Posted in artificial intelligence on October 15th, 2012 by Samuel Kenyon

How do organisms generate meaning during their development? What designs of information structures and processes best explain how animals understand concepts?

In “A Deepness in the Mind: The Symbol Grounding Problem”, I showed a three-layer diagram with semantic connections on top. I’d like to spend some time discussing the bottom of that top layer.

Basic Meaning Mechanisms

At the moment, there seem to be a few worthwhile avenues of investigation:

  • Emotions
  • Affordances
  • Metaphors and blends

Each of these points of view involves certain theories and architecture concepts. Soon I will have more blog posts describing these concepts and how we might implement and synthesize them. Marvin Minsky’s books Society of Mind and The Emotion Machine have many ideas about mental agencies and structures that may be useful in this context.


Enactive Interface Perception and Affordances

Posted in artificial intelligence, interfaces, philosophy on November 14th, 2011 by Samuel Kenyon

There are two freaky theories of perception which are very interesting to me not just for artificial intelligence, but also from a point of view of interfaces, interactions, and affordances. The first one is Alva Noë’s enactive approach to perception. The second one is Donald D. Hoffman’s interface theory of perception.

Enactive Perception vs. Interface Perception

Enactive Perception

The key element of the enactive approach to perception is that sensorimotor knowledge and skills are a required part of perception [1].

In the case of vision, there is a tradition of keeping vision separate from the other senses and sensorimotor abilities, and of treating it as a reconstruction program (inverse optics). The enactive approach suggests that visual perception is not simply a transformation of 2D pictures into a 3D representation, and that vision is dependent on sensorimotor skills. Indeed, the enactive approach claims that all perceptual representation is dependent on sensorimotor skills.

Example of optical flow (one of the ways to get structure from motion)

My interpretation of the enactive approach is that perception co-evolved with motor skills such as how our bodies move and how our sensors (for instance, our eyes) move. A static 2D image cannot tell you which color blobs are objects and which are merely artifacts of the sensor or environment (e.g., lighting effects). But if you walk around the scene, and take into account how you are moving, you get a lot more data to figure out what is stable and what is not. We have evolved to have constant motion in our eyes via saccades, so even without walking around or moving our heads, we are getting this motion data for our visual perception system.

Of course, there are some major issues that need to be resolved, at least in my mind, about enactive perception (and related theories). As Aaron Sloman has pointed out repeatedly, we need to fix or remove dependence on symbol grounding. Do all concepts, even abstract ones, exist in a mental skyscraper built on a foundation of sensorimotor concepts? I won’t get into that here, but I will hopefully return to it in a later blog post.

The enactive approach says that you should be careful about making assumptions that perception (and consciousness) can be isolated on one side of an arbitrary interface. For instance, it may not be alright to study perception–or consciousness–by looking just at the brain. It may be necessary to include much more of the mind-environment system–a system which is not limited to one side of the arbitrary interface of the skull.

Perception as a User Interface

human-computer interfaces (still from Matrix Reloaded)

The Interface Theory of Perception says that “our perceptions constitute a species-specific user interface that guides behavior in a niche” [2].

Evolution has provided us with icons and widgets to hide the true complexity of reality. This reality user interface allows organisms to survive better in particular environments, hence the selection for it.

Perception as an interface

Or as Hoffman et al. summarize the conceptual link from computer interfaces [3]:

An interface promotes efficient interaction with the computer by hiding its structural and causal complexity, i.e., by hiding the truth. As a strategy for perception, an interface can dramatically trim the requirements for information and its concomitant costs in time and energy, thus leading to greater fitness. But the key advantage of an interface strategy is that it is not required to model aspects of objective reality; as a result it has more flexibility to model utility, and utility is all that matters in evolution.

Besides supporting the theory with simulations, Hoffman [2] uses a colorful real-world example: he describes how male jewel beetles use a reality user interface to find females. This perceptual interface is composed of simple rules involving the color and shininess of female wing cases. Unfortunately, it evolved for a niche which could not have anticipated the trash dropped by humans, and that trash leads to false positives. The result is male jewel beetles humping empty beer bottles.

Male Australian jewel beetle attempting to mate with a discarded “stubby” (beer bottle)

For more info on the beetles, see this short biological review [4] which includes “discussion regarding the habit of the males of this species to attempt mating with brown beer-bottles.” It also notes:

Schlaepfer et al. (2002) point out that organisms often rely on environmental cues to make behavioural and life-history decisions. However, in environments which have been altered suddenly by humans, formerly reliable cues might no longer be associated with adaptive outcomes. In such cases, organisms can become trapped by their evolutionary responses to the cues and experience reduced survival or reproduction (Schlaepfer et al., 2002).

All perception, including of humans, evolved for adaptation to niches. There is no reason or evidence to suspect that our reality interfaces provide “faithful depictions” of the objective world. Fitness trumps truth. Hoffman says that Noë supports a version of faithful depiction within enactive perception, although I don’t see how that is necessary for enactive perception.

Interactions

One might think of perception as interactions within a system. This system contains the blobs of matter we typically refer to as an “organism” and its “environment.”

You’ll notice that in the diagram in the previous section, “environment” and “organism” are in separate boxes. But that can be very misleading. Really the organism is part of the environment:

Of course, the organism itself is part of the environment.

True Perception is Right Out the Window

How do we know what we know about reality? There seems to be a consistency at our macroscopic scale of operation. One consistency is due to natural genetic programs–and the programs they in turn cause–which result in humans having shared knowledge bases and shared kinds of experience. If you’ve ever not been on the same page as somebody, then you can imagine what it would be like if we didn’t have anything in common conceptually. Communication would be very difficult. For every other entity you wanted to communicate with, you’d have to establish communication interfaces, translators, interpreters, etc. And how would you even know whom to communicate with in the first place? Maybe you wouldn’t have evolved communication at all.

So humans (and probably many other related animals) have experiences and concepts that are similar enough that we can communicate with each other via speech, writing, physical contact, gestures, art, etc.

But for all that shared experience and ability to generate interfaces, we have no inkling of reality.

Since the interface theory of perception says that our perception is not necessarily realistic, and is most likely not even close to being realistic, does this conflict with the enactive theory?

Noë chants the mantra that the world makes itself available to us (echoing some of the 1980s/1990s era Rod Brooks / behavioral robotics approach of “world as its own model”). If representation is distributed in a human-environment system, does it have to be a veridical (truthful) representation? No. I don’t see why that has to be the case. So it seems that the non-veridical nature of perception should not prevent us from combining these two theories.

Affordances

A chair affords sitting, a book affords turning pages.

Another link that might assist synthesizing these two theories is that of J.J. Gibson’s affordances. Affordances are “actionable properties between the world and an actor (a person or animal)” [5].

The connection of affordances to the enactive approach is provided by Noë (here he’s using an example of flatness):

To see something is flat is precisely to see it as giving rise to certain possibilities of sensorimotor contingency…Gibson’s theory, and this is plausible, is that we don’t see the flatness and then interpret it as suitable for climbing upon. To see it as flat is to see it as making available possibilities for movement. To see it as flat is to see it, directly, as affording certain possibilities.

Noë also states that there is a sense in which all objects of perception are affordances. I think this implies that if there is no affordance relationship between you and a particular part of the environment, then you will not perceive that part. It doesn’t exist to you.

The concept of affordances is also used, in a modified form, in interaction design. For those who are designers or understand design, you can perhaps see how affordances in nature have to be perceived by animals in order for them to survive. It is perhaps the inverse of the design problem: instead of making the artifact afford action for the user, the animal had to make itself comprehend certain affordances through evo-devo.

Design writer Don Norman makes a point of distinguishing between “real” and “perceived” affordances [5]. That makes sense in the context of his examples, such as human-computer interfaces. But are any affordances actually real? That gets back to the interface theory of perception: animals perceive affordances, but there’s no guarantee those affordances are veridical.

References
1. Noë, A., Action in Perception, Cambridge, MA: MIT Press, 2004.
2. Hoffman, D.D., “The interface theory of perception: Natural selection drives true perception to swift extinction” in Dickinson, S., Leonardis, A., Schiele, B., & Tarr, M.J. (Eds.), Object Categorization: Computer and Human Vision Perspectives. Cambridge, UK: Cambridge University Press, 2009, pp. 148-166. PDF.
3. Mark, J.T., Marion, B.B., & Hoffman, D.D., “Natural selection and veridical perceptions,” Journal of Theoretical Biology, no. 266, 2010, pp. 504-515. PDF.
4. Hawkeswood, T., “Review of the biology and host-plants of the Australian jewel beetle Julodimorpha bakewelli,” Calodema, vol. 3, 2005. PDF.
5. Norman, D., “Affordances and Design.” http://www.jnd.org/dn.mss/affordances_and_design.html

Image credits: iamwilliam, T. Hawkeswood [4], Matrix Reloaded (film), Old Book Illustrations.
Diagrams created by Samuel H. Kenyon.

This is an improved/expanded version of an essay I originally posted February 24th, 2010, on my blog SynapticNulship.
