Are Emotional Structures the Foundation of Intelligence?

Posted in artificial intelligence on January 9th, 2013 by Samuel Kenyon

It seems like all human babies go through the exact same intelligence growth program. Like clockwork. A lot of people have assumed that it really is a perfect program which is defined by genetics.

Obviously something happens when a child grows. But surely that consists of minor environmental cues to the genetic program. Or does it?

Consider if the “something happens as a child grows” might in fact be critical. And not just critical, but the major source of information. What exactly is that “nurture” part of nature vs. nurture?

What if the nurturing is in fact the source of all conceptual knowledge, language, sense of self, and sense of reality?

Read more »


Infants, Monkeys, Love, and AI

Posted in artificial intelligence on December 27th, 2012 by Samuel Kenyon

Perhaps you have seen pictures or videos from the 1960s of rhesus monkey babies clinging to inanimate surrogate mothers.


In one of Harlow’s experiments, a baby monkey clings to the softer surrogate mother.

These experiments were by Harry Harlow, who eventually went against the psychology mainstream to demonstrate that love—namely caregiver-baby affection—was required for healthy development.

Dr. Harlow created inanimate surrogate mothers for the rhesus infants from wire and wood. Each infant became attached to its particular mother, recognizing its unique face and preferring it above all others. Harlow next chose to investigate if the infants had a preference for bare wire mothers or cloth covered mothers. For this experiment he presented the infants with a cloth mother and a wire mother under two conditions. In one situation, the wire mother held a bottle with food and the cloth mother held no food, and in the other, the cloth mother held the bottle and the wire mother had nothing.

Overwhelmingly, the infant macaques preferred spending their time clinging to the cloth mother. Even when only the wire mother could provide nourishment, the monkeys visited her only to feed. Harlow concluded that there was much more to the mother/infant relationship than milk and that this “contact comfort” was essential to the psychological development and health of infant monkeys and children. [1]

According to Stuart G. Shanker [2], various primates reach levels of functional-emotional development similar to the first 2-3 levels (out of 9) that humans accomplish. Perhaps part of the difference is that the infancy period is much longer for humans.

Although a baby rhesus doesn’t express its positive affects with the same sorts of wide joyful smiles that we see in human infants between the ages of two and five months, in other respects it behaves in a manner similar to that of a human infant. The rhesus baby spends lots of time snuggling into its mother’s body or looking keenly at her face. It visibly relaxes while being rocked, and vocalizes happily when the mother plays with it. We can even see the baby rhythmically moving its arms and legs and vocalizing in time to its caregiver’s movements and vocalizations.

Shanker said this about Harlow’s experiments:

Although it was clear that the infants were deriving great comfort from the cloth-covered surrogates, they still suffered from striking social and emotional disorders.

One might interject here: So what? Who cares about social and emotional disorders (well, aside from gunshot victims)? What about intelligence? What about self-awareness? The thing is, though, that intelligence and possibly even the capacity for basic symbolic thought—ideas—are developed via emotions and social interactions.

Read more »


Recursion and the Human Mind

Posted in artificial intelligence on December 5th, 2011 by Samuel Kenyon

It’s certainly not new to propose recursion as a key element of the human mind—for instance Douglas Hofstadter has been writing about that since the 1970s.

nested recursion

Michael C. Corballis, a former professor of psychology, came out with a new book this year called The Recursive Mind. It explains his specific theory, which I will attempt to outline here.

The Recursive Mind

As I understand it, his theory is composed of these parts:

  1. The ability of the human mind to generate concepts recursively is what causes the main differences between Homo sapiens and other animals (a toy sketch of recursive generation appears after this list).
  2. A Chomskian internal language is the basis for all external languages and other recursive abilities. (See this blog post by Corballis for a summary of an internal language as a universal grammar).
  3. External languages evolved on top of the recursive abilities primarily for storytelling and social cohesion.
  4. External languages started with gestures, and most likely were followed by mouth clicking languages before vocal languages emerged.
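
To make "generate concepts recursively" concrete, here is a minimal sketch (my own illustration, not Corballis's or Chomsky's formalism): a toy grammar whose noun phrases and verb phrases can embed further phrases and whole sentences, so a handful of rules yields an unbounded set of structures.

```python
# A toy recursive grammar; the symbols and vocabulary are illustrative only.
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the dog"], ["the monkey"], ["the person who", "VP"]],  # NP can embed a VP
    "VP": [["sleeps"], ["sees", "NP"], ["thinks that", "S"]],        # VP can embed a whole S
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol recursively; the depth limit keeps the demo finite."""
    if symbol not in GRAMMAR:
        return symbol  # terminal word or phrase
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        # Near the depth limit, prefer non-recursive expansions so output terminates.
        options = [o for o in options if all(s not in GRAMMAR for s in o)] or options
    expansion = random.choice(options)
    return " ".join(generate(s, depth + 1, max_depth) for s in expansion)

if __name__ == "__main__":
    for _ in range(3):
        print(generate())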

Read more »


Enactive Interface Perception and Affordances

Posted in artificial intelligence, interfaces, philosophy on November 14th, 2011 by Samuel Kenyon

There are two freaky theories of perception which are very interesting to me, not just for artificial intelligence, but also from the point of view of interfaces, interactions, and affordances. The first is Alva Noë’s enactive approach to perception. The second is Donald D. Hoffman’s interface theory of perception.

Enactive Perception vs. Interface Perception

Enactive Perception

The key element of the enactive approach to perception is that sensorimotor knowledge and skills are a required part of perception [1].

In the case of vision, there is a tradition of keeping vision separate from the other senses and sensorimotor abilities, and of treating it as a reconstruction problem (inverse optics). The enactive approach suggests that visual perception is not simply a transformation of 2D pictures into a 3D representation, and that vision depends on sensorimotor skills. Indeed, the enactive approach claims that all perceptual representation depends on sensorimotor skills.

Example of optical flow (one of the ways to get structure from motion)

My interpretation of the enactive approach proposes that perception co-evolved with motor skills, such as how our bodies move and how our sensors (for instance, our eyes) move. A static 2D image cannot tell you which color blobs are objects and which are merely artifacts of the sensor or environment (e.g., lighting effects). But if you walk around the scene, and take into account how you are moving, you get a lot more data with which to figure out what is stable and what is not. We have evolved to have constant motion in our eyes via saccades, so even without walking around or moving our heads, we are getting this motion data for our visual perception system.
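
Here is a minimal sketch of that idea (my own illustration, under toy assumptions: 2D image coordinates and a known, pure translation of the observer between two frames). Knowing your own motion lets you predict how stationary points should appear to shift; anything that deviates from the prediction is not a stable part of the scene.

```python
# Toy demo: use knowledge of self-motion to separate stable scene points
# from things that move on their own (or are artifacts).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 stationary scene points observed in image coordinates.
points_t0 = rng.uniform(0, 100, size=(50, 2))

ego_motion = np.array([2.0, -1.0])            # known shift of the observer between frames
points_t1 = points_t0 - ego_motion            # stationary points appear to shift opposite to the observer
points_t1 += rng.normal(0, 0.05, points_t1.shape)  # small sensor noise

# One point moves independently (an "object" with its own motion).
points_t1[7] += np.array([5.0, 3.0])

observed_flow = points_t1 - points_t0
predicted_flow = -ego_motion                  # what stationary points should do, given self-motion

residual = np.linalg.norm(observed_flow - predicted_flow, axis=1)
stable = residual < 1.0                       # threshold chosen by hand for the demo

print("points flagged as independently moving:", np.flatnonzero(~stable))
```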

Of course, there are some major issues that need to be resolved, at least in my mind, about enactive perception (and related theories). As Aaron Sloman has pointed out repeatedly, we need to fix or remove dependence on symbol grounding. Do all concepts, even abstract ones, exist in a mental skyscraper built on a foundation of sensorimotor concepts? I won’t get into that here, but I will hopefully return to it in a later blog post.

The enactive approach says that you should be careful about assuming that perception (and consciousness) can be isolated on one side of an arbitrary interface. For instance, it may not be legitimate to study perception, or consciousness, by looking just at the brain. It may be necessary to include much more of the mind-environment system, a system which is not limited to one side of the arbitrary interface of the skull.

Perception as a User Interface

Human-computer interfaces (still from Matrix Reloaded)

The Interface Theory of Perception says that “our perceptions constitute a species-specific user interface that guides behavior in a niche” [2].

Evolution has provided us with icons and widgets to hide the true complexity of reality. This reality user interface allows organisms to survive better in particular environments, hence the selection for it.

Perception as an interface

Or as Hoffman et al. [3] summarize the conceptual link from computer interfaces:

An interface promotes efficient interaction with the computer by hiding its structural and causal complexity, i.e., by hiding the truth. As a strategy for perception, an interface can dramatically trim the requirements for information and its concomitant costs in time and energy, thus leading to greater fitness. But the key advantage of an interface strategy is that it is not required to model aspects of objective reality; as a result it has more flexibility to model utility, and utility is all that matters in evolution.

Besides supporting the theory with simulations, Hoffman [2] uses a colorful real-world example: he describes how male jewel beetles use a reality user interface to find females. This perceptual interface is composed of simple rules involving the color and shininess of female wing cases. Unfortunately, it evolved for a niche which could not have anticipated the trash dropped by humans, which leads to false positives. The result is male jewel beetles humping empty beer bottles.

Male Australian jewel beetle attempting to mate with a discarded “stubby” (beer bottle)

For more info on the beetles, see this short biological review [4] which includes “discussion regarding the habit of the males of this species to attempt mating with brown beer-bottles.” It also notes:

Schlaepfer et al. (2002) point out that organisms often rely on environmental cues to make behavioural and life-history decisions. However, in environments which have been altered suddenly by humans, formerly reliable cues might no longer be associated with adaptive outcomes. In such cases, organisms can become trapped by their evolutionary responses to the cues and experience reduced survival or reproduction (Schlaepfer et al., 2002).

All perception, including human perception, evolved as adaptation to niches. There is no reason or evidence to suspect that our reality interfaces provide “faithful depictions” of the objective world. Fitness trumps truth. Hoffman says that Noë supports a version of faithful depiction within enactive perception, although I don’t see how that is necessary for enactive perception.
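
Here is a toy simulation in that spirit (my own illustrative setup, much simpler than the simulations Hoffman and colleagues report): when fitness is not a monotonic function of the true quantity being perceived, an agent that ranks options by truth can lose to an agent whose interface reports only coarse, fitness-tuned categories.

```python
# Toy "fitness vs. truth" comparison; payoff curve, strategies, and parameters
# are my own illustrative assumptions, not Hoffman's actual models.
import random

def fitness(quantity):
    # Too little or too much of the resource is bad; an intermediate amount is best.
    return max(0.0, 1.0 - abs(quantity - 0.5) * 2.0)

def truth_strategy(options):
    # Perceives true quantities and always takes the largest.
    return max(options)

def interface_strategy(options):
    # Perceives only a coarse category ("good" vs "bad") tuned to fitness.
    good = [q for q in options if 0.3 <= q <= 0.7]
    return random.choice(good) if good else random.choice(options)

random.seed(0)
trials = 10_000
scores = {"truth": 0.0, "interface": 0.0}
for _ in range(trials):
    options = [random.random() for _ in range(3)]
    scores["truth"] += fitness(truth_strategy(options))
    scores["interface"] += fitness(interface_strategy(options))

for name, total in scores.items():
    print(f"{name:9s} average payoff: {total / trials:.3f}")
```

In this toy run the interface strategy’s average payoff comes out ahead of the truth strategy’s, which is the flavor of result used to argue that fitness trumps truth.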

Interactions

One might think of perception as interactions within a system. This system contains the blobs of matter we typically refer to as an “organism” and its “environment.”

You’ll notice that in the diagram in the previous section, “environment” and “organism” are in separate boxes. But that can be very misleading. Really the organism is part of the environment:

Of course, the organism itself is part of the environment.

True Perception is Right Out the Window

How do we know what we know about reality? There seems to be a consistency at our macroscopic scale of operation. One consistency is due to natural genetic programs–and programs they in turn cause–which result in humans having shared knowledge bases and shared kinds of experience. If you’ve ever not been on the same page as somebody, then you can imagine what it would be like if we didn’t have anything in common conceptually. Communication would be very difficult. For every other entity you wanted to communicate with, you’d have to establish communication interfaces, translators, interpreters, etc. And how would you even know whom to communicate with in the first place? Maybe you wouldn’t have evolved communication at all.

So humans (and probably many other related animals) have experiences and concepts that are similar enough that we can communicate with each other via speech, writing, physical contact, gestures, art, etc.

But for all that shared experience and ability to generate interfaces, we have no inkling of reality.

Since the interface theory of perception says that our perception is not necessarily realistic, and is most likely not even close to being realistic, does this conflict with the enactive theory?

Noë chants the mantra that the world makes itself available to us (echoing some of the 1980s/1990s era Rod Brooks / behavioral robotics approach of “world as its own model”). If representation is distributed in a human-environment system, does it have to be a veridical (truthful) representation? No. I don’t see why that has to be the case. So it seems that the non-veridical nature of perception should not prevent us from combining these two theories.

Affordances

A chair affords sitting, a book affords turning pages.

Another link that might assist synthesizing these two theories is that of J.J. Gibson’s affordances. Affordances are “actionable properties between the world and an actor (a person or animal)” [5].

The connection of affordances to the enactive approach is provided by Noë (here he’s using an example of flatness):

To see something is flat is precisely to see it as giving rise to certain possibilities of sensorimotor contingency…Gibson’s theory, and this is plausible, is that we don’t see the flatness and then interpret it as suitable for climbing upon. To see it as flat is to see it as making available possibilities for movement. To see it as flat is to see it, directly, as affording certain possibilities.

Noë also states that there is a sense in which all objects of perception are affordances. I think this implies that if there is no affordance relationship between you and a particular part of the environment, then you will not perceive that part. It doesn’t exist to you.
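
A minimal sketch of how one might cash that out (the properties, thresholds, and names here are my own illustrative assumptions, not Gibson’s or Noë’s): affordances are modeled as relations between an actor’s capabilities and an object’s properties, and anything bearing no such relation to the actor simply does not appear in what it perceives.

```python
# Toy model: affordances as actor-object relations, and perception as a
# filter that only passes things affording some action to this actor.
from dataclasses import dataclass

@dataclass
class Actor:
    leg_length: float   # determines what heights are sittable/climbable for it
    grip_size: float    # determines what it can pick up

@dataclass
class Thing:
    name: str
    height: float
    width: float
    flat_top: bool

def affordances(actor, thing):
    """Return the actions this thing affords this particular actor."""
    acts = []
    if thing.flat_top and thing.height <= actor.leg_length:
        acts.append("sit-on")
    if thing.flat_top and thing.height <= actor.leg_length * 2:
        acts.append("climb-on")
    if thing.width <= actor.grip_size:
        acts.append("grasp")
    return acts

def perceived(actor, environment):
    """Only things bearing some affordance relation to the actor 'exist' for it."""
    result = {}
    for thing in environment:
        acts = affordances(actor, thing)
        if acts:
            result[thing.name] = acts
    return result

if __name__ == "__main__":
    person = Actor(leg_length=0.5, grip_size=0.1)
    world = [
        Thing("chair", height=0.45, width=0.5, flat_top=True),
        Thing("cliff", height=100.0, width=500.0, flat_top=True),
        Thing("pebble", height=0.02, width=0.03, flat_top=False),
    ]
    print(perceived(person, world))
```

For this toy actor, the chair and the pebble show up (as sit-on/climb-on and grasp, respectively), while the cliff, affording nothing to it in this crude model, does not.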

The concept of affordances is also used, in a modified form, in interaction design. Those who are designers, or who understand design, can perhaps appreciate how affordances in nature have to be perceived by animals in order for them to survive. It is perhaps the inverse of the design problem: instead of making the artifact afford action for the user, the animal had to make itself comprehend certain affordances through evo-devo.

Design writer Don Norman makes a point of distinguishing between “real” and “perceived” affordances [5]. That makes sense in the context of his examples, such as human-computer interfaces. But are any affordances actually real? And that gets back into the interface theory of perception: animals perceive affordances, but there’s no guarantee those affordances are veridical.

References
1. Noë, A., Action in Perception, Cambridge, MA: MIT Press, 2004.
2. Hoffman, D.D., “The interface theory of perception: Natural selection drives true perception to swift extinction,” in Dickinson, S., Leonardis, A., Schiele, B., & Tarr, M.J. (Eds.), Object Categorization: Computer and Human Vision Perspectives, Cambridge, UK: Cambridge University Press, 2009, pp. 148-166. PDF.
3. Mark, J.T., Marion, B.B., & Hoffman, D.D., “Natural selection and veridical perceptions,” Journal of Theoretical Biology, vol. 266, 2010, pp. 504-515. PDF.
4. Hawkeswood, T., “Review of the biology and host-plants of the Australian jewel beetle Julodimorpha bakewelli,” Calodema, vol. 3, 2005. PDF.
5. Norman, D., “Affordances and Design.” http://www.jnd.org/dn.mss/affordances_and_design.html

Image credits: iamwilliam, T. Hawkeswood [4], Matrix Reloaded (film), Old Book Illustrations.
Diagrams created by Samuel H. Kenyon.

This is an improved/expanded version of an essay I originally posted February 24th, 2010, on my blog SynapticNulship.
