Polymorphism in Mental Development

Posted in artificial intelligence on March 9th, 2014 by Samuel Kenyon

Adaptability and uninterrupted operation are important features of mental development.

An organism that can’t adapt enough is too rigid and brittle—and dies. The environment will never be exactly as expected (or exactly the same as any other previous time during evolution). Sure, in broad strokes the environment has to be the same (e.g. gravity), and the process of reproduction has limits, but many details change.

During its lifetime (ontogeny), starting from inception, all through embryogeny and through childhood and into adulthood, an organism always has to be “on call.” At the very least, homeostasis is keeping it alive. I’d say that for the most part all other informational control/feedback systems, including the nervous system (which in turn includes the brain), have to be operational at all times. There are brief gaps; for instance, highly altricial animals depend on their parents protecting them as children. And of course there’s the risk when one is asleep. But even in those vulnerable states, organisms do not suddenly change their cognitive architectures or memories in a radically broken way. Otherwise they’d die.

There are a couple computer-ish concepts that might be useful for analyzing and synthesizing these aspects of mental development:

  1. Parameter ranges
  2. Polymorphism

These aren’t the only relevant concepts; I just happen to be thinking about them lately.

Parameter Ranges

Although it may seem obvious, I want to point out that animals such as humans have bodies and skills that do not require exact environmental matches. For example, almost any human can hammer a nail regardless of the exact size and shape of the hammer, or the nail for that matter.

Crows have affordances with sticks and string-like objects. Actually, crows aren’t alone—other bird species have been seen pulling strings for centuries [1]. Anyway, crows use their beaks and feet to accomplish tasks with sticks and strings.

a crow with a stick

a crow incrementally pull-stepping a string to get food

Presumably there is some range of dimensions and appearances of sticks and strings that a crow can:

  1. Physically manipulate
  2. Recognize as an affordance

And note that those items are required regardless of whether crows have insight at all or are just behaving through reinforcement feedback loops [1].

Organisms fit into their niches, but there’s flexibility to the fit. However, these flexibilities are not unlimited—there are ranges. E.g., physically a person’s hand cannot grip beyond a certain width. The discipline of human factors (and ergonomics) studies these ranges of the human body as well as cognitive limits.

For an AI agent, there are myriad places in which there could be a function f with a parameter x and some range R of acceptable values for x. Perhaps the levels of abstraction at which this is useful are exactly those involved with perception of affordances and related skill-knowledge.
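
As a tiny sketch of what I mean (the numbers and the stick framing are made up purely for illustration):

// Hypothetical: is a stick within the range R that this agent can physically
// manipulate and recognize as an affordance?
struct Range {
    double min, max;
    bool contains(double x) const { return x >= min && x <= max; }
};

bool stickAffordance(double stickDiameterMm)
{
    const Range graspable{2.0, 12.0};   // assumed manipulation limits (invented numbers)
    return graspable.contains(stickDiameterMm);
}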

Polymorphism

Polymorphic entities share the same interface. Typically one ends up with a family of types of objects (e.g. classes). Given any object from that family, you can interact with it through the same interface.

In object-oriented programming this is exemplified by calling functions (aka methods) through a pointer to a base class without actually knowing which particular child class the object is an instance of. It works conceptually because all the child classes inherit the same interface.

For example, in C++:

#include <iostream>
#include <memory>

class A {
public:
    virtual ~A() {}          // virtual destructor for safe deletion via a base pointer
    virtual void foo() = 0;
};

class B : public A
{
public:
    void foo() { std::cout << "B foo!\n"; }
};

class C : public A
{
public:
    void foo() { std::cout << "C foo!\n"; }
};

int main()
{
    std::unique_ptr<A> obj(new B);
    obj->foo(); // does the B variation of foo()

    std::unique_ptr<A> obj2(new C);
    obj2->foo(); // does the C variation of foo()

    return 0;
}

That example is of course contrived for simplicity. It’s practical if you have one piece of code which should not need to know the details of what particular kind of object it has—it just needs to do operations defined by an interface. E.g., the main rendering loop for a computer game could have a list of objects to call draw() on, and it doesn’t care what specific type each object is—as long as it can call draw() on each object it’s happy.
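
A rough sketch of that rendering-loop case (the class names here are just illustrative):

#include <iostream>
#include <memory>
#include <vector>

class Drawable {
public:
    virtual ~Drawable() {}
    virtual void draw() = 0;
};

class Player : public Drawable {
public:
    void draw() { std::cout << "drawing player\n"; }
};

class Tree : public Drawable {
public:
    void draw() { std::cout << "drawing tree\n"; }
};

int main()
{
    std::vector<std::unique_ptr<Drawable>> scene;
    scene.emplace_back(new Player);
    scene.emplace_back(new Tree);

    // The render loop doesn't know or care what each object is,
    // only that it can call draw() on it.
    for (auto& obj : scene)
        obj->draw();

    return 0;
}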

So what the hell does this have to do with mental development? Well, it could be a way for a mind to add new functions to existing concepts without blowing away the existing functions. For instance, at a high level, a new skill S4 could be learned and connected to concept C, while skills S1, S2, and S3 remain usable for concept C. In a crisp analogy to programming polymorphism, the particular variant of S that is used in a context would be based on the sub-type of C that is perceived.
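
To make that analogy concrete, here is a minimal sketch (hypothetical names; std::function stands in for a "skill"): skill variants are attached to a concept and dispatched by the perceived sub-type, and adding S4 does not disturb S1, S2, or S3.

#include <functional>
#include <iostream>
#include <map>
#include <string>

// A concept with a family of attached skill variants, keyed by perceived sub-type.
struct Concept {
    std::map<std::string, std::function<void()>> skills;

    void addSkill(const std::string& subtype, std::function<void()> skill) {
        skills[subtype] = skill;   // new variants coexist with old ones
    }

    void act(const std::string& perceivedSubtype) {
        auto it = skills.find(perceivedSubtype);
        if (it != skills.end())
            it->second();          // dispatch on the perceived sub-type
        else
            std::cout << "no skill variant for this sub-type\n";
    }
};

int main()
{
    Concept c;
    c.addSkill("subtype1", []{ std::cout << "skill S1\n"; });
    c.addSkill("subtype2", []{ std::cout << "skill S2\n"; });
    c.addSkill("subtype3", []{ std::cout << "skill S3\n"; });

    // Later, a new skill S4 is learned without blowing away S1, S2, or S3:
    c.addSkill("subtype4", []{ std::cout << "skill S4\n"; });

    c.act("subtype2"); // existing skills still work
    c.act("subtype4"); // the new one is available too

    return 0;
}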

On the other hand, polymorphism could be fuzzier and more like the switch concept I mentioned in a previous post. In other words, the mechanism of the switch could be anything. E.g. a switch could be influenced by global emotional states instead of the active sub-type of C.

One can imagine polymorphism both at reflexive levels and at conceptual, symbol-based levels. Reflexive (or “behavioral”) networks could be designed to effectively arbitrate based on the “type” of an external situation via the mapping of inputs (sensor data) to outputs (actuators). Concept-based mental polymorphism would presumably actually categorize a perception (or perhaps an input from some other mental module), which would be mapped by subcategory to the appropriate version of an action. And maybe it’s not an action, but just the next mental node to visit.


References
[1] Taylor, A., Medina, F., Holzhaider, J., Hearne, L., Hunt, G., & Gray, R. (2010) An Investigation into the Cognition Behind Spontaneous String Pulling in New Caledonian Crows. PLoS ONE, 5(2). DOI: 10.1371/journal.pone.0009345



Image credits:

  1. University of Oxford
  2. David Schulz


Heterarchies and Society of Mind’s Origins

Posted in artificial intelligence on February 4th, 2014 by Samuel Kenyon

Ever wonder how Society of Mind came about? Of course you do.

One of the key ideas of Society of Mind [1] is that at some range of abstraction levels, the brain’s software is a bunch of asynchronous agents. Agents are simple—but a properly organized society of them results in what we call “mind.”

agents

The book Society of Mind includes many sub-theories of how agents might work, structures for connecting them together, memory, etc. Although Minsky mentions some of the development points that led to the book, he makes no explicit references to old papers. The book is copyrighted “1985, 1986.” Rewind back to 1979, “long” before I was born. In the book Artificial Intelligence: An MIT Perspective [2], there is a chapter by Minsky called “The Society Theory of Thinking.” In a note, Minsky summarizes it as:

Papert and I try to combine methods from developmental, dynamic, and cognitive psychological theories with ideas from Artificial Intelligence and computational theories. Freud and Piaget play important roles.

Ok, that shouldn’t be a surprise if you read the later book. But what about heterarchies? In 1971 Patrick Winston described heterarchical organization as [3]:

An interacting community of processes, some narrow experts, others broad generalists, and still others in the role of critics.

tangled

“Heterarchy” is a term that many attribute to Warren McCulloch in 1945 based on his neural research. Although it may have been abandoned in AI, the concept had success in anthropology (according to the intertubes). It is important to note that a heterarchy can be viewed as a parent class to hierarchies, and a heterarchy can contain hierarchies.

In 1973 the student Eugene Freuder, who later became well known for constraint-based reasoning, reported on his “active knowledge” approach to vision in a thesis project called SEER [4]. In one of the funniest papers I’ve read, Freuder warns us that:

this paper will probably vacillate between cryptic and incoherent.

Nevertheless, it is healthy to write things down periodically. Good luck.

And later on that:

SEER never demands that much be done, it just makes a lot of helpful suggestion. A good boss.

This basic structure is not too hairy, I hope.

If you like hair, however, there are enough hooks here to open up a wig salon.

He refers to earlier heterarchy uses in the AI Lab, but says that they are isolated hacks, whereas his project is more properly a system designed to be a heterarchy which allows any number of hacks to be added during development. And this supposedly allows the system to make the “best interactive use of the disparate knowledge it has.”

This supposed heterarchical system:

  • “provides concrete mechanisms for heterarchical interactions and ‘institutionalizes’ and encourages forms of heterarchy like advice”
  • allows integration of modules during development (a user (the programmer) feature)

One aspect of this is the parallelism and whether that was actually better than serial methods. The MIT heterarchy thread eventually turned into Society of Mind, or at least that’s what Patrick Winston indicates in his section introduction [2]:

Minsky’s section introduces his theory of mind in which the basic constituents are very simple agents whose simplicity strongly affects the nature of communication between different parts of a single mind. Working with Papert, he has greatly refined a set of notions that seem to have roots in the ideas that formerly went by the name of heterarchy.

Society of Mind is highly cited but rarely implemented or tested. Reactive (aka behavioral) robotics can be heterarchies, but they are either ignored by AI or relegated to the bottom of three-layer architectures for robots. The concepts of modularity and parallel processing have been folded into general software engineering paradigms.

But I wonder if maybe the heterarchy concept(s) for cognitive architectures were abandoned too quickly. The accidents of history may have already incorporated the best ideas from heterarchies into computer science; however, I am not yet sure about that.

References

[1] M. Minsky. The Society of Mind. New York: Simon and Schuster, 1986, pp. 249-250.
[2] P.H. Winston & R.H. Brown, Eds., Artificial Intelligence: An MIT Perspective, vol. 1, MIT Press, 1979.
[3] P.H. Winston, “Heterarchy in the M.I.T. Robot.” MIT AI Memo Vision Flash 8, March 1971.
[4] E.C. Freuder, “Active Knowledge.” MIT AI Memo Vision Flash 53, Oct. 1973.


Image Credits

  1. Samuel H. Kenyon’s mashup of Magritte and Agent Smith of the Matrix trilogy
  2. John Zeweniuk

 


Evaluating Webots

Posted in artificial intelligence, robotics on February 4th, 2014 by Samuel Kenyon

I’m trying to find a better simulator than Breve for robots and 3D physics world creation.

I have been examining the Webots simulation environment. It seems pretty useful since I could write controllers in C++ and it comes with several robot models out of the box. I also like the scene graph (or “scene tree” as they call it) approach for environments; they use VRML97, which is obsolete, but at least it’s a well-known standard. I only played with the interface enough to add a sphere and a box, but it seems good enough so far, and a lot easier than doing it completely programmatically and/or completely from scratch with raw data files. I have made 3D models from scratch with data files in the past, and it was not that efficient (compared to some ideal GUI) except for tweaking exact numbers.

They have some Nao (aka “NAO”) robot models and it would be awesome to use that for some mental development research. I’m thinking about affordances, and certainly the Nao with its 25 DOF and lots of sensors is more than sufficient for interesting affordances with real world (or simulated 3D world) environments. It may actually be overkill…

simulated Nao looks at a brick box

Not that I have access to a real Nao, although I programmed two little test scripts last year on an actual Nao using Choregraphe, a primarily visual programming tool. Webots can run a Nao server so you can actually hook Choregraphe to the Webots simulation (Choregraphe has a sim, but just of the robot model, not of environmental interactions). Unfortunately I couldn’t try this out as it’s blocked for free users (the screenshot below shows an attempt to use the robot soccer world).

naoqisim denied!

And I just realized that if I write a Webots Nao controller, there’s no documentation or obvious way that I can see to find out the exact actuator names to pass to wb_robot_get_device(), and the demo doesn’t show all motors (or maybe they haven’t implemented all motors?). [Update: I have been informed that you can get the device tags from the Robot Window, which can be made visible with the menu (Robot -> Show Robot Window) or by double-clicking on the robot in the sim view. Not as good as a text list, but at least the info is there.] Making motion files manually would be a pain as well. Maybe I will end up making a simpler robot model from scratch.
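
For reference, here is a minimal controller sketch of the lookup in question, using the Webots C API from C++; the device name string "HeadYaw" is a guess on my part, which is exactly the problem described above.

#include <webots/robot.h>
#include <webots/motor.h>
#include <cstdio>

#define TIME_STEP 32

int main() {
  wb_robot_init();

  // "HeadYaw" is an assumed device name -- finding the real names is the issue above.
  WbDeviceTag head_yaw = wb_robot_get_device("HeadYaw");
  if (head_yaw == 0) {
    std::printf("no device with that name\n");
  } else {
    wb_motor_set_position(head_yaw, 0.5);  // command the joint to 0.5 rad
  }

  while (wb_robot_step(TIME_STEP) != -1) {
    // main control loop
  }

  wb_robot_cleanup();
  return 0;
}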

The lack of access to the Supervisor is also making me question using this for my non-funded research. I might try running some experiments in it, and just see how far I can go without a Supervisor.


Hyping Nonsense: What Happens When Artificial Intelligence Turns On Us?

Posted in culture, transhumanism on January 23rd, 2014 by Samuel Kenyon

The user(s) behind the G+ account Singularity 2045 made an appropriately skeptical post today about the latest Machines-versus-Humans “prediction,” specifically an article “What Happens When Artificial Intelligence Turns On Us” about a new book by James Barrat.

As S2045 says:

Don’t believe the hype. It is utter nonsense to think AI or robots would ever turn on humans. It is a good idea to explore in novels, films, or knee-jerk doomsday philosophizing because disaster themes sell well. Thankfully the fiction or speculation will never translate to reality because it is based upon a failure to recognize how technology erodes scarcity. Scarcity is the root of all conflict.

Smithsonian even includes a quote by the equally clueless Eliezer Yudkowsky:

In the longer term, as experts in my book argue, A.I. approaching human-level intelligence won’t be easily controlled; unfortunately, super-intelligence doesn’t imply benevolence. As A.I. theorist Eliezer Yudkowsky of MIRI [the Machine Intelligence Research Institute] puts it, “The A.I. does not love you, nor does it hate you, but you are made of atoms it can use for something else.” If ethics can’t be built into a machine, then we’ll be creating super-intelligent psychopaths, creatures without moral compasses, and we won’t be their masters for long.

In the G+ comments you can see some arguments about the evidence for or against the prediction. I would like to add a couple arguments in support of Singularity 2045’s conclusion (but not necessarily endorsing his specific arguments):

  1. Despite “future shock” (before Kurzweil and Vinge there was Toffler) from accelerating change in certain avenues, most of these worries about machines-vs-humans battles are so fictional because they assume a discrete transition point: before the machines appeared and after. The only way that could happen is if there were a massive planetary invasion of intelligent robots from another planet. In real life things happen over a period of time with transitions and various arbitrary (e.g. because of politics) diversions and fads…despite any accelerating change.
  2. We have examples of humans living in partial cooperation and simultaneously partial conflict with other species. Insects outnumber us. Millions of cats and dogs live in human homes and get better treatment than the poor and homeless in the world. Meanwhile, crows and parrots are highly intelligent animals often living in symbiosis with humans…except when they become menaces.

If we’re going to map fiction to reality, Michael Crichton techno-thrillers are a bit closer to real technological disasters, which are local, specific incidents resulting from the right mixture of human errors and coincidence (and this happens in real life sometimes, for instance nuclear reactor disasters). And sometimes those errors are far apart at first, like somebody designing a control panel badly, which assists in a bad decision by an operator 10 years later during an emergency.

And of course I’ve already talked about the Us-versus-Them dichotomy and the role of interfaces in human-robot technology in my paper “Would You Still Love Me If I Was A Robot?”

Addendum

I doubt we will have anything as clear-cut as an us-vs-them new species. And if we maintain civilization (e.g. not the anti-gay, anti-atheist, witch-hunting segments), then new variations would not be segregated or given fewer rights, and vice versa: they would not segregate us or remove our human rights.

As far as I know, there is no such thing as a natural species on Earth that “peacefully coexists.” This may be the nature of the system, and that’s certainly easy to see when looking at the evolutionary arms races constantly happening. Anyway my point is that any attempt to appeal to nature or the mythical peaceful caveman is not the right direction. The fact that humans can even imagine never-ending peace and utopia seems to indicate that we have started to surpass nature’s “cold equations.”


2013: Postmortem

Posted in meta on January 19th, 2014 by Samuel Kenyon

This is a personal postmortem (aka retrospective), not a report on the world at-large.

crossing a stream in the Amazon

What Went Right

I accomplished several things of a wider diversity than I did in 2012, particularly new-to-me activities.

Riding horses in Iceland

Highlights:

  • Sky-dived for the first time
  • Went outdoor top-rope rock climbing for the first time
  • Submitted two artificial intelligence papers to conferences/symposia, one of which was accepted
  • Went to aforementioned symposium and presented a poster for it
  • Wrote 25 blog posts
  • Started working on developmental systems
  • Started a new job (technically started mid Dec 2012)
  • Wrote a new, improved version of my short film screenplay, Enough to be Dangerous
  • Started working on a horror film screenplay
  • Acted in a music video for Wake No More
  • Acted in a book video for Kissing Oscar Wilde
  • Acted in a BU short comedic film called South x Southeast (screened in Dec 2012, but we’ll count it since the video appeared online in 2013)
  • Explored Iceland
  • Explored Ecuador
    • Including rising to the highest elevation I’ve ever been at
    • Explored (via a guide) the Amazon rainforest
      • Monkeys!
      • I ate a grub (it wasn’t raw–it was cooked (smoked) by the natives)
  • Learned to ride a horse (in Iceland, also went riding in Ecuador in the Andes mountains)
  • Explored the entire Freedom Tunnel in New York
  • Went sailing for the first time (part of a company forced-fun day, but it was fairly interesting)
  • Participated in No Pants Subway Ride

Laundromat cafe, Reykjavik, Iceland

A volcanic crater we climbed up to (Iceland)

Kayaking in the volcanic crater of Quilotoa, Ecuador with Emily

Me holding a freshly opened jungle coconut (Amazon rainforest) (photo by Emily Durrant)

What Went Wrong

AI Paper Number 2

The second paper I wrote was obviously not ready (both the paper and the research), but I submitted it anyway. However, I got excellent feedback in the rejection.

Not Enough Personal Coding

Although I worked a lot on non-work projects, I didn’t write as much code as I wanted. Still, I think there was an increase from 2012.

Not Enough Hardware

Aside from testing some little programs I wrote on a NAO robot, I didn’t do any work with real robots in 2013. I need to improve the balance of theory to implementation with my robotic and computational intelligence ideas.

Not Enough Art

I didn’t make any new drawings except for a few doodles. Also although I’ve written some notes for new music compositions, I didn’t actually generate any new music last year…although I did finally post an old composition on Soundcloud.

Skydiving

I didn’t like being squashed in a painful/awkward position on the floor of an overfull plane, without even any handles to pull myself up out of that position. It kind of ruined the whole experience until later on, after the parachute was out and we were gliding—after tumbling ridiculously on exiting the plane and almost going unconscious from lack of breathing. Apparently the instructor was supposed to tell me about making sure to breathe (the experience is quite new if you are not used to having your face blasted with air as you plummet from the highest location you’ve ever jumped from), but he failed to do so. I probably won’t go back to that airfield, and if I do, I will make sure I’m not getting on the plane unless I can crouch or sit in a safe, un-strained, non-distracting position.


Growing Robot Minds

Posted in artificial intelligence on January 11th, 2014 by Samuel Kenyon

One way to increase the intelligence of a robot is to train it with a series of missions, analogous to the missions (aka levels) in a video game.

mission (aka levels)

In a developmental robot, the training would not be simply learning—its brain structure would actually change. Biological development shows some extremes that a robot could go through, like starting with a small seed that constructs itself, or creating too many neural connections and then in a later phase deleting a whole bunch of them.

As another example of development vs. learning, a simple artificial neural network is trained by changing its weights over a series of training inputs (with error correction if it is supervised). In contradistinction, a developmental system changes itself structurally. It would be like growing completely new nodes, network layers, or even entirely new networks during each training level.
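
A toy sketch of the distinction (my own, not any particular architecture):

#include <cstddef>
#include <vector>

// Toy network: learning adjusts existing weights; development changes the structure itself.
struct Layer {
    std::vector<double> weights;
};

struct Network {
    std::vector<Layer> layers;
};

// Learning: nudge the weights that are already there (structure untouched).
void learn(Network& net, double delta)
{
    for (auto& layer : net.layers)
        for (auto& w : layer.weights)
            w += delta;
}

// Development: grow an entirely new layer (a structural change).
void develop(Network& net, std::size_t width)
{
    net.layers.push_back(Layer{std::vector<double>(width, 0.0)});
}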

Or you can imagine the difference between decorating a skyscraper (learning) and building a skyscraper (development).

construction (development) vs. interior design (learning)

What Happens to the Robot Brain in these Missions?

Inside a robotic mental development mission or stage, almost anything could go on, depending on the mad scientist who made it. It could be a series of timed, purely internal structural changes, for instance (see the code sketch after this list):

  1. Grow set A[] of mental modules
  2. Grow mental module B
  3. Connect B to some of A[]
  4. Activate dormant function f() in all modules
  5. Add in pre-made module C and connect it to all other modules
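
Here is a rough code sketch of that list (my own invented module/graph representation, not a real architecture):

#include <memory>
#include <string>
#include <vector>

struct Module {
    std::string name;
    bool dormantFunctionActive = false;
    std::vector<Module*> connections;
};

struct Mind {
    std::vector<std::unique_ptr<Module>> modules;

    Module* grow(const std::string& name) {
        modules.push_back(std::unique_ptr<Module>(new Module));
        modules.back()->name = name;
        return modules.back().get();
    }

    void connect(Module* from, Module* to) { from->connections.push_back(to); }
};

void developmentalStage(Mind& mind)
{
    std::vector<Module*> A;                       // 1. grow set A[] of mental modules
    for (int i = 0; i < 3; ++i)
        A.push_back(mind.grow("A" + std::to_string(i)));

    Module* B = mind.grow("B");                   // 2. grow mental module B

    mind.connect(B, A[0]);                        // 3. connect B to some of A[]
    mind.connect(B, A[1]);

    for (auto& m : mind.modules)                  // 4. activate dormant function f() in all modules
        m->dormantFunctionActive = true;

    Module* C = mind.grow("C");                   // 5. add pre-made C and connect it to all other modules
    for (auto& m : mind.modules)
        if (m.get() != C)
            mind.connect(C, m.get());
}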

Instead of (or in addition to) pre-planned timed changes, the stages could be based in part on environmental interactions. And I think that is actually a possibly useful tactic to make a robot adjust to its own morphology and the particular range of environments that it must operate and survive in. And that makes the stages more like the aforementioned missions one has in computer games.

Note that learning is most likely going to be happening at the same time (unless learning abilities are turned off as part of a developmental level). In the space of all possible developmental robots, one would expect some mental change gray areas somewhere between development and learning.

Given the input and triggering roles of the environment, each development level may require a special sandbox world. The body of the robot may also undergo changes during each level.

The ordering of the levels/sandboxes would depend on what mental structures are necessary going into each one.

A Problem

One problem that I have been thinking about is how to prevent cross-contamination of mental changes. One mission might nullify a previous mission.

For example, let’s say that a robot can now survive in Sandbox A after making it through Mission A. Now the robot proceeds through Mission B in Sandbox B. You would expect the robot to be able to survive in a bigger new sandbox (e.g. the real world) that has elements of both Sandbox A and Sandbox B (or requires the mental structures developed during A and B). But B might have messed up A. And now you have a robot that’s good at B but not A, or even worse not good at anything.

Imagine some unstructured cookie dough. You can form a blob of it into a special shape with a cookie cutter.

cookie cutter

But applying several cookie cutters in a row might result in an unrecognizable shape, maybe even no better than the original blob.

As a mathematical example, take a four-stage developmental sequence where each stage is a different function, numbered 1-4. This could be represented as:

y = f_{4}(f_{3}(f_{2}(f_{1}(x))))

where x is the starting cognitive system and y is the final resulting cognitive system.

This function composition is not commutative, e.g.

f_{4}\circ f_{3}\circ f_{2}\circ f_{1} \neq f_{1}\circ f_{2}\circ f_{3}\circ f_{4}
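
As a toy illustration of the non-commutativity (numbers standing in for cognitive systems; my own example): let f_{1}(x) = x + 1 stand for “grow one module” and f_{2}(x) = 2x for “duplicate every module.” Then f_{2}(f_{1}(x)) = 2x + 2, while f_{1}(f_{2}(x)) = 2x + 1, so swapping even those two stages yields a different resulting system.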

A Commutative Approach

There is a way to make an architecture and transform function type that is commutative. You might think that will solve our problem; however, it only works with certain constraints that we might not want. To explain, I will show you an example of a special commutative configuration.

We could require all the development stages to have a minimal required integration program. I.e., f1(), f2(), etc. are all sub-types of m(), the master function; in object-oriented terms, they would all be subclasses of a common master class.

The example here would have each mission result in a new mental module. The required default program would automatically connect this module with the same protocol to all other modules.
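
As a rough stand-in for the original diagram (my own sketch, hypothetical class names): every stage funnels through the same master integration routine m(), which wires each new module to every existing module with the same protocol.

#include <memory>
#include <string>
#include <vector>

struct Module {
    std::string name;
    std::vector<Module*> links;
};

class Mind {
public:
    // m(): the master integration program shared by every developmental stage f1(), f2(), ...
    Module* integrate(const std::string& name)
    {
        std::unique_ptr<Module> mod(new Module);
        mod->name = name;
        for (auto& existing : modules) {            // identical protocol for every module:
            existing->links.push_back(mod.get());   // link both ways to everything already there
            mod->links.push_back(existing.get());
        }
        modules.push_back(std::move(mod));
        return modules.back().get();
    }

private:
    std::vector<std::unique_ptr<Module>> modules;
};

// Because every stage does exactly the same integration step, running the
// stages in any order produces the same fully connected set of modules.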

So in this case:

f_{4}\circ f_{3}\circ f_{2}\circ f_{1} = f_{1}\circ f_{2}\circ f_{3}\circ f_{4}

I don’t think this is a good solution since it seriously limits the cognitive architecture. We would not even be able to build a simple layered control system where each higher layer depends on the lower layers. We cannot have arbitrary links and different types of links between modules. And it does not address how conflicts are arbitrated for outputs.

However, we could add in some dynamic adaptive interfaces in each module that apply special changes. For instance, module type B might send out feelers to sense the presence of module type A, and even if A is added afterwards, eventually B will find it after all the modules have been added. But, we will not be able to actually unleash the robot into any of the environments it should be able to handle until the end, and this is bad. It removes the power of iterative development. And it means that a mission associated with a module will be severely limited.

The most damning defect with this approach is that there’s still no guarantee that a recently added module won’t interfere with previous modules as the robot interacts in a dynamic world.

A Pattern Solution

A non-commutative solution might reside in integration patterns. These would preserve the functionality from previous stages as the structure is changed.

a multipole switch to illustrate mental switching

For instance, one pattern might be to add a switching mechanism. The resulting robot mind would be partially modal—in a given context, it would activate the most appropriate developed part of its mind, but not all parts at the same time.

A similar pattern could be used for developing or learning new skills—a new skill doesn’t overwrite previous skills; it is instead added to the relevant set of skills, which are triggered or selected by other mechanisms.
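
A minimal sketch of this pattern (hypothetical types; the selection mechanism is deliberately left abstract): developed skills accumulate in a set, and a switch selects which one is active in the current context without deleting the others.

#include <cstddef>
#include <functional>
#include <vector>

using Skill = std::function<void()>;

class SkillSwitch {
public:
    // Development/learning adds a new skill; nothing already present is overwritten.
    std::size_t addSkill(Skill s)
    {
        skills.push_back(std::move(s));
        return skills.size() - 1;
    }

    // The "multipole switch": some other mechanism (context, emotion, sub-type)
    // decides which pole is active.
    void select(std::size_t index)
    {
        if (index < skills.size())
            active = index;
    }

    void act() const
    {
        if (!skills.empty())
            skills[active]();
    }

private:
    std::vector<Skill> skills;
    std::size_t active = 0;
};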


Image credits:

  1. Nintendo
  2. Georgia State University Library via Atlanta Time Machine
  3. dezeen
  4. crooked brains
  5. diagram by the author
  6. MyDukkan


Content/Internalism Space and Interfacism

Posted in artificial intelligence, interfaces, philosophy on December 29th, 2013 by Samuel Kenyon

Whenever a machine or moist machine (aka animal) comes up with a solution, an observer could imagine an infinite number of alternate solutions. The observed machine, depending on its programming, may have considered many possible options before choosing one. In any case, we could imagine a 2D or 3D (or really any dimensionality) mathematical space in which to place all these different solutions.

Fake example of a design/solution or analysis space.

Of course, you have to be careful what the axes are. What if you chose the wrong variables? We could come up with dimensions for any kind of analysis or synthesis. I want to introduce here one such space, which illuminates particular spectra, offering a particular view into the design space of cognitive architectures. In some sense, it’s still a solution space—as a thinking system I am exploring the possible solutions for making artificial thinking systems.

Keep in mind that this landscape is only one view, just one scenic route through explanationville—or is it designville?

This photo could bring to mind several relevant concepts: design vs. analysis, representations, and the illusion of reality in reflections

Content/Internalism Space

In the following diagram I propose that these two cognitive spectra are of interest and possibly related:

  1. Content vs. no content
  2. Internal cognition vs. external cognition

Content vs. Internalism Space

What the Hell is “Content”?

I don’t have a very good definition of this, and so far I haven’t seen one from anybody else despite its common usage by philosophers. One aspect (or perhaps container) of content is representation. That is a bit easier to comprehend—an informational structure can represent something in the real world (or represent some other informational structure). It may seem obvious that humans have representations in their minds, but that is debatable. Some, such as Hutto and Myin, suggest that human minds are primarily without content, and only a few faculties require content [1]:

Some cognitive activity—plausibly, that associated with and dependent upon the mastery of language—surely involves content. Still, if our analyses are right, a surprising amount of mental life (including some canonical forms of it, such as human visual experience) may well be inherently contentless.

And the primary type of content that Hutto and Myin try to expunge is representational. It’s worth mentioning that representation can be divorced from the Computational Theory of Mind. Nothing here goes against the mind as computation. If you could pause a brain, you could point to various informational states, which in turn compose structures, and say that those structures are “representations.” But they don’t necessarily mean anything—they don’t have to be semantic. This leads to aboutness…

Another aspect of content is aboutness. “Aboutness” is an easier word to use in place of the philosophical term intentionality; “intentionality” has a different everyday meaning, which can cause confusion [2]. We think about stuff. We talk about stuff. External signs are about stuff. And we all seem to have a lot of overlapping agreement on what stuff means, otherwise we wouldn’t be able to communicate at all and there wouldn’t be any sense of logic in the world.

So does this mean we all have similar representations? Does a stop sign represent something? Is that representation stored in all of our brains, thus we all know what a stop sign means? And what things would we not understand without in-brain representations? For instance, consider some sensory stimulus that sets off a chain reaction resulting in a particular behavior that most humans share. Is that internal representation, or are dynamic interfaces, however complicated, something different?

Internal vs. External

This is about the prevailing cognitive science assumption that anything of interest cognitively is neural. Indeed, most would go even further than neural and limit themselves just to the brain. The brain is just one part of your nervous system. Although human brain evolution and development seem to be the cause of our supposed mental advantages over other animals, we should be careful not to discard all the supporting and/or interacting structures. If we take the magic glass elevator a bit down and sideways, we might want to consider our insect cousins in which the brain is not the only critical part of the nervous system—and insects can still operate in some fashion for days or weeks without their brains.

I’m not saying here that the brain focus is wrong; I’m merely saying that one can have a spectrum. For instance, a particular ALife experiment could be analyzed from the point of view of anywhere on that axis. Or you could design an ALife situation on any point, e.g. just by focusing on the internal controller that is analogous to a brain (internalist) vs. focusing on the entire system of brain-body-environment (externalist).

Interfacism

Since there has to be an “ism” for everything, there is of course representationalism. Another philosophical stance that is sometimes pitted against representationalism is direct realism.

Direct realism seems to be kind of sloppy. It could simply mean that at some levels of abstraction in the mind, real world objects are experienced as whole objects, not as the various mental middlemen which were involved in constructing the representation of that object. E.g., we don’t see a chair by consciously and painstakingly sorting through various raw sensory data chunks—we have an evolved and developed system for becoming aware of a chair as an object “directly.”

Or, perhaps, in an enactivism or dynamic system sense, one could say that regardless of information processing or representations, real world objects are the primary cause of information patterns that propagate through the system which lead to experience of the object.

My middle ground between direct and indirect realism would, perhaps, be called “interfacism,” which is a form of representationalism that is enactivism-compatible. Perhaps most enactivists already think that way, although I don’t recall seeing any enactivist descriptions of mental representation in terms of interfaces.

What I definitely do not concede is any form of cognitive architecture which requires veridical, aka truthful, accounts anywhere in the mind. What I do propose is that any concept of an organism can be seen as interactions. The organism itself is a bunch of cellular interactions, and that blob interacts with other blobs and elements of the environment, some of which may be tools or cognitively-extensive information processors. Whenever you try to look at a particular interaction, there is an interface. Zooming into that interface reveals yet more interfaces, and so on. To say anything is direct, in that sense, is false.

For example, an interfacism description of a human becoming aware of a glass of beer would acknowledge that the human as an animate object and the beer glass as an inanimate object are arbitrary abstractions or slices of reality. At that level in that slice, we can say there is an interface between the human and the glass of beer, presumably involving the mind attributed to the human.

Human-beer interface

But, if we zoom into the interface, there will be more interfaces.

Zooming in to the human-beer interface reveals more interfaces.

And semantics will probably require links to other things, for instance we don’t just see what is in front of us—we can be primed or biased, or hallucinate, or dream, etc. How sensory data comes to mean anything at all almost certainly involves evolutionary history and ontogeny (life history) and current brain states at least as much as any immediate perceptual trigger. And our perception is just a contraption of evolution, so we aren’t really seeing true reality ever—it’s a nonsensical concept.

I think interfacism is possibly a good alternate way to look at cognition, be it wide or narrow—at any given cognitive granularity, there is no direct connection between two “nodes” or objects. There is just an interface, and anything “direct” is at a level below, recursively. It’s also compatible with non-truthful representations and/or perception.

Some might say that representations have to be truthful or that there are representations, for instance in animal behaviors, because there is some truthful mapping between the real world and the behavior. With an interface point of view we can throw truth out the window. Mappings can be arbitrary. There may be consistent and/or accurate mappings. But they don’t necessarily have to be truthful in any sense aside from that.


References
[1] D. D. Hutto and E. Myin, Radicalizing Enactivism: Basic Minds Without Content. Cambridge, Mass.: MIT Press, 2013.
[2] D. C. Dennett, Intuition Pumps and Other Tools for Thinking. W.W. Norton & Company, 2013.


Image credits
Figures by Samuel H. Kenyon.
Photo by Dimosthenis Kapa


AAAI FSS-13 and Symbol Grounding

Posted in artificial intelligence on November 19th, 2013 by Samuel Kenyon

At the AAAI 2013 Fall Symposia (FSS-13), I realized that I was not prepared to explain certain topics quickly to those who are specialists in various AI domains and/or don’t delve into philosophy of mind issues. Namely I am thinking of enactivism and embodied cognition.

my poster

But something even easier (or so I thought) that threw up communication boundaries was The Symbol Grounding Problem. Even those in AI who have a vague knowledge of the issue will often reject it as a real problem. Or maybe Jeff Clune was just testing me. Either way, how can one give an elevator pitch about symbol grounding?

So after thinking about it this weekend, I think the simplest explanation is this:

Symbol grounding is about making meaning intrinsic to an agent as opposed to parasitic meaning provided by an external human researcher or user.

And really, maybe it should not be called a “problem” anymore. It’s only a problem if somebody claims that systems have human-like knowledge when in fact they do not have any intrinsic meaning. Most applications, such as NLP programs and semantic graphs / networks, do not have intrinsic meaning. (I’m willing to grant them a small amount of intrinsic meaning if that meaning depends on the network structure itself.)

Meanwhile, there is in fact grounded knowledge of some sort in research labs. For instance, AI systems in which perceptual invariants are registered as objects are making grounded symbols (e.g. the work presented by Bonny Banerjee). That type of object may not meet some definitions of “symbol,” but it is at least a sub-symbol which could be used to form full mental symbols.

From Randall C. O’Reilly, Thomas E. Hazy, and Seth A. Herd, “The Leabra Cognitive Architecture: How to Play 20 Principles with Nature and Win!”

Randall O’Reilly from University of Colorado gave a keynote speech about some of his computational cognitive neuroscience in which there are explicit mappings from one level to the next. Even if his architectures are wrong as far as biological modeling, if the lowest layer is in fact the simulation he showed us, then it is symbolically grounded as far as I can tell. The thing that is a “problem” in general in AI is to link the bottom / middle to the top (e.g. natural language).

I think that the quick symbol grounding definition above (in italics) is enough to at least establish a thin bridge between various AI disciplines and skeptics of symbol grounding. Unfortunately, I also learned this weekend that hardly anybody agrees on what a “symbol” is.

Symbols?

Photo taken from the Westin hotel. I just noticed that Gary Marcus snuck into my photo.

Gary Marcus by some coincidence ended our symposium with a keynote that successfully convinced many people there that symbolic AI never died and is in fact present in many AI systems even if they don’t realize it, and is necessary in combination with other methods (for instance connectionist ML) at the very least for achieving human-like inference. Marcus’s presentation was related to some concepts in his book The Algebraic Mind (which I admit I have not read yet). There’s more to it like variable binding that I’m not going to get into here.

As far as I can tell, my concept of mental symbols is very similar to Marcus’s. I thought I was in the traditional camp in that regard. And yet his talk spawned debate on the very definition of “symbol”. Also, I’m starting to wonder if I should be careful about “subsymbolic” vs. “symbolic” structures. Two days earlier, when I had asked a presenter about the symbols in his research, he flat out denied that his object representations based on invariants were “symbols.”

So…what’s the elevator pitch for a definition of mental symbols?
