AAAI FSS-13 and Symbol Grounding

Posted in artificial intelligence on November 19th, 2013 by Samuel Kenyon

At the AAAI 2013 Fall Symposia (FSS-13), I realized that I was not prepared to explain certain topics quickly to those who specialize in various AI domains and/or don’t delve into philosophy of mind issues. Namely, I am thinking of enactivism and embodied cognition.

My poster.

But something even easier (or so I thought) that threw up communication boundaries was The Symbol Grounding Problem. Even those in AI who have a vague knowledge of the issue will often reject it as a real problem. Or maybe Jeff Clune was just testing me. Either way, how can one give an elevator pitch about symbol grounding?

So after thinking about it this weekend, I think the simplest explanation is this:

Symbol grounding is about making meaning intrinsic to an agent as opposed to parasitic meaning provided by an external human researcher or user.

And really, maybe it should not be called a “problem” anymore. It’s only a problem if somebody claims that systems have human-like knowledge when in fact they do not have any intrinsic meaning. Most applications, such as NLP programs and semantic graphs / networks, do not have intrinsic meaning. (I’m willing to grant them a small amount of intrinsic meaning if that meaning depends on the network structure itself.)
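
To make the “parasitic meaning” point concrete, here is a toy sketch (in Python; the tiny network and its relation names are invented, not any particular system): each node in a semantic network is defined only in terms of other human-readable labels, so whatever meaning comes out is injected by the person reading it.

```python
# A minimal sketch of parasitic meaning: every node is "defined" only by other
# labels, so the interpretation lives in the human reader, not in the system.

semantic_net = {
    "dog":    {"is_a": "mammal", "has": "fur", "makes_sound": "bark"},
    "mammal": {"is_a": "animal", "has": "warm_blood"},
    "bark":   {"is_a": "sound", "produced_by": "dog"},
}

def describe(symbol: str) -> str:
    """Return everything the network 'knows' about a symbol."""
    relations = semantic_net.get(symbol, {})
    return "; ".join(f"{rel} -> {target}" for rel, target in relations.items())

# The output is just more ungrounded tokens; nothing connects "dog" to any
# sensor, body, or innate drive of the system itself.
print(describe("dog"))   # is_a -> mammal; has -> fur; makes_sound -> bark
```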

Meanwhile, there is in fact grounded knowledge of some sort in research labs. For instance, AI systems in which perceptual invariants are registered as objects are making grounded symbols (e.g. the work presented by Bonny Banerjee). That type of object may not meet some definitions of “symbol,” but it is at least a sub-symbol which could be used to form full mental symbols.
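
As a rough illustration of that idea (and only an illustration; this is not Banerjee’s actual method, and the class, distance measure, and threshold below are my own invented stand-ins), registering perceptual invariants can be thought of as assigning a stable identifier to any recurring pattern in the agent’s own sensory stream:

```python
# Hypothetical sketch: recurring sensory patterns get their own identifiers,
# so each identifier (a grounded sub-symbol) is anchored in the statistics of
# the agent's input rather than in labels supplied by a human.

import math

class InvariantRegistry:
    def __init__(self, tolerance=0.5):
        self.tolerance = tolerance
        self.prototypes = []            # one prototype feature vector per sub-symbol

    def _distance(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def observe(self, percept):
        """Return the sub-symbol id for this percept, creating one if it is novel."""
        for sid, proto in enumerate(self.prototypes):
            if self._distance(percept, proto) < self.tolerance:
                return sid              # an already-registered invariant
        self.prototypes.append(list(percept))
        return len(self.prototypes) - 1 # a new grounded sub-symbol

registry = InvariantRegistry()
print(registry.observe([1.0, 0.1]))  # 0 : first invariant encountered
print(registry.observe([1.1, 0.0]))  # 0 : same invariant, same sub-symbol
print(registry.observe([5.0, 5.0]))  # 1 : a different invariant
```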

From Randall C. O’Reilly, Thomas E. Hazy, and Seth A. Herd, “The Leabra Cognitive Architecture: How to Play 20 Principles with Nature and Win!”

Randall O’Reilly from the University of Colorado gave a keynote about some of his computational cognitive neuroscience, in which there are explicit mappings from one level to the next. Even if his architectures are wrong as biological models, if the lowest layer really is the simulation he showed us, then the system is symbolically grounded as far as I can tell. The general “problem” that remains in AI is linking the bottom and middle to the top (e.g. natural language).

I think that the quick symbol grounding definition above (in italics) is enough to at least establish a thin bridge between various AI disciplines and skeptics of symbol grounding. Unfortunately, I also learned this weekend that hardly anybody agrees on what a “symbol” is.

Symbols?

Photo taken from the Westin hotel. I just noticed that Gary Marcus snuck into my photo.

Gary Marcus, by some coincidence, ended our symposium with a keynote that convinced many people there that symbolic AI never died: it is present in many AI systems even if their builders don’t realize it, and it is necessary, in combination with other methods (for instance connectionist machine learning), at the very least for achieving human-like inference. Marcus’s presentation was related to concepts from his book The Algebraic Mind (which I admit I have not read yet). There’s more to it, such as variable binding, that I’m not going to get into here.

As far as I can tell, my concept of mental symbols is very similar to Marcus’s. I thought I was in the traditional camp in that regard. And yet his talk spawned debate on the very definition of “symbol”. Also, I’m starting to wonder if I should be careful about “subsymbolic” vs. “symbolic” structures. Two days earlier, when I had asked a presenter about the symbols in his research, he flat out denied that his object representations based on invariants were “symbols.”

So…what’s the elevator pitch for a definition of mental symbols?


Symbol Grounding and Symbol Tethering

Posted in artificial intelligence on April 3rd, 2013 by Samuel Kenyon

Philosopher Aaron Sloman claims that symbol grounding is impossible. I say it is possible, indeed necessary, for strong AI. Yet my own approach may be compatible with Sloman’s.

Sloman equates “symbol grounding” with concept empiricism, thus rendering it impossible. However, I don’t see the need to equate all symbol grounding to concept empiricism. And what Sloman calls “symbol tethering” may be what I call “symbol grounding,” or at least a type of symbol grounding.

Firstly, as Sloman says about concept empiricism [1]:

Kant refuted this in 1781, roughly by arguing that experience without prior concepts (e.g. of space, time, ordering, causation) is impossible.

Well, that’s fine. My interpretation of symbol grounding never involved the baggage of bootstrapping everything from experience. Innate concepts, which are triggered by machinations in phylogenetic space, can contribute to grounding.

Sloman also says that concept empiricism was [2]:

finally buried by 20th century philosophers of science considering the role of theoretical terms in science (e.g. “electron”, “gene”, “valence”, etc.) that are primarily defined by their roles in explanatory theories.

Here is the only inkling of why this would take down symbol grounding: abstract concepts might actually be defined in terms of each other. As Sloman explains [3]:

Because a concept can be (partially) defined implicitly by its role in a powerful theory, and therefore some symbols expressing such concepts get much of their meaning from their structural relations with other symbols in the theory (including relations of derivability between formulae including those symbols) it follows that not all meaning has to come from experience of instances, as implied by the theory of concept empiricism

Theory concept tethering?

On the other hand, maybe theory concepts are grounded, but in a very tenuous way. Here is a metaphor, albeit not a great one: imagine a family of hot air balloons with links between them, and the group is floating free. However, they aren’t quite free, because a single rope ties one of them, and indirectly all of them, to the ground. Sloman seems to be saying something like that, where the rope is the mechanism of how well a theory concept models something, hence the term “symbol tethering.” Whatever the case, I don’t see why all symbols have to be like theory concepts.

If the goal is to understand how human minds create and use knowledge, then one is led down the road of grounding. Otherwise you’re playing Wacky Mad Libs or tainting an experiment with an observer’s human knowledge. Imagine if you could pause a human (or some other animal) and have access to the layer or point-of-view of internal mental symbols. You might then ask, what is the genealogy of a particular symbol—what symbols are its ancestors? The path to embodiment-derived symbols or innate symbols may be long and treacherous, yet there it is. And if the path stops short, then you have chosen a symbol which is somehow usable in a biological mind, yet is completely encapsulated in a self-referential subsystem.
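
Here is a toy sketch of that genealogy check (the symbol names and the base set are invented for illustration, not a claim about real mental contents): each symbol records the symbols it was built from, and a symbol counts as grounded only if some ancestral path bottoms out in an embodiment-derived or innate symbol. A closed self-referential loop is a path that stops short.

```python
# Toy genealogy of mental symbols: grounded if some ancestral chain reaches an
# embodiment-derived or innate base symbol; a self-referential loop does not.

ancestry = {
    "edge_detector": [],                        # innate / embodiment-derived
    "red_patch":     [],                        # grounded in perception
    "apple":         ["red_patch", "edge_detector"],
    "fruit":         ["apple"],
    "phlogiston":    ["caloric"],               # defined only by another theory term...
    "caloric":       ["phlogiston"],            # ...which is defined by the first
}

GROUNDED_BASE = {"edge_detector", "red_patch"}

def is_grounded(symbol, seen=None):
    """True if some ancestral chain reaches a grounded base symbol."""
    if seen is None:
        seen = set()
    if symbol in GROUNDED_BASE:
        return True
    if symbol in seen:                          # the path "stops short" in a loop
        return False
    seen.add(symbol)
    return any(is_grounded(parent, seen) for parent in ancestry.get(symbol, []))

print(is_grounded("fruit"))        # True  -- its ancestry reaches perception
print(is_grounded("phlogiston"))   # False -- a closed self-referential subsystem
```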

Sloman has hypothesized that theory concept networks don’t need to be grounded in any normal sense of the word. But that doesn’t mean we throw the baby out with the bathwater. As far as I can tell, we should add the theory-tethering mechanism as one method of grounding symbols. Or perhaps it is simply one of the other ways in which information structures can be handled in a mind. I think it is plausible for a mind to generate ungrounded symbols alongside grounded ones; the inherent structure of an ungrounded self-referential database could be useful in certain contexts.

But ungrounded symbols are easy. That’s the default for all existing computing systems, and that’s what a dictionary is. The nature of those dictionary-like systems is at most a subset of the nature of human-like knowledge systems: we end up with humans injecting the meaning into the computer (or dictionary, or whatever). The tricky problem is making systems that are grounded in the same way humans and other animals are. Such systems could have compatible notions of common sense and general (to humans) understanding. They would, in turn, be capable of doing the same kind of knowledge injection or anchoring that humans do with ungrounded systems.

References

[1] A. Sloman, “Symbol Grounding is Not a Serious Problem. Theory Tethering Is,” IEEE AMD Newsletter, April 2010.
[2] A. Sloman, “Some Requirements for Human-like Robots: Why The Recent Over-emphasis on Embodiment has Held up Progress,” in B. Sendhoff et al., Eds., Creating Brain-Like Intelligence, pp. 248-277, Springer-Verlag, 2009.
[3] A. Sloman, “What’s information, for an organism or intelligent machine? How can a machine or organism mean?,” 2011. http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-inf-chap.html


Are Emotional Structures the Foundation of Intelligence?

Posted in artificial intelligence on January 9th, 2013 by Samuel Kenyon

It seems like all human babies go through the exact same intelligence growth program. Like clockwork. A lot of people have assumed that it really is a perfect program which is defined by genetics.

Obviously something happens when a child grows. But surely that consists of minor environmental cues to the genetic program. Or does it?

Consider if the “something happens as a child grows” might in fact be critical. And not just critical, but the major source of information. What exactly is that “nurture” part of nature vs. nurture?

What if the nurturing is in fact the source of all conceptual knowledge, language, sense of self, and sense of reality?


Emotional Developmental Symbol Creation

Posted in artificial intelligence on December 2nd, 2012 by Samuel Kenyon

To create an artificial intelligence system that is similar to humans or other animals, it has to have some way to generate meaning. A potential mechanism of meaning is an emotional system which has built in symbol grounding.
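
As a purely speculative sketch of what built-in grounding via emotion might look like computationally (the valence scale, class names, and API below are my own assumptions for illustration, not a worked-out theory): new symbols are stamped with the innate affective signal that was active when they formed, so their meaning is anchored in the agent’s own valence system rather than supplied by an external labeler.

```python
# Speculative sketch: symbols are created with the innate affective valence that
# was active at formation time, giving them a built-in, agent-internal anchor.

from dataclasses import dataclass, field

@dataclass
class GroundedSymbol:
    name: str
    valence: float                      # innate affect at creation time, in [-1, 1]
    parents: list = field(default_factory=list)

class EmotionalSymbolSystem:
    def __init__(self):
        self.symbols = {}

    def create(self, name, current_valence, parents=()):
        """Register a new symbol grounded in the current emotional state."""
        sym = GroundedSymbol(name, current_valence, list(parents))
        self.symbols[name] = sym
        return sym

system = EmotionalSymbolSystem()
system.create("warmth", current_valence=+0.8)      # formed during positive affect
system.create("sharp_pain", current_valence=-0.9)  # formed during negative affect
print(system.symbols["warmth"].valence)            # 0.8
```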

When I talk about mental symbols in this article, I assume that ideas, concepts, knowledge, representations, etc. are all composed of mental symbols.

Before getting into emotions, let’s take a higher-level look at the developmental levels of children.
