AAAI FSS-13 and Symbol Grounding

Posted in artificial intelligence on November 19th, 2013 by Samuel Kenyon

At the AAAI 2013 Fall Symposia (FSS-13) [1][2], I realized that I was not prepared to explain certain topics quickly to those who specialize in various AI domains and/or don’t delve into philosophy of mind issues. Namely, I am thinking of enactivism and embodied cognition.

my poster

But something even simpler (or so I thought) also ran into communication barriers: The Symbol Grounding Problem. Even those in AI who have a vague knowledge of the issue will often reject it as a real problem. Or maybe Jeff Clune was just testing me. Either way, how can one give an elevator pitch about symbol grounding?

So after thinking about it this weekend, I think the simplest explanation is this:

Symbol grounding is about making meaning intrinsic to an agent as opposed to parasitic meaning provided by an external human researcher or user.

And really, maybe it should not be called a “problem” anymore. It’s only a problem if somebody claims that systems have human-like knowledge when in fact they do not have any intrinsic meaning. Most applications, such as NLP programs and semantic graphs / networks, do not have intrinsic meaning. (I’m willing to grant them a small amount of intrinsic meaning if that meaning depends on the network structure itself.)

Meanwhile, there is in fact grounded knowledge of some sort in research labs. For instance, AI systems in which perceptual invariants are registered as objects are making grounded symbols (e.g. the work presented by Bonny Banerjee). That type of object may not meet some definitions of “symbol,” but it is at least a sub-symbol which could be used to form full mental symbols.
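To make the distinction concrete, here is a minimal, hypothetical sketch (not Banerjee’s system; the names, data, and threshold are invented for illustration) contrasting a label that only means something to a human reader with an internal token bound to a perceptual invariant the agent extracted from its own sensor stream:

```python
import numpy as np

# Parasitic meaning: the label "cup" only means something to the human reading
# it; nothing inside the system ties it to anything the agent has experienced.
semantic_graph = {"cup": ["container", "graspable"]}

def register_invariant(observations, threshold=0.1):
    """Collapse repeated sensor snapshots into a perceptual invariant (prototype).

    Return the prototype only if the snapshots are stable enough, i.e. it is
    structure the agent itself extracted from its own sensory stream.
    """
    prototype = observations.mean(axis=0)
    spread = np.linalg.norm(observations - prototype, axis=1).mean()
    return prototype if spread < threshold else None

# A (minimally) grounded symbol: an internal token bound to the agent's invariant.
snapshots = np.random.normal(loc=[0.9, 0.2, 0.4], scale=0.02, size=(50, 3))
invariant = register_invariant(snapshots)
grounded_symbols = {"obj_0": invariant} if invariant is not None else {}
```

The token obj_0 at least refers to structure the agent built from its own perception, whereas the label “cup” refers to nothing inside the system at all. Whether obj_0 deserves to be called a “symbol” rather than a sub-symbol is, as noted above, where the definitional arguments begin.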

From Randall C. O’Reilly, Thomas E. Hazy, and Seth A. Herd, “The Leabra Cognitive Architecture: How to Play 20 Principles with Nature and Win!”

Randall O’Reilly from the University of Colorado gave a keynote speech about some of his computational cognitive neuroscience, in which there are explicit mappings from one level to the next. Even if his architectures are wrong as biological models, if the lowest layer is in fact the simulation he showed us, then it is symbolically grounded as far as I can tell. The thing that is a “problem” in general in AI is to link the bottom / middle to the top (e.g. natural language).

I think that the quick symbol grounding definition above (in italics) is enough to at least establish a thin bridge between various AI disciplines and skeptics of symbol grounding. Unfortunately, I also learned this weekend that hardly anybody agrees on what a “symbol” is.

Symbols?

Photo taken from the Westin hotel. I just noticed that Gary Marcus snuck into my photo.

Gary Marcus, by some coincidence, ended our symposium with a keynote that successfully convinced many people there that symbolic AI never died, that it is in fact present in many AI systems even if their builders don’t realize it, and that it is necessary in combination with other methods (for instance connectionist machine learning), at the very least for achieving human-like inference. Marcus’s presentation was related to some concepts in his book The Algebraic Mind (which I admit I have not read yet). There’s more to it, like variable binding, that I’m not going to get into here.

As far as I can tell, my concept of mental symbols is very similar to Marcus’s. I thought I was in the traditional camp in that regard. And yet his talk spawned debate on the very definition of “symbol”. Also, I’m starting to wonder if I should be careful about “subsymbolic” vs. “symbolic” structures. Two days earlier, when I had asked a presenter about the symbols in his research, he flat out denied that his object representations based on invariants were “symbols.”

So…what’s the elevator pitch for a definition of mental symbols?


The Code Experience vs. the Math Experience

Posted in culture, philosophy on November 29th, 2012 by Samuel Kenyon

In the book The Mathematical Experience, the chapter on symbols mentions computer programming [1]. But it really doesn’t do justice to programming (aka coding). In fact, it’s one of the lamer parts of an otherwise thought-provoking book. It’s not that it’s dated—a concern since the book was published in 1981—but that the authors only provide the paltry sentence, “Computer science embraces several varieties of mathematical disciplines but has its own symbols,” followed by some random examples of BASIC keywords and operators.

cover of the edition I have

As mentioned in “The Disillusionment of Math,” I’ve always thought of programming as different from mathematics. And I almost always choose the experience of thinking in code over the experience of thinking in equations.

But I suspect others think of these as similar activities pursued by the same mathematically minded people. Likewise, if a programmer tells somebody that they are a software engineer, the keyword “engineer” can prompt the response, “oh, you must do a lot of math.”



Mechanisms of Meaning

Posted in artificial intelligence on October 15th, 2012 by Samuel Kenyon

How do organisms generate meaning during their development? What designs of information structures and processes best explain how animals understand concepts?

In “A Deepness in the Mind: The Symbol Grounding Problem”, I showed a three-layer diagram with semantic connections on the top. I’d like to spend some time discussing the bottom of the top layer.

Basic Meaning Mechanisms

At the moment, there seem to be a few worthwhile avenues of investigation:

  • Emotions
  • Affordances
  • Metaphors and Blends

Each of these points of view involves certain theories and architecture concepts. Soon I will have more blog posts describing these concepts and how we might implement and synthesize them. Marvin Minsky’s books Society of Mind and The Emotion Machine have many ideas about mental agencies and structures that may be useful in this context.
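As a very rough sketch of what an architecture concept for one of these mechanisms might look like (the affordance representation below is hypothetical, invented for this post rather than taken from Minsky or any particular theory), an affordance can be modeled as a link between an action the agent can perform and the perceptual features that action depends on:

```python
from dataclasses import dataclass

# Hypothetical sketch only: an affordance modeled as a relation between an
# action the agent can perform and the perceptual features it depends on.

@dataclass
class Affordance:
    action: str               # e.g. "grasp"
    required_features: set    # features that must be perceived for the action

def available_affordances(perceived_features, repertoire):
    """Return the actions whose required features are currently perceived."""
    return [a.action for a in repertoire
            if a.required_features <= perceived_features]

repertoire = [
    Affordance("grasp", {"small", "rigid"}),
    Affordance("sit_on", {"flat", "stable", "knee_height"}),
]
print(available_affordances({"small", "rigid", "red"}, repertoire))  # ['grasp']
```

The only point of the sketch is that this kind of meaning-bearing structure is relational: it ties the agent’s capabilities to what it currently perceives rather than to a free-floating label.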


A Deepness in the Mind: The Symbol Grounding Problem

Posted in artificial intelligence, philosophy on September 29th, 2012 by Samuel Kenyon

The Symbol Grounding Problem reared its ugly head in my previous post. Some commenters suggested that certain systems are symbol-grounding-problem-free because those systems learn concepts that were not chosen beforehand by the programmers.

However, the fact that a software program learns concepts doesn’t mean it is grounded. It might be, but it might not be.

Here’s a thought experiment example of why that doesn’t float my boat: let’s say we have a computer program with some kind of semantic network or database (or whatever) which was generated by learning during runtime. Now let’s say we have the exact same system, except a human hard-coded the semantic network. As far as grounding goes, did it really matter that one of them auto-generated the network and the other didn’t? In other words, runtime generation doesn’t guarantee grounding.
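Here is a toy version of that thought experiment (the corpus and networks are invented for illustration): one network is “learned” at runtime from text, the other is hard-coded by a human, and the resulting structures are indistinguishable, so grounding cannot come from runtime generation alone.

```python
# Toy version of the thought experiment (corpus and networks invented here):
# one semantic network is "learned" at runtime, the other is hard-coded by a
# human, and the resulting structures are indistinguishable.

def learn_network(corpus):
    """Pretend runtime learning: derive edges from adjacent word pairs."""
    edges = set()
    for sentence in corpus:
        words = sentence.split()
        edges.update(zip(words, words[1:]))
    return edges

learned = learn_network(["bird has wings", "bird can fly"])
hard_coded = {("bird", "has"), ("has", "wings"), ("bird", "can"), ("can", "fly")}

print(learned == hard_coded)  # True: nothing about the structure reveals its origin
```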

Experience and Biology

Now let’s say symbol-grounded systems require learning from experience. A first-order logic representation of that would be:

SymbolGrounding -> LearningFromExperience

Note that this is not a biconditional relationship. But is the implication true? Why might learning from experience matter?
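Spelled out, the claim is the one-way implication above; the converse is exactly what is not being asserted:

```latex
\text{SymbolGrounding} \rightarrow \text{LearningFromExperience},
\qquad
\text{LearningFromExperience} \not\rightarrow \text{SymbolGrounding}
```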

herring gull and chicks

Well, our example systems are biological, and certainly animals learn while they are alive. But that is merely the ontogeny. What about the stuff that animals already know when they are born? And how do they know how to learn the right things? That’s why evolutionary knowledge via phylogeny is also important. It’s not stored in the same way, though. It unfolds in complex ways as the tiny zygote becomes a multicellular animal.

