A Deepness in the Mind: The Symbol Grounding Problem

The Symbol Grounding Problem reared its ugly head in my previous post. Some commenters suggested that certain systems are symbol-grounding-problem-free because those systems learn concepts that were not chosen beforehand by the programmers.

However, the fact that a software program learns concepts doesn’t mean it is grounded. It might be, but it might not be.

Here’s a thought experiment that shows why that doesn’t float my boat: let’s say we have a computer program with some kind of semantic network or database (or whatever) which was generated by learning during runtime. Now let’s say we have the exact same system, except that a human hard-coded the semantic network. As far as grounding goes, does it really matter that one of them auto-generated the network and the other didn’t? In other words, runtime generation doesn’t guarantee grounding.
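As a minimal sketch of that thought experiment (all of the names and relations below are made up for illustration), consider two copies of the same semantic network: one hard-coded by a human, one built up at runtime from a stream of observations. The end products are indistinguishable:

    # One semantic network hard-coded by a human...
    hard_coded_network = {
        ("bird", "has", "wings"),
        ("bird", "can", "fly"),
        ("penguin", "is_a", "bird"),
    }

    # ...and the same kind of network "learned" during runtime from data.
    def learn_network(observations):
        network = set()
        for subject, relation, obj in observations:
            network.add((subject, relation, obj))
        return network

    learned_network = learn_network([
        ("bird", "has", "wings"),
        ("bird", "can", "fly"),
        ("penguin", "is_a", "bird"),
    ])

    # Same structure, different provenance. Nothing about the learned copy
    # makes its tokens any less parasitic on the meanings in our heads.
    assert learned_network == hard_coded_network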

Experience and Biology

Now let’s say symbol-grounded systems require learning from experience. A first-order logic representation of that would be:

SymbolGrounding -> LearningFromExperience

Note that this is a one-way implication, not a biconditional. But is even the one-way version true? Why might learning from experience matter?
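Spelling the hypothesis out with an explicit quantifier (this rendering is mine, not something from the symbol grounding literature):

    \forall x \,\big(\, \mathrm{SymbolGrounding}(x) \rightarrow \mathrm{LearningFromExperience}(x) \,\big)

The converse, \forall x \,(\mathrm{LearningFromExperience}(x) \rightarrow \mathrm{SymbolGrounding}(x)), is not part of the claim; the thought experiment above is precisely a reason to doubt it.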

[Image: herring gull and chicks]

Well, our exemplar grounded systems are biological, and certainly animals learn while they are alive. But that is merely the ontogeny. What about the things animals already know when they are born? And how do they know how to learn the right things? That’s why evolutionary knowledge, arriving via phylogeny, is also important. It’s not stored in the same way, though. It unfolds in complex ways as the tiny zygote becomes a multicellular animal.

“Unfolds” is of course just a rough metaphor. My point is that learning does make sense as a contributor to animal-like grounding, since biology had to do it that way, but one should consider how biological learning actually works before assuming that any software program’s autonomous state changes are similar.

I think it is safe to say that the grounding of symbols in biological organisms is a combination of ontogenetic and phylogenetic learning [1]. To map that back to software systems: the architecture, the program, and the starting knowledge (which may come from previous instances) are just as important as what’s learned during a single run of the program.
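A hedged sketch of that mapping (the class and the knowledge items are hypothetical and deliberately simplistic): an agent instance starts from knowledge it inherits "phylogenetically" (the architecture, the program, previous instances) and adds to it "ontogenetically" during a single run.

    class Agent:
        def __init__(self, inherited_knowledge):
            # Phylogenetic contribution: starting knowledge carried over
            # from earlier instances/generations, plus the fixed program.
            self.knowledge = dict(inherited_knowledge)

        def learn(self, cue, response):
            # Ontogenetic contribution: knowledge acquired during this run.
            self.knowledge[cue] = response

        def export_knowledge(self):
            # What this instance passes on to the next "generation".
            return dict(self.knowledge)

    # Inherited, phylogenetically supplied behavior (cf. herring gull chicks
    # pecking at the red spot on the parent's beak without being taught).
    parent = Agent(inherited_knowledge={"red_spot_on_beak": "peck"})

    # Knowledge acquired during this instance's "lifetime"...
    parent.learn("this_particular_gull", "my_parent")

    # ...can become part of what the next software instance starts with.
    child = Agent(inherited_knowledge=parent.export_knowledge())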

How Deep?

To be fair, in my last post I did not say how grounded a grounded system needs to be. It would appear that the literature on symbol grounding generally means grounded in the real world. The “real world” may be up for debate, but one way to put it is: grounded in the world that humans share.

[Image: illustration of Jack and the Beanstalk]

This assumes that all of us are in fact living in the same, objective world. I’m assuming that only for the purposes of defining symbol grounding. There is, however, some potential weirdness regarding interfaces with reality that we will skip for today.

I threw together this simple diagram to illustrate the generic layers involved with real-world grounding:

[Diagram: An Abstract Layering of Symbol Grounded Systems]

Bass Ackwards

I’m doing things backwards and giving you a definition of symbol grounding here near the end of the essay.

I would say that a “symbol” could be a lot of things, even a complicated structure (relevant: my post What are Symbols in AI?). But let’s look at Stevan Harnad’s definition of what a symbolic mental system requires [2]:

1. a set of arbitrary “physical tokens” scratches on paper, holes on a tape, events in a digital computer, etc. that are
2. manipulated on the basis of “explicit rules” that are
3. likewise physical tokens and strings of tokens. The rule-governed symbol-token manipulation is based
4. purely on the shape of the symbol tokens (not their “meaning”), i.e., it is purely syntactic, and consists of
5. “rulefully combining” and recombining symbol tokens. There are
6. primitive atomic symbol tokens and
7. composite symbol-token strings. The entire system and all its parts — the atomic tokens, the composite tokens, the syntactic manipulations both actual and possible and the rules — are all
8. “semantically interpretable:” The syntax can be systematically assigned a meaning (e.g., as standing for objects, as describing states of affairs).
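To make the “purely syntactic” part concrete, here is a toy sketch of such a symbol system (mine, not Harnad’s): explicit rewrite rules fire purely on the shape of the tokens, and any meaning the strings have is assigned by us from the outside.

    # A toy "symbol system" in the sense of points 1-8: arbitrary tokens,
    # explicit rules, and manipulation based purely on token shape.
    # Pattern elements starting with "?" are variables.
    RULES = [
        (("?x", "ISA", "MAN"), ("?x", "ISA", "MORTAL")),
    ]

    def apply_rules(fact):
        """Derive new token strings by shape-matching alone."""
        derived = []
        for pattern, result in RULES:
            if len(pattern) != len(fact):
                continue
            bindings = {}
            if all(
                p == t or (p.startswith("?") and bindings.setdefault(p, t) == t)
                for p, t in zip(pattern, fact)
            ):
                derived.append(tuple(bindings.get(r, r) for r in result))
        return derived

    print(apply_rules(("SOCRATES", "ISA", "MAN")))
    # Prints [('SOCRATES', 'ISA', 'MORTAL')]. We read that as "Socrates is
    # mortal", but to the system it is just another string of tokens.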

The problem is, as Harnad’s abstract puts it:

How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads?

Harnad proposes a system in which all symbols are ultimately based on a set of elementary symbols. These elementary symbols are generated from, and connected with, non-symbolic representations, and the non-symbolic representations are caused directly by analog sensing of the real world.
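Here is one way to picture that, as a hedged sketch (my own construction, loosely inspired by Harnad’s layering; the sensor, the threshold, and the token names are all invented): an elementary symbol keeps a link to the non-symbolic sensor data that produced it, and composite symbols are built from elementary ones.

    def sense():
        # Stand-in for an analog sensor: a raw list of sample values.
        return [0.10, 0.90, 0.85, 0.20, 0.05]

    def categorize(samples, threshold=0.8):
        # Non-symbolic -> elementary symbol: a crude invariance detector.
        return "LOUD" if max(samples) > threshold else "QUIET"

    samples = sense()

    # Elementary symbol, still tied to the sensor data that produced it.
    elementary = {"token": categorize(samples), "grounded_in": samples}

    # Composite symbols are then built out of elementary (grounded) ones.
    composite = ("ENVIRONMENT", "IS", elementary["token"])
    print(composite)   # ('ENVIRONMENT', 'IS', 'LOUD')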

Harnad’s low-level interface, which he abstractly constructs from a notion of connectionism, reminded me of something ubiquitous (whether you realize it or not) and realistic: transducers and ADCs.

The transducers I have in mind are devices which convert continuous analog movement into a digital numeric representation, e.g. you move a joystick forward and the computer reads that as an integer and adjusts the speed of your video game character accordingly. ADCs (Analog-to-Digital Converters) sample the real world at some frequency and force each measurement into a limited number of bits; recording music is a familiar example.

[Image: analog-to-digital example]

A digital-to-analog conversion does the opposite, e.g. playing an MP3 results in real-world sound waves hitting your ear.
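As a rough sketch of both directions (the sample rate, bit depth, and signal are arbitrary choices for illustration), here is sampling and quantization going one way and a crude reconstruction going the other:

    import math

    SAMPLE_RATE = 8000          # samples per second
    BITS = 4                    # 4 bits -> 16 quantization levels
    LEVELS = 2 ** BITS

    def analog_signal(t):
        # Stand-in for the real world: a 440 Hz tone in the range [-1, 1].
        return math.sin(2 * math.pi * 440 * t)

    def adc(duration=0.001):
        # ADC: sample at a fixed rate and force each sample into BITS bits.
        codes = []
        for i in range(int(SAMPLE_RATE * duration)):
            v = analog_signal(i / SAMPLE_RATE)
            codes.append(int(round((v + 1) / 2 * (LEVELS - 1))))
        return codes

    def dac(codes):
        # DAC: map the integer codes back to approximate signal values.
        return [code / (LEVELS - 1) * 2 - 1 for code in codes]

    digital = adc()
    print(digital)        # eight 4-bit integer codes
    print(dac(digital))   # a coarse reconstruction of the original samples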

Practical Applications

If the Symbol Grounding Problem defines the ground as the real world, what about software agents and other non-situated, non-embodied entities? For practical purposes, perhaps we need to acknowledge different kinds of grounding. Of course, grounding in some arbitrary environment will not necessarily result in a system that humans can interface with.
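As a hedged sketch of what one of those “different kinds of grounding” might look like (everything here is hypothetical), a non-embodied agent could tie its tokens to the state of a virtual environment instead of to analog sensing of the real world; whether that deserves to be called grounding is exactly the open question.

    class GridWorld:
        # A trivially simple virtual environment standing in for "the world".
        def __init__(self):
            self.agent_pos = (0, 0)
            self.food_pos = (0, 1)

        def observe(self):
            # The agent's only "sensor": raw simulated state, nothing analog.
            return {"agent": self.agent_pos, "food": self.food_pos}

    def ground_symbols(observation):
        # Tie tokens to features of the (virtual) world the agent inhabits.
        if observation["agent"] == observation["food"]:
            return {"AT_FOOD"}
        return {"FOOD_ELSEWHERE"}

    world = GridWorld()
    print(ground_symbols(world.observe()))   # {'FOOD_ELSEWHERE'}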


References

[1] “Animal behaviour,” Encyclopædia Britannica, 2012. Available: http://www.britannica.com/EBchecked/topic/25597/animal-behaviour/282494/Ontogeny

[2] S. Harnad, “The Symbol Grounding Problem,” Physica D, vol. 42, pp. 335-346, 1990. Available: http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad90.sgproblem.html

Image Credits:

  1. http://my.opera.com/Words/blog/2008/05/27/baby-herring-gulls
  2. http://www.surlalunefairytales.com/illustrations/jackbeanstalk/hassalljack6.html
  3. Samuel H. Kenyon
  4. http://www.dilettantesdictionary.org/index.php?search=1&searchtxt=analog-to-digital

 
