Symbol Grounding and Symbol Tethering

Philosopher Aaron Sloman claims that symbol grounding is impossible. I say it is possible, indeed necessary, for strong AI. Yet my own approach may be compatible with Sloman’s.

Sloman equates “symbol grounding” with concept empiricism, which is why he considers it impossible. But I see no need to equate all symbol grounding with concept empiricism. And what Sloman calls “symbol tethering” may be what I call “symbol grounding,” or at least one type of symbol grounding.

Firstly, as Sloman says about concept empiricism [1]:

Kant refuted this in 1781, roughly by arguing that experience without prior concepts (e.g. of space, time, ordering, causation) is impossible.

Well, that’s fine. My interpretation of symbol grounding never carried the baggage of bootstrapping everything from experience. Innate concepts, triggered by machinations in phylogenetic space, can also contribute to grounding.

Sloman also says that concept empiricism was [2]:

finally buried by 20th century philosophers of science considering the role of theoretical terms in science (e.g. “electron”, “gene”, “valence”, etc.) that are primarily defined by their roles in explanatory theories.

Here is the only inkling of why this would take down symbol grounding: abstract concepts might actually be defined in terms of each other. As Sloman explains [3]:

Because a concept can be (partially) defined implicitly by its role in a powerful theory, and therefore some symbols expressing such concepts get much of their meaning from their structural relations with other symbols in the theory (including relations of derivability between formulae including those symbols) it follows that not all meaning has to come from experience of instances, as implied by the theory of concept empiricism

Theory concept tethering?

On the other hand, maybe theory concepts are grounded, but in a very tenuous way. Here is a metaphor, albeit not a great one: imagine a family of hot air balloons with links between them, floating as a group. They aren’t quite free, though, because a single rope ties one of them, and indirectly all of them, to the ground. Sloman seems to be saying something like that, with the rope being how well a theory concept models something, hence the term “symbol tethering.” Whatever the case, I don’t see why all symbols have to be like theory concepts.

If the goal is to understand how human minds create and use knowledge, then one is led down the road of grounding. Otherwise you’re playing Wacky Mad Libs or tainting an experiment with an observer’s human knowledge. Imagine if you could pause a human (or some other animal) and have access to the layer or point-of-view of internal mental symbols. You might then ask, what is the genealogy of a particular symbol—what symbols are its ancestors? The path to embodiment-derived symbols or innate symbols may be long and treacherous, yet there it is. And if the path stops short, then you have chosen a symbol which is somehow usable in a biological mind, yet is completely encapsulated in a self-referential subsystem.
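
To make that genealogy question a bit more concrete, here is a toy sketch in Python. Nothing in it is Sloman’s proposal or a real cognitive model; the symbol names, links, and the “grounded root” flag are invented purely for illustration. Each symbol records its ancestor symbols, and tracing ancestry is just a walk backwards through the links to see whether the chain ever reaches an embodiment-derived or innate root.

    # Toy sketch: tracing a symbol's ancestry back toward grounded roots.
    # All symbols and links are invented purely for illustration.

    symbols = {
        # name: (ancestor symbols, is this an embodiment-derived/innate root?)
        "edge-detector": ([], True),
        "surface":       (["edge-detector"], False),
        "object":        (["surface"], False),
        "electron":      (["charge", "particle"], False),   # theory concepts
        "charge":        (["electron", "particle"], False),  # defined in terms
        "particle":      (["electron", "charge"], False),    # of each other
    }

    def grounded_ancestors(symbol, seen=None):
        """Return the grounded roots reachable from `symbol`, if any."""
        if seen is None:
            seen = set()
        if symbol in seen:
            return set()
        seen.add(symbol)
        ancestors, is_root = symbols[symbol]
        roots = {symbol} if is_root else set()
        for ancestor in ancestors:
            roots |= grounded_ancestors(ancestor, seen)
        return roots

    print(grounded_ancestors("object"))    # {'edge-detector'}: the path reaches the ground
    print(grounded_ancestors("electron"))  # set(): an encapsulated, self-referential cluster

The “electron” cluster is exactly the case where the path stops short; whether it counts as tethered then depends on something outside the links themselves, such as how well the theory models the world.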

Sloman has hypothesized that theory concept networks don’t need to be grounded in any normal sense of the word. But that doesn’t mean we should throw the baby out with the bathwater. As far as I can tell, we should add the theory tethering mechanism in as one method of grounding symbols. Or perhaps it is simply another way in which information structures can be handled in a mind. It seems plausible for a mind to generate ungrounded symbols alongside grounded ones, and the inherent structure of an ungrounded, self-referential database could be useful in certain contexts. But ungrounded symbols are easy. They are the default for all existing computing systems, and they are what a dictionary is. The nature of those dictionary-like systems is at most a subset of the nature of human-like knowledge systems; we end up with humans injecting the meaning into the computer (or dictionary, or whatever). The tricky problem is making systems that are grounded in the same way humans and other animals are. Such systems could have compatible notions of common sense and general (to humans) understanding, and they would, in turn, be capable of doing the same kind of knowledge injection or anchoring that humans do with ungrounded systems.
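
The dictionary point can be made the same way; again, this is a toy illustration of my own, with invented entries, not anything from Sloman. A machine dictionary is closed under its own vocabulary: every word used in a definition is just another entry, so the system never points outside itself, and whatever meaning it has is injected by the human reader.

    # Toy sketch: a dictionary as a closed, ungrounded symbol system.
    # Entries are invented; each definition is reduced to the entry words it uses.

    dictionary = {
        "water":  ["liquid", "rain"],   # "water: the liquid that falls as rain"
        "liquid": ["water"],            # "liquid: a substance such as water"
        "rain":   ["water", "cloud"],   # "rain: water falling from a cloud"
        "cloud":  ["water"],            # "cloud: a visible mass of water"
    }

    # Closure check: every word used in a definition is itself another entry.
    used = {word for definition in dictionary.values() for word in definition}
    print(used.issubset(dictionary))    # True: nothing points outside the system

A grounded system would differ precisely here: at least some entries would bottom out in sensing, action, or innate structure rather than in yet more entries.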

References

[1] A. Sloman, “Symbol Grounding is Not a Serious Problem. Theory Tethering Is,” IEEE AMD Newsletter, April 2010.
[2] A. Sloman, “Some Requirements for Human-like Robots: Why The Recent Over-emphasis on Embodiment has Held up Progress,” in B. Sendhoff et al., Eds., Creating Brain-Like Intelligence, pp. 248-277, Springer-Verlag, 2009.
[3] A. Sloman, “What’s information, for an organism or intelligent machine? How can a machine or organism mean?,” 2011. http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-inf-chap.html
