AAAI FSS-13 and Symbol Grounding

Posted in artificial intelligence on November 19th, 2013 by Samuel Kenyon

At the AAAI 2013 Fall Symposia (FSS-13), I realized that I was not prepared to explain certain topics quickly to those who specialize in various AI domains and/or don’t delve into philosophy-of-mind issues, namely enactivism and embodied cognition.

my poster

But something even easier (or so I thought) also threw up communication barriers: the Symbol Grounding Problem. Even those in AI who have a vague knowledge of the issue will often reject it as a real problem. Or maybe Jeff Clune was just testing me. Either way, how can one give an elevator pitch about symbol grounding?

So after thinking about it this weekend, I think the simplest explanation is this:

Symbol grounding is about making meaning intrinsic to an agent as opposed to parasitic meaning provided by an external human researcher or user.

And really, maybe it should not be called a “problem” anymore. It’s only a problem if somebody claims that a system has human-like knowledge when in fact it has no intrinsic meaning. Most applications, such as NLP programs and semantic graphs / networks, do not have intrinsic meaning. (I’m willing to grant them a small amount of intrinsic meaning if that meaning depends on the network structure itself.)
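To make the contrast concrete, here is a toy sketch (entirely my own, and hypothetical): a semantic network whose every “meaning” bottoms out in more symbols, so whatever meaning it has is parasitic on the human reading the labels:

    # A toy ungrounded semantic network: every "definition" is just more symbols.
    network = {
        "zebra": ["striped", "horse-like", "animal"],
        "striped": ["pattern"],
        "horse-like": ["horse"],
        "horse": ["animal"],
        "animal": ["living-thing"],
    }
    # Chasing definitions never escapes the symbol system -- nothing here
    # connects "zebra" to anything the system has perceived or done.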

Meanwhile, there is in fact grounded knowledge of some sort in research labs. For instance, AI systems in which perceptual invariants are registered as objects are making grounded symbols (e.g. the work presented by Bonny Banerjee). That type of object may not meet some definitions of “symbol,” but it is at least a sub-symbol which could be used to form full mental symbols.
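In the same toy spirit (again my own sketch, using numpy, and not the actual system presented), grounding can start with letting symbols point at perceptual invariants the agent extracts for itself:

    import numpy as np

    class GroundedSymbolTable:
        """Registers a new symbol whenever a novel perceptual invariant appears."""

        def __init__(self, match_threshold=0.9):
            self.prototypes = []  # invariants extracted from the agent's own percepts
            self.match_threshold = match_threshold

        def observe(self, percept):
            percept = percept / (np.linalg.norm(percept) + 1e-9)
            for i, proto in enumerate(self.prototypes):
                if float(percept @ proto) > self.match_threshold:
                    return i  # symbol i's meaning *is* this invariant -- intrinsic
            self.prototypes.append(percept)
            return len(self.prototypes) - 1

    table = GroundedSymbolTable()
    table.observe(np.array([1.0, 0.1, 0.0]))   # novel invariant -> symbol 0
    table.observe(np.array([0.9, 0.12, 0.0]))  # same invariant  -> symbol 0 again

The point of the sketch: symbol 0’s referent is something the agent extracted from its own sensory stream, so its meaning does not depend on an outside interpreter.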

From Randall C. O’Reilly, Thomas E. Hazy, and Seth A. Herd, “The Leabra Cognitive Architecture: How to Play 20 Principles with Nature and Win!”

Randall O’Reilly from the University of Colorado gave a keynote speech about some of his computational cognitive neuroscience, in which there are explicit mappings from one level to the next. Even if his architectures are wrong as biological models, if the lowest layer is in fact the simulation he showed us, then the system is symbolically grounded as far as I can tell. The remaining “problem,” in AI generally, is to link the bottom and middle levels to the top (e.g. natural language).

I think that the quick symbol grounding definition above (in italics) is enough to at least establish a thin bridge between various AI disciplines and skeptics of symbol grounding. Unfortunately, I also learned this weekend that hardly anybody agrees on what a “symbol” is.


Photo taken from the Westin hotel. I just noticed that Gary Marcus snuck into my photo.

By some coincidence, Gary Marcus ended our symposium with a keynote that convinced many people there that symbolic AI never died: it is in fact present in many AI systems even if their creators don’t realize it, and it is necessary, in combination with other methods (for instance connectionist machine learning), at the very least for achieving human-like inference. Marcus’s presentation was related to some concepts in his book The Algebraic Mind (which I admit I have not read yet). There’s more to it, like variable binding, that I’m not going to get into here.

As far as I can tell, my concept of mental symbols is very similar to Marcus’s. I thought I was in the traditional camp in that regard. And yet his talk spawned debate on the very definition of “symbol”. Also, I’m starting to wonder if I should be careful about “subsymbolic” vs. “symbolic” structures. Two days earlier, when I had asked a presenter about the symbols in his research, he flat out denied that his object representations based on invariants were “symbols.”

So…what’s the elevator pitch for a definition of mental symbols?


Artificial Intelligence is a Design Problem

Posted in artificial intelligence on September 2nd, 2013 by Samuel Kenyon

Take a look at this M.C. Escher lithograph, “Ascending and Descending”.

"Ascending and Descending" by M.C. Escher

“Ascending and Descending” by M.C. Escher

Is this art? Is it graphic design? Is it mathematical visualization? It is all of those things. One might even say that it’s also an architectural plan given that it has been used to implement physical 3-dimensional structures which feature the same paradox when viewed from a particular angle.

A physical implementation of M.C. Escher’s “Ascending and Descending”

Design and Specialization

Design, stated as a single word, typically means design of form, as in industrial design, graphic design, and interior design. It would come as no surprise to anyone to see the term used for any other form-creation discipline, such as architecture or software interfaces. Usually, if somebody means production within their own specialization, they use a modifier, e.g. “software design.”

I do not often encounter software engineers who style themselves “designers.” When I hang out with people in the various disciplines related to user experience, calling oneself a “designer” is perfectly fine—there is an atmosphere of design of form. Some of them are also “developers”, i.e. programmers of web or mobile apps. It’s common for the younger designers to speak of a wish to learn to code. Perhaps the design specialists are a bit more open to cross-discipline experience. To be fair, the creative point of view seems to be gaining favor—there are now coding schools which focus on rapidly teaching the creative aspect of coding to people who have no coding experience and possibly no mathematical background.

Specialized design fields presumably have a lot of overlap if one were to examine their general aspects, their notions of abstraction, and how abstractions are used; yet software engineers typically do not think of themselves as “designers.” And they aren’t expected or encouraged to. I have experienced situations in which I designed the interface of an application using primarily interaction design practices, and also developed the software using typical software practices, and hardly anyone could comprehend that I had done both of those things with any degree of competence. Professional designers in the realm of human-computer interfaces and user experience were brought in and were genuinely surprised that my product had already been designed—they had expected a typical engineer’s interface design. Engineers have a very bad reputation in industry for making horrible human-computer interfaces. But as I said, they aren’t expected to do better. Or, worse, in particular cases no expectations for design existed at all, because the people in charge had no concept of design at all.

What I’m trying to get across is my observation that design of form is not integrated with engineering and computer science disciplines, at least not to the degree that a single person is expected to be competent in both. Entire corporate organizations that were traditionally hyper-engineering-focused have had rough times trying to comprehend what interaction design is and why it is important for making usable products that customers want to buy.

It’s easy to point to some mega-popular companies that got the balance right from the start, at least organizationally—not necessarily for each individual in the company—such as Google. Google has a reputation as a place for smart programmers to go hack all day to make information free for all humankind. But really it became big, and has stayed big, in part because of the integration of design into its human-computer interfaces. If you don’t think of Google as a design company, it’s because you think design means stylish, trendy, “new”, etc. You might expect to be stopped in your tracks by the mere sight of some artistic designed form and say: wow, look at that amazing design. But the truth is that invisible design is an apex most never achieve; even those who try find it difficult. Simple is hard. The phrase “less is more” may sound too trite to be true, and yet it is generally a good target. If you notice the interface and find yourself marveling at it, it is probably getting in the way of actually accomplishing your goal.

It should come as no surprise that most AI implementations, and many of the theories, are now and have always been generated by persons living in the realm of computer science and/or engineering.

Why is AI a Design Problem?

One might ask, first, is AI a problem at all? Some of the early AI papers referred to “the artificial intelligence problem”—singular—such as the 1955 proposal for the Dartmouth Summer Research Project on AI [1] and a report by one of those proposers, Minsky, written the following year at Lincoln Laboratory [2]. But even in those papers, the authors had already started viewing the problem as a collection of problems: each intellectual ability that humans have should be mechanizable, and therefore computerizable, and the problem for each ability was to figure out what that mechanical description is. Still, be it one problem or many, AI was described in terms of problems.

So, on the premise that AI is indeed a problem, why would I say it’s a design problem? And why do AI books rarely, if ever, even mention the word “design”?

AI is a design problem because there is no mechanical reduction that will automatically generate a solution, and there is no single best solution. To be clear, that doesn’t mean intelligence is irreducible. It means that the creation of an intelligent artifact necessarily involves the artifact (which is a form) and an environment (the context), and that creating an intelligent artifact in that system has no single, directly available answer.

Form and Context

Although AI is often defined and practiced as hundreds of narrow specialist sub-disciplines, as one old AI textbook put it [3]:

The ultimate goal of AI research (which we are very far from achieving) is to build a person, or, more humbly, an animal.

(Of course humans are animals, so by pretending they aren’t, right off the bat there’s an arbitrary wall built which prevents cross-fertilization of concepts between disciplines.)

A more recent popular undergraduate AI textbook [4] abandons all hope of human-like AI:

Human behavior, on the other hand, is well-adapted for one specific environment and is the product, in part, of a complicated and largely unknown evolutionary process that still is far from producing perfection. This book will therefore concentrate on general principles of rational agents and on components for constructing them.

So we’re starting to see how a major portion of AI research has decided that, since humans are too difficult to understand and are imperfect, the focus will be on ideal models that work in ideal contexts. The derision of real-world environments as “one specific environment” is interesting—if it were simply one specific environment, wouldn’t that make human-type AI easier to figure out? But it is not an easy-to-define environment, of course. It is specific to areas of Earth and involves myriad dynamic entities interacting in different ways. And, as the quote says, it is not perfect.

But design disciplines routinely tackle exactly those kinds of problems: a problem is defined, perhaps very vaguely at first, which involves a complex system operating in a real-world environment. Even after coming up with a limited set of requirements, the number of interactions makes creating a form that maintains equilibrium very hard. There are too many factors for an individual to comprehend all at once without the organizing processes and methods that good designers use. A bad designer has a very small chance of success given the complexity of any such problem. Yet somehow some designers make artifacts that work. Both the successes and the failures of design matter for making intelligent systems. I’m not going to go into design methods in this blog post (I will talk more about design in future posts), but I do have to say something about form and context.

It is an old notion that design problems attempt to make a good fit between a form and its context [5]. Christopher Alexander (coming from architecture of buildings and villages) described form-context systems as “ensembles,” of which there are a wide variety:

The biological ensemble made up of a natural organism and its physical environment is the most familiar: in this case we are used to describing the fit between the two as well-adaptedness.

…The ensemble may be a musical composition—musical phrases have to fit their contexts too…

…An object like a kettle has to fit the context of its use, and the technical context of its production cycle.

Form is not naturally isolated. It must be reintroduced into the wild of the system at hand, be it a civilized urban human context or, literally, a wild African savanna. Form is relevant because of its interface with context. They are the yin and yang of creating and modifying ensembles. And really, form is merely a slice of form-context; the boundary can be shifted arbitrarily. Alexander suggests that a designer may actually have to consider several different context-form divisions simultaneously.

And this ensemble division is another important aspect of design that goes right to the heart of artificial intelligence and even cognitive science as a whole—is the form of intelligence the brain, or should the division line be moved to include the whole nervous system, or the whole body, or perhaps the whole local system that was previously defined as “environment”? Assuming one without consideration for the others (or not admitting the assumption at all) is a very limiting way to solve problems. And Alexander’s suggestion of layering many divisions might be very useful for future AI research.

Someone might argue that using design methods to create the form of an artificial mind is unnecessary because AI researchers should instead be trying to implement a “seed” which grows into the form automatically. However, that involves defining a context in which a new form, the seed, turns into the target form over time. The form is still being fit to the ensemble. Indeed, we may need to solve even more form-context design problems, such as the developmental mechanisms. One could imagine designing a form and then afterward working in reverse to figure out a compressed form which can grow into the full form with the assistance of environmental interactions. Still, design has not been made irrelevant.


In case it wasn’t clear by now, the form of an artificial organism includes its informational aspect. Its mind is a form.

Creating mental forms is a design problem because there is no single perfect solution. One can solve a design problem by creating a form that sets to false all the binary “misfits” (states in the ensemble where form and context do not fit). This satisfies the requirements only at that level, not at some optimal level. It is not the “best possible” way [5]. There is no best possible way; if there was, it would not be a design problem. Artificial minds do not have a best possible way either; they merely work in a context or they don’t. You could synthesize many different minds and compare them in a context and say “this one is the best”—but that will have to be a very small context or be a rating on only one variable.
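To make that concrete, here is a toy sketch (entirely my own; the misfit variables are made up): solving the design problem means driving every binary misfit to false, not maximizing a score, and more than one form can pass:

    from itertools import product

    def misfits(form):
        """Binary misfit variables for one hypothetical form-context ensemble."""
        return {
            "overheats":    form["power_w"] > 40,
            "too_heavy":    form["mass_kg"] > 2.0,
            "blocks_light": form["height_cm"] > 30,
        }

    def fits(form):
        # A form "fits" when every misfit is False; there is no score to optimize.
        return not any(misfits(form).values())

    candidates = [
        {"power_w": p, "mass_kg": m, "height_cm": h}
        for p, m, h in product((20, 50), (1.0, 1.5), (20, 35))
    ]
    good_forms = [f for f in candidates if fits(f)]
    print(len(good_forms), "acceptable forms out of", len(candidates))  # 2 out of 8

Both surviving forms fit the ensemble equally well; ranking them would require shrinking the context down to a single variable.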


[1] J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” 1955. [Online]. [Accessed: 03-Sep-2013].
[2] M. Minsky, “Heuristic Aspects of the Artificial Intelligence Problem,” MIT Lincoln Laboratory Report 34-55, 1956.
[3] E. Charniak and D. V. McDermott, Introduction to artificial intelligence. Reading, Mass.: Addison-Wesley, 1985.
[4] S. J. Russell and P. Norvig, Artificial intelligence: a modern approach. Upper Saddle River, N.J.: Prentice Hall/Pearson Education, 2003.
[5] C. Alexander, Notes on the synthesis of form. Cambridge, MA: Harvard University Press, 1971.

Image Credits

  1. Print by Van Dooren N.V. of Maurits C. Escher, Klimmen en dalen (Ascending and Descending), 1960.
  2. “Ascending and Descending” in LEGO by Andrew Lipson

The Need for Emotional Experience Does Not Prevent Conscious Artificial Intelligence

Posted in artificial intelligence on January 18th, 2013 by Samuel Kenyon

I have mentioned the book The First Idea by Greenspan and Shanker many times recently. Lest anybody assume I am a fanboy of that tome, I wanted to argue with a ridiculous statement that the authors make regarding consciousness and artificial intelligence.

Greenspan and Shanker make it quite clear that they don’t think artificial intelligence can have consciousness:

What is the necessary foundation for consciousness? Can computers be programmed to have it or any types of truly reflective intelligence? The answer is NO! Consciousness depends on affective experience (i.e. the experience of one’s own emotional patterns). True affects and their near infinite variations can only arise from living biological systems and the developmental processes that we have been discussing.

Let’s look at that logically. The first part of their argument is Consciousness (C) depends on Affective experience (A):
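Spelling out the whole quoted argument (my reconstruction, with B standing for “arises from a living biological system”):

$$C \Rightarrow A, \qquad A \Rightarrow B, \qquad \therefore\; C \Rightarrow B$$

The inference from the two premises to “consciousness requires biology” is valid, so any rebuttal has to attack one of the premises.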



Are Emotional Structures the Foundation of Intelligence?

Posted in artificial intelligence on January 9th, 2013 by Samuel Kenyon

It seems like all human babies go through the exact same intelligence growth program. Like clockwork. A lot of people have assumed that it really is a perfect program which is defined by genetics.

Obviously something happens when a child grows. But surely that consists of minor environmental cues to the genetic program. Or does it?

Consider if the “something happens as a child grows” might in fact be critical. And not just critical, but the major source of information. What exactly is that “nurture” part of nature vs. nurture?

What if the nurturing is in fact the source of all conceptual knowledge, language, sense of self, and sense of reality?

