Heterarchies and Society of Mind’s Origins

Posted in artificial intelligence on February 4th, 2014 by Samuel Kenyon

Ever wonder how Society of Mind came about? Of course you do.

One of the key ideas of Society of Mind [1] is that at some range of abstraction levels, the brain’s software is a bunch of asynchronous agents. Agents are simple—but a properly organized society of them results in what we call “mind.”
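To make that concrete, here is a minimal toy sketch of such a society (my own illustration, not anything from the book): each agent is a tiny coroutine that reacts only to the few message types it understands and posts new messages for the others. The Agent/Bus names and the message scheme are invented for this example.

```python
import asyncio

class Agent:
    """A simple agent: it only knows how to react to a few message types."""
    def __init__(self, name, triggers, react):
        self.name = name
        self.triggers = triggers  # message types this agent cares about
        self.react = react        # msg -> list of new messages to post

    async def run(self, inbox, bus):
        while True:
            msg = await inbox.get()
            print(f"{self.name} reacts to {msg['type']}")
            for out in self.react(msg):
                await bus.post(out)

class Bus:
    """Delivers each posted message to every agent subscribed to its type."""
    def __init__(self):
        self.inboxes = {}  # message type -> list of subscriber queues

    def subscribe(self, agent):
        inbox = asyncio.Queue()
        for t in agent.triggers:
            self.inboxes.setdefault(t, []).append(inbox)
        return inbox

    async def post(self, msg):
        for inbox in self.inboxes.get(msg["type"], []):
            await inbox.put(msg)

async def main():
    bus = Bus()
    # Two dumb agents; neither one knows the overall goal.
    find_food = Agent("find-food", {"hunger"},
                      lambda m: [{"type": "see-food", "where": "table"}])
    grasp = Agent("grasp", {"see-food"},
                  lambda m: [{"type": "grasping", "what": m["where"]}])
    tasks = [asyncio.create_task(a.run(bus.subscribe(a), bus))
             for a in (find_food, grasp)]
    await bus.post({"type": "hunger"})  # kick the society into motion
    await asyncio.sleep(0.1)            # let messages propagate asynchronously
    for t in tasks:
        t.cancel()

asyncio.run(main())
```

No single agent here is intelligent; whatever "behavior" shows up lives in the pattern of messages between them, which is the flavor of the claim rather than an implementation of it.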

[Image: agents]

The book Society of Mind includes many sub-theories of how agents might work, structures for connecting them together, memory, etc. Although Minsky mentions some of the development points that led to the book, he makes no explicit references to old papers. The book is copyrighted “1985, 1986.” Rewind back to 1979, “long” before I was born. In the book Artificial Intelligence: An MIT Perspective [2], there is a chapter by Minsky called “The Society Theory of Thinking.” In a note, Minsky summarizes it as:

Papert and I try to combine methods from developmental, dynamic, and cognitive psychological theories with ideas from Artificial Intelligence and computational theories. Freud and Piaget play important roles.

Ok, that shouldn’t be a surprise if you read the later book. But what about heterarchies? In 1971 Patrick Winston described heterarchical organization as [3]:

An interacting community of processes, some narrow experts, others broad generalists, and still others in the role of critics.

[Image: tangled]

“Heterarchy” is a term that many attribute to Warren McCulloch in 1945, based on his neural research. Although it may have been abandoned in AI, the concept had success in anthropology (according to the intertubes). It is important to note that a heterarchy can be viewed as a parent class of hierarchies, and a heterarchy can contain hierarchies.
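One way to picture that structural claim in code (my own gloss, not McCulloch's formalism): a hierarchy is a tree, while a heterarchy is an arbitrary directed graph of processes, so a tree can always be embedded in it as a special case.

```python
# Toy illustration of "a heterarchy can contain hierarchies":
# a hierarchy is a tree of modules; a heterarchy is an arbitrary directed
# graph, so every tree is also a (degenerate) heterarchy, but not vice versa.

hierarchy = {                 # strict chain of command: each node has one parent
    "planner": ["vision", "motor"],
    "vision": [],
    "motor": [],
}

heterarchy = {                # peers consult each other, control flows both ways
    "vision": ["motor", "critic"],
    "motor": ["vision"],
    "critic": ["vision", "motor"],
    "planner": ["vision", "motor"],   # the old hierarchy is embedded inside
}

def is_tree(graph, root):
    """True if every node is reached exactly once from the root (i.e., a hierarchy)."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node in seen:
            return False      # a revisit means a shared child or a cycle
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen == set(graph)

print(is_tree(hierarchy, "planner"))   # True  -> a hierarchy
print(is_tree(heterarchy, "planner"))  # False -> merely a heterarchy
```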

In 1973 the student Eugene Freuder, who later became well known for constraint-based reasoning, reported on his “active knowledge” approach to vision, a thesis project called SEER [4]. In one of the funniest papers I’ve read, Freuder warns us that:

this paper will probably vacillate between cryptic and incoherent.

Nevertheless, it is healthy to write things down periodically. Good luck.

And later on that:

SEER never demands that much be done, it just makes a lot of helpful suggestion. A good boss.

This basic structure is not too hairy, I hope.

If you like hair, however, there are enough hooks here to open up a wig salon.

He refers to earlier heterarchy uses in the AI Lab, but says that they are isolated hacks, whereas his project is more properly a system designed to be a heterarchy, one which allows any number of hacks to be added during development. And this supposedly allows the system to make the “best interactive use of the disparate knowledge it has.”

This supposed heterarchical system:

  • “provides concrete mechanisms for heterarchical interactions and ‘institutionalizes’ and encourages forms of heterarchy like advice” (a toy sketch of this advice style follows the list)
  • allows integration of modules during development (a feature for the user, i.e. the programmer)
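Here is how that advice style of interaction might look in miniature (a hypothetical toy of mine, not SEER's actual mechanism): modules run as peers and deposit non-binding suggestions on a shared board, which other modules are free to consult or ignore, and new modules can be dropped in without rewiring the rest.

```python
# Hypothetical sketch of heterarchical "advice" -- illustrative only,
# not how SEER was actually implemented.

class Board:
    """A shared place where any module may leave non-binding suggestions."""
    def __init__(self):
        self.advice = []  # (topic, suggestion) pairs

    def suggest(self, topic, suggestion):
        self.advice.append((topic, suggestion))

    def consult(self, topic):
        return [s for t, s in self.advice if t == topic]

class LineFinder:
    """A narrow expert: it only offers advice, it demands nothing."""
    def run(self, board):
        board.suggest("region", "the bright patch is probably one object")
        return "found lines"

class RegionGrower:
    """Another peer: free to use the advice or ignore it."""
    def run(self, board):
        hints = board.consult("region")
        if hints:
            return f"grew region using hint: {hints[0]}"
        return "grew region from scratch"

board = Board()
modules = [LineFinder(), RegionGrower()]  # more modules can be added during development
for m in modules:
    print(m.run(board))
```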

One aspect of this is the parallelism, and whether that was actually better than serial methods. The MIT heterarchy thread eventually turned into Society of Mind, or at least that’s what Patrick Winston indicates in his section introduction [2]:

Minsky’s section introduces his theory of mind in which the basic constituents are very simple agents whose simplicity strongly affects the nature of communication between different parts of a single mind. Working with Papert, he has greatly refined a set of notions that seem to have roots in the ideas that formerly went by the name of heterarchy.

Society of Mind is highly cited but rarely implemented or tested. Reactive (aka behavior-based) robotics architectures can be heterarchies, but they are either ignored by AI or relegated to the bottom layer of three-layer robot architectures. The concepts of modularity and parallel processing have been folded into general software engineering paradigms.

But I wonder if maybe the heterarchy concept(s) for cognitive architectures were abandoned too quickly. The accidents of history may have already incorporated the best ideas from heterarchies into computer science; however, I am not yet sure about that.

References

[1] M. Minsky. The Society of Mind. New York: Simon and Schuster, 1986, pp. 249-250.
[2] P.H. Winston & R.H. Brown, Eds., Artificial Intelligence: An MIT Perspective, vol. 1, MIT Press, 1979.
[3] P.H. Winston, “Heterarchy in the M.I.T. Robot.” MIT AI Memo Vision Flash 8, March 1971.
[4] E.C. Freuder, “Active Knowledge.” MIT AI Memo Vision Flash 53, Oct. 1973.


Image Credits

  1. Samuel H. Kenyon’s mashup of Magritte and Agent Smith of the Matrix trilogy
  2. John Zeweniuk

 


What are Symbols in AI?

Posted in artificial intelligence on February 22nd, 2011 by Samuel Kenyon

A main underlying philosophy of artificial intelligence and cognitive science is that cognition is computation.  This leads to the notion of symbols within the mind.

There are many paths to explore how the mind works.  One might start from the bottom, as is the case with neuroscience or connectionist AI.  So you can avoid symbols at first.  But once you start poking around the middle and top, symbols abound.

Besides the metaphor of top-down vs. bottom-up, there is also the crude summary of Logical vs. Probabilistic.  Some people have made theories that they think could work at all levels, starting with the connectionist basement and moving all the way up to the tower of human language, for instance Optimality Theory.   I will quote one of the Optimality Theory creators, not because I like the theory (I don’t, at least not yet), but because it’s a good summary of the general problem [1]:

Precise theories of higher cognitive domains like language and reasoning rely crucially on complex symbolic rule systems like those of grammar and logic. According to traditional cognitive science and artificial intelligence, such symbolic systems are the very essence of higher intelligence. Yet intelligence resides in the brain, where computation appears to be numerical, not symbolic; parallel, not serial; quite distributed, not as highly localized as in symbolic systems. Furthermore, when observed carefully, much of human behavior is remarkably sensitive to the detailed statistical properties of experience; hard-edged rule systems seem ill-equipped to handle these subtleties.

Now, when it comes to theorizing, I’m not interested in getting stuck in the wild goose chase for the One True Primitive or Formula.  I’m interested in cognitive architectures that may include any number of different methodologies.  And those different approaches don’t necessarily result in different components or layers.  It’s quite possible that within an architecture like the human mind, one type of structure can emerge from a totally different structure.  But depending on your point of view—or level of detail—you might see one or the other.

At the moment I’m not convinced of any particular definition of mental symbol.  I think that a symbol could in fact be an arbitrary structure, for example an object in a semantic network which has certain attributes.  The sort of symbols one uses in everyday living comes into play when one structure is used to represent another structure.  Or, perhaps instead of limiting ourselves to “represent” I should just say “provides an interface.”  One would expect a good interface for producing a symbol to be a simplifying interface.  As an analogy, you use symbols on computer systems all the time.  One touch of a button on a cell phone activates thousands of lines of code, which may in turn activate other programs and so on.  You don’t need to understand how any of the code works, or how any of the hardware running the code works.  The symbols provide a simple way to access something complex.
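To push the cell phone analogy into code (a made-up example; the names are mine): the symbol is just a cheap handle, and all of the machinery it stands for stays hidden behind it.

```python
# A symbol as a simplifying interface: the user only touches the name
# "call_home"; the complexity behind it is never exposed.

def dial(number):
    # stand-in for thousands of lines of radio, codec, and protocol code
    return f"connecting to {number}..."

symbols = {
    "call_home": lambda: dial("+1-555-0100"),
    "call_work": lambda: dial("+1-555-0199"),
}

# Pressing the button is using the symbol.  The user never sees dial().
print(symbols["call_home"]())
```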

A system of simple symbols that can be easily combined into new forms also enables wonderful things like language.  And the ability to set up signs for representation (semiosis) is perhaps a partial window into how the mind works.

One of my many influences is Society of Mind by Marvin Minsky [2], which is full of theories of these structures that might exist in the information flows of the mind.  However, Society of Mind attempts to describe most structures as agents.  An agent isn’t merely a structure being passed around; it is also actively processing information itself.

Symbols are also important when one is considering if there is a language of thought, and what that might be.  As Minsky wrote:

Language builds things in our minds.  Yet words themselves can’t be the substance of our thoughts.  They have no meanings by themselves; they’re only special sorts of marks or sounds…we must discard the usual view that words denote, or represent, or designate; instead, their function is control: each word makes various agents change what various other agents do.

Or, as Douglas Hofstadter puts it [3]:

Formal tokens such as ‘I’ or “hamburger” are in themselves empty. They do not denote.  Nor can they be made to denote in the full, rich, intuitive sense of the term by having them obey some rules.

Throughout the history of AI, I suspect, people have made intelligent programs and chosen some atomic object type to use for symbols, sometimes even something intrinsic to the programming language they were using.  But simple symbol manipulation doesn’t result in human-like understanding.  Hofstadter, at least in the 1970s and 80s, said that symbols have to be “active” in order to be useful for real understanding.  “Active symbols” are actually agencies which have the emergent property of symbols.  They are decomposable, and their constituent agents are quite stupid compared to the type of cognitive information the symbols are taking part in.  Hofstadter compares these symbols to teams of ants that pass information between teams which no single ant is aware of.  And then there can be hyperteams and hyperhyperteams.
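A crude way to picture that (my own toy reading of the ant-team image, not Hofstadter's actual model): the symbol is not a token stored anywhere in the program; it is the collective activation of a team of very dumb detector agents.

```python
# Toy picture of an "active symbol": no single agent knows the concept;
# the symbol is the aggregate activity of a team of stupid detectors.

class Detector:
    """A very dumb agent: it fires if its one feature is present."""
    def __init__(self, feature):
        self.feature = feature

    def fires(self, scene):
        return self.feature in scene

class ActiveSymbol:
    """An agency whose collective activation stands for a concept."""
    def __init__(self, name, detectors, threshold=0.6):
        self.name = name
        self.detectors = detectors
        self.threshold = threshold

    def activation(self, scene):
        votes = sum(d.fires(scene) for d in self.detectors)
        return votes / len(self.detectors)

    def is_active(self, scene):
        return self.activation(scene) >= self.threshold

hamburger = ActiveSymbol(
    "hamburger",
    [Detector(f) for f in ("bun", "patty", "lettuce", "round", "edible")])

scene = {"bun", "patty", "round", "edible", "table"}
print(hamburger.activation(scene))  # 0.8
print(hamburger.is_active(scene))   # True: the "hamburger" symbol is active
```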

References
[1] P. Smolensky, http://web.jhu.edu/cogsci/people/faculty/Smolensky/
[2] M. Minsky, Society of Mind, Simon & Schuster, 1986.
[3] D. Hofstadter, Metamagical Themas, Basic Books, 1985.
