Cognitive Abstraction Manifolds

Posted in artificial intelligence, philosophy on July 19th, 2014 by Samuel Kenyon

A few days ago I started thinking about abstractions whilst reading Surfaces and Essences, a recent book by Douglas Hofstadter and Emmanuel Sander. I suspect efforts like Surfaces and Essences, which traverse vast and twisted terrains across cognitive science, are probably underrated as scientific contributions.

But first, let me briefly introduce the idea-triggering tome. Surfaces and Essences focuses on the role of analogies in thought. It’s kind of an über Metaphors We Live By. Hofstadter and his French co-author Sander are concerned with categorization, which they hold to be essentially the primary way the mind creates concepts, as well as the basis of the act of remembering. And each category is made entirely from a sequence of analogies. These sequences generally get bigger and more complicated as a child develops into an adult.
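To make that claim a bit more concrete, here is a toy sketch of my own (not the authors’): a category as a growing set of exemplars, where a new instance joins if it is analogous to something already inside. Everything below, from the feature sets to the overlap threshold standing in for analogy-making, is an invented illustration.

```python
# Toy sketch: a category as a growing sequence of analogies. A new instance
# joins if it is "analogous" (here, crudely: shares enough features) to a
# member already in the category. Features and threshold are invented.

def analogous(a, b, threshold=2):
    return len(a & b) >= threshold  # crude stand-in for analogy-making

mother = [{"female", "parent", "human"}]    # seed sense of "mother"

candidates = [
    {"female", "parent", "cat"},            # a cat mother
    {"female", "parent", "board"},          # "motherboard", loosely
    {"source", "origin", "ship"},           # too much of a stretch here
]

for candidate in candidates:
    if any(analogous(candidate, member) for member in mother):
        mother.append(candidate)            # the category stretches

print(len(mother), "members")               # grows as analogies accumulate
```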

The evidence is considerable, but based primarily on language as a window into the mind’s machinery, an approach which comes as no surprise to those who know of Steven Pinker’s book, The Stuff of Thought: Language as a Window into Human Nature. There is also some subjective experience used as evidence, namely accounts of what categories and mechanisms allowed a bizarre memory to surface in a given situation. (You can get an intro in this video recording of a Hofstadter presentation.)

Books like this—I would include in this spontaneous category Marvin Minsky’s books Society of Mind and The Emotion Machine—offer insight into psychology and what I would call cognitive architecture. They appeal to some artificial intelligence researchers and aficionados, but they don’t readily lend themselves to any easy (or fundable) computer implementations. And they usually don’t have any single or easy mapping to other cognitive science domains such as neuroscience. Part of the practical difficulty is the need for full systems. But more to the point of this little essay, they don’t even map easily to other sub-domains nearby in psychology or artificial intelligence.

Layers and Spaces

One might imagine that a stack of enough layers at different levels will provide a full model and/or implementation of the human mind. Even if the layers overlap, one just needs full coverage—and small gaps presumably will lend themselves to obvious filler layers.

For instance, you might say one layer is the Surfaces and Essences analogy engine, and another layer deals with consciousness, another with vision processing, another with body motion control, and so on.
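In code, this intuition reduces to naive set coverage. A minimal sketch, with made-up layer names and function labels (none of this comes from the book):

```python
# Toy sketch of the "full coverage" intuition: each layer claims some set
# of cognitive functions, and coverage is just the union of the claims.
# Layer names and function labels are made up for illustration.

FUNCTIONS = {"analogy", "consciousness", "vision", "motor control", "emotion"}

layers = {
    "analogy engine": {"analogy"},
    "consciousness": {"consciousness"},
    "vision processing": {"vision"},
    "body motion control": {"motor control"},
}

covered = set().union(*layers.values())
gaps = FUNCTIONS - covered
print("gaps needing filler layers:", gaps)  # -> {'emotion'}
```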

layers

But it’s not that easy (I know, I know…that’s pretty much the mantra of skeptical cognitive science).

I think a slice of abstraction space is probably more like a manifold or some other arbitrary n-dimensional space. And yes, this is an analogy.

These manifolds could be thought of—yay, another analogy!—as 3D blobs, which on this page will be represented as a 2D pixmap (see the lava lamp image). “Ceci n’est pas une pipe.”

blobs in a lava lamp

Now, what about actual implementations or working models—as opposed to theoretical models? Won’t there be additional problems of interfaces between the disparate manifolds?

Perhaps we need a class of theories whose abstraction space is in another dimension which represents how other abstraction spaces connect. Or, one’s model of abstraction spaces could require gaps between spaces.

Imagine blobs in a lava lamp, but they always repel to maintain a minimal distance from each other. Interface space is the area in which theories and models can connect those blobs.
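As a numeric cartoon of that picture, here is a minimal sketch assuming blobs are 2D points and repulsion is a fixed push whenever two points get closer than some minimum distance; all constants and the update rule are arbitrary choices:

```python
import random

# Minimal sketch of the lava-lamp picture: abstraction "blobs" as 2D
# points that push apart whenever they get closer than MIN_DIST, leaving
# interface space between them. All constants are arbitrary choices.
MIN_DIST = 1.0
STEP = 0.1

blobs = [[random.uniform(0, 3), random.uniform(0, 3)] for _ in range(5)]

for _ in range(200):  # relax the configuration
    for i, a in enumerate(blobs):
        for j, b in enumerate(blobs):
            if i == j:
                continue
            dx, dy = a[0] - b[0], a[1] - b[1]
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            if dist < MIN_DIST:  # too close: push a away from b
                a[0] += STEP * dx / dist
                a[1] += STEP * dy / dist

# Any pair now at distance >= MIN_DIST has "interface space" between it:
# room for theories about how the blobs connect.
```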

I’m not saying that nobody has come up with interfaces at all in these contexts. We may already have several interface ideas, recognized as such or not. For instance, some of Minsky’s theories which fall under the umbrella of Society of Mind are about connections. And maybe there are more abstract connection theories out there that can bridge gaps between entirely different theoretical psychology spaces.

Epilogue

Recently Gary Marcus bemoaned the lack of good meta-theories for brain science:

… biological complexity is only part of the challenge in figuring out what kind of theory of the brain we’re seeking. What we are really looking for is a bridge, some way of connecting two separate scientific languages — those of neuroscience and psychology.

At theme level 2 of this essay, these bridges will most likely depend on analogies, as prescribed by Surfaces and Essences. At theme level 1, perhaps these bridges will be the connective tissue between cognitive abstraction manifolds.


Image Credits:

  1. Jorge Konigsberger
  2. anthony gavin


Heterarchies and Society of Mind’s Origins

Posted in artificial intelligence on February 4th, 2014 by Samuel Kenyon

Ever wonder how Society of Mind came about? Of course you do.

One of the key ideas of Society of Mind [1] is that at some range of abstraction levels, the brain’s software is a bunch of asynchronous agents. Agents are simple—but a properly organized society of them results in what we call “mind.”
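A toy sketch of that idea, with invented agent names, might look like the following. Real Society of Mind agents are asynchronous; this sketch runs them sequentially off one message bus for simplicity:

```python
from queue import Queue

# Toy sketch of Society of Mind-style agents: each agent is trivially
# simple and only reacts to messages on a shared bus; anything mind-like
# has to come from how the society is wired. Agent names are invented.

bus = Queue()

def see(msg):      # a narrow expert: reports what it "sees"
    if msg == "look":
        bus.put("object ahead")

def grasp(msg):    # another narrow expert: acts on the report
    if msg == "object ahead":
        bus.put("grasping")

agents = [see, grasp]

bus.put("look")
while not bus.empty():
    msg = bus.get()
    print("bus:", msg)
    for agent in agents:
        agent(msg)  # in a real society these would run asynchronously
```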

agents

The book Society of Mind includes many sub-theories of how agents might work, structures for connecting them together, memory, etc. Although Minsky mentions some of the development points that led to the book, he makes no explicit references to old papers. The book is copyrighted “1985, 1986.” Rewind back to 1979, “long” before I was born. In the book Artificial Intelligence: An MIT Perspective [2], there is a chapter by Minsky called “The Society Theory of Thinking.” In a note, Minsky summarizes it as:

Papert and I try to combine methods from developmental, dynamic, and cognitive psychological theories with ideas from Artificial Intelligence and computational theories. Freud and Piaget play important roles.

Ok, that shouldn’t be a surprise if you read the later book. But what about heterarchies? In 1971 Patrick Winston described heterarchical organization as [3]:

An interacting community of processes, some narrow experts, others broad generalists, and still others in the role of critics.

tangled

“Heterarchy” is a term that many attribute to Warren McCulloch in 1945, based on his neural research. Although it may have been abandoned in AI, the concept found success in anthropology (according to the intertubes). It is important to note that a heterarchy can be viewed as a parent class of hierarchies, and a heterarchy can contain hierarchies.
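That “parent class” remark can be made literal in code. Here is a sketch, with invented class names: a hierarchy is just a heterarchy whose links are restricted to form a tree.

```python
# Making the "parent class" remark literal: a hierarchy is the special
# case of a heterarchy whose links form a tree. Class names are invented.

class Heterarchy:
    """Nodes with arbitrary directed links; cycles and peer links allowed."""
    def __init__(self):
        self.links = {}  # node -> set of nodes it talks to

    def connect(self, a, b):
        self.links.setdefault(a, set()).add(b)

class Hierarchy(Heterarchy):
    """A heterarchy restricted to at most one parent per node."""
    def connect(self, parent, child):
        if any(child in kids for kids in self.links.values()):
            raise ValueError("a node in a hierarchy has only one parent")
        super().connect(parent, child)

h = Hierarchy()
h.connect("root", "leaf")    # fine
# h.connect("other", "leaf") # would raise: "leaf" already has a parent
```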

In 1973 the student Eugene Freuder, who later became well known for constraint-based reasoning, reported on his “active knowledge” thesis for vision, called SEER [4]. In one of the funniest papers I’ve read, Freuder warns us that:

this paper will probably vacillate between cryptic and incoherent.

Nevertheless, it is healthy to write things down periodically. Good luck.

And later on that:

SEER never demands that much be done, it just makes a lot of helpful suggestion. A good boss.

This basic structure is not too hairy, I hope.

If you like hair, however, there are enough hooks here to open up a wig salon.

He refers to earlier heterarchy uses in the AI Lab, but says that they are isolated hacks, whereas his project is more properly a system designed to be a heterarchy which allows any number of hacks to be added during development. And this supposedly allows the system to make the “best interactive use of the disparate knowledge it has.”

This supposedly heterarchical system (a toy sketch of its “advice” style follows the list):

  • “provides concrete mechanisms for heterarchical interactions and ‘institutionalizes’ and encourages forms of heterarchy like advice”
  • allows integration of modules during development (a feature for the user, i.e. the programmer)
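Here is a toy sketch of that advice mechanism; the module names and scores are invented and are not Freuder’s actual code. Modules post weighted suggestions, and a consumer is free to act on the strongest one or none at all:

```python
# Toy sketch of SEER-style "advice": modules post suggestions with no
# obligation that anything act on them. Module names and scores are
# invented here for illustration.

suggestions = []

def edge_expert(image):
    suggestions.append(("maybe an edge at (3, 4)", 0.6))

def region_expert(image):
    suggestions.append(("maybe a uniform region", 0.3))

for module in (edge_expert, region_expert):
    module(image=None)  # each module merely offers advice

# A boss that never demands: act on the strongest suggestion, if any.
if suggestions:
    best = max(suggestions, key=lambda s: s[1])
    print("acting on:", best[0])
```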

One aspect of this is parallelism, and whether it was actually better than serial methods. The MIT heterarchy thread eventually turned into Society of Mind, or at least that’s what Patrick Winston indicates in his section introduction [2]:

Minsky’s section introduces his theory of mind in which the basic constituents are very simple agents whose simplicity strongly affects the nature of communication between different parts of a single mind. Working with Papert, he has greatly refined a set of notions that seem to have roots in the ideas that formerly went by the name of heterarchy.

Society of Mind is highly cited but rarely implemented or tested. Reactive (aka behavioral) robotics can be heterarchies, but such systems are either ignored by AI or relegated to the bottom of three-layer robot architectures. The concepts of modularity and parallel processing have been folded into general software engineering paradigms.

But I wonder if maybe the heterarchy concept(s) for cognitive architectures were abandoned too quickly. The accidents of history may have already incorporated the best ideas from heterarchies into computer science; however, I am not yet sure about that.

References

[1] M. Minsky. The Society of Mind. New York: Simon and Schuster, 1986, pp. 249-250.
[2] P.H. Winston & R.H. Brown, Eds., Artificial Intelligence: An MIT Perspective, vol. 1, MIT Press, 1979.
[3] P.H. Winston, “Heterarchy in the M.I.T. Robot.” MIT AI Memo Vision Flash 8, March 1971.
[4] E.C. Freuder, “Active Knowledge.” MIT AI Memo Vision Flash 53, Oct. 1973.


Image Credits

  1. Samuel H. Kenyon’s mashup of Magritte and Agent Smith of the Matrix trilogy
  2. John Zeweniuk



On the Concept of Shaping Thought with Language

Posted in artificial intelligence on February 24th, 2013 by Samuel Kenyon

Psychologist Lera Boroditsky says she’s “interested in how the languages we speak shape the way we think” [1].

This statement seems so innocent, and yet it implies that language definitely does shape thought. It also leads us to use a metaphor with “shape.”

Causes and Dependencies

Does language cause thought? Or at least in part? Or is it the other direction—thought causes language?

Is language even capable of being a cause of thought, even if it isn’t in practice?

Or in an architectural sense, is one dependent on the other? Is thought built on top of language?

Or is language built on top of thought?

Does language influence thought at all, even if one is not dependent on the other?

When people talk about language causing thought or vice versa, are they talking about language as a mental module (or distributed functionality) or the interactive act of using language?



What is a Room?

Posted in artificial intelligence, interfaces on February 5th, 2013 by Samuel Kenyon

We all share the concept of rooms. I suspect it’s common and abstract enough to span cultures and millennia of history.

a room

The aspects of things that are most important for us are hidden because of their simplicity and familiarity. (One is unable to notice something because it is always before one’s eyes.)
—Wittgenstein

Rooms are so common that at first it seems silly to even talk about rooms as an abstract concept. Yet, the simple obvious things are often important. Simple human things are often also quite difficult for computers and artificial intelligence.

Is there such a thing as a room? It seems to be a category. Categories like this are probably the results of our minds’ development and learning.

