Cognitive Abstraction Manifolds

Posted in artificial intelligence, philosophy on July 19th, 2014 by Samuel Kenyon

A few days ago I started thinking about abstractions whilst reading Surfaces and Essences, a recent book by Douglas Hofstadter and Emmanuel Sander. I suspect efforts like Surfaces and Essences, which traverse vast and twisted terrains across cognitive science, are probably underrated as scientific contributions.

But first, let me briefly introduce the idea-triggering tome. Surfaces and Essences focuses on the role of analogies in thought. It’s kind of an über Metaphors We Live By. Hofstadter and his French counterpart Sander are concerned with categorization, which they hold to be the mind’s primary way of creating concepts and even the basis of remembering. Each category is built entirely from a sequence of analogies, and these sequences generally grow bigger and more complicated as a child develops into an adult.

The evidence is considerable, but it is based primarily on language as a window into the mind’s machinery, an approach which comes as no surprise to those who know of Steven Pinker’s book, The Stuff of Thought: Language as a Window into Human Nature. There are also some subjective experiences used as evidence, namely introspective accounts of which categories and mechanisms allowed a bizarre memory to surface in a particular situation. (You can get an intro in this video recording of a Hofstadter presentation.)

Books like this—I would include in this spontaneous category Marvin Minsky’s books Society of Mind and The Emotion Machine—offer insight into psychology and what I would call cognitive architecture. They appeal to some artificial intelligence researchers / aficionados, but they don’t readily lend themselves to any easy (or fundable) computer implementations. And they usually don’t have any single or easy mapping to other cognitive science domains such as neuroscience. Part of the practical difficulty is the need for full systems. But more to the point of this little essay, they don’t even map easily to nearby sub-domains in psychology or artificial intelligence.

Layers and Spaces

One might imagine that a stack of enough layers at different levels will provide a full model and/or implementation of the human mind. Even if the layers overlap, one just needs full coverage—and small gaps presumably will lend themselves to obvious filler layers.

For instance, you might say one layer is the Surfaces and Essences analogy engine, and another layer deals with consciousness, another with vision processing, another with body motion control, and so on.

layers

But it’s not that easy (I know, I know…that’s pretty much the mantra of skeptical cognitive science).

I think a slice of abstraction space is probably more like a manifold or some other arbitrary n-dimensional space. And yes, this is an analogy.

These manifolds could be thought of—yay, another analogy!—as 3D blobs, which on this page will be represented as a 2D pixmap (see the lava lamp image). “Ceci n’est pas une pipe.”

blobs in a lava lamp

Now, what about actual implementations or working models—as opposed to theoretical models? Won’t there be additional problems of interfaces between the disparate manifolds?

Perhaps we need a class of theories whose abstraction space is in another dimension which represents how other abstraction spaces connect. Or, one’s model of abstraction spaces could require gaps between spaces.

Imagine blobs in a lava lamp, but they always repel to maintain a minimal distance from each other. Interface space is the area in which theories and models can connect those blobs.
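
To make the analogy concrete, here is a deliberately literal toy in Python (entirely my own illustration, with made-up numbers, not a model of any theory): "blobs" on a line push apart until adjacent blobs are effectively at least a minimum distance apart, and the leftover gaps stand in for the "interface space" where connecting theories would live.

```python
# Toy illustration only: blobs on a line repel until adjacent blobs are at
# least `min_gap` apart; the remaining gaps stand in for "interface space."

def settle(blobs: list[float], min_gap: float = 1.0, steps: int = 200) -> list[float]:
    pos = sorted(blobs)
    for _ in range(steps):
        for i in range(len(pos) - 1):
            overlap = min_gap - (pos[i + 1] - pos[i])
            if overlap > 0:                  # too close: push the pair apart
                pos[i] -= overlap / 2
                pos[i + 1] += overlap / 2
    return pos

blobs = settle([0.0, 0.2, 0.5, 3.0])
gaps = [round(b - a, 2) for a, b in zip(blobs, blobs[1:])]
print(blobs, gaps)   # every adjacent gap has settled to roughly min_gap or more
```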

I’m not saying that nobody has come up with interfaces at all in these contexts. We may already have several interface ideas, recognized as such or not. For instance, some of Minsky’s theories which fall under the umbrella of Society of Mind are about connections. And maybe there are more abstract connection theories out there that can bridge gaps between entirely different theoretical psychology spaces.

Epilogue

Recently Gary Marcus bemoaned the lack of good meta-theories for brain science:

… biological complexity is only part of the challenge in figuring out what kind of theory of the brain we’re seeking. What we are really looking for is a bridge, some way of connecting two separate scientific languages — those of neuroscience and psychology.

At theme level 2 of this essay, most likely these bridges will be dependent on analogies as prescribed by Surfaces and Essences. At theme level 1, perhaps these bridges will be the connective tissues between cognitive abstraction manifolds.


Image Credits:

  1. Jorge Konigsberger
  2. anthony gavin


On That Which is Called “Memory”

Posted in artificial intelligence, philosophy on April 19th, 2014 by Samuel Kenyon

Information itself is a foundational concept for cognitive science theories.

But the very definition of information can also cause issues, especially when the term is used to describe the brain “encoding” information from the senses without any regard for types of information and levels of abstraction.

Some philosophers are concerned with information that has meaning (although really everybody should be concerned…) and the nature of “content.” Professor of Philosophical Psychology Daniel D. Hutto recently posted an article on memory [1]. He points out the entrenched metaphor of memories as items archived in storehouses in our minds, and the dangers of not recognizing that this is a metaphor.

Hutto also points out two important and different notions of information:

  1. covariant
  2. contentful

rings of a tree carry information about the age of the tree

Information as covariance is one of the most basic ways to define information philosophically. Naturally occurring information, and presumably artificial information, is at least covariance. That is, information is a relationship between two arbitrary states which change together always (or at least fairly reliably).
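
Here is a minimal Python sketch of that covariance notion using the tree-ring example (the code and names are mine, purely illustrative): ring count reliably co-varies with age, so an observer can exploit the relationship, and nothing in the setup needs to be "about" anything.

```python
# A toy of information-as-covariance: one state (ring count) reliably
# co-varies with another (age), so the first carries information about
# the second, with no "content" involved anywhere.

def grow_tree(age_years: int) -> dict:
    """Toy tree: one ring per year, so ring count co-varies with age."""
    return {"rings": age_years, "age": age_years}

def infer_age_from_rings(rings: int) -> int:
    """An observer exploiting the covariance; nothing here is 'about' the tree."""
    return rings

for age in (5, 40, 120):
    tree = grow_tree(age)
    assert infer_age_from_rings(tree["rings"]) == tree["age"]
print("ring count carries covariant information about tree age")
```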

What is content as philosophers use the term? I’ve mentioned the issue with that term before; here we will pretend it means representations. It’s common and easy to describe thoughts as built out of representations of things, be they real or imaginary, since those things are not actually inside a person’s brain, and to the best of our knowledge there is not some ethereal linkage between a real object and a thought of that object (although it makes for exciting fiction to build a world in which any story written by a character creates a new universe where that fantasy is real).

I think content is built off of covariance. And what we often think of as “remembering” and “recall” are abstractions resting on several lower layers.

Behavioral AI

Herbert, a robot by Jonathan Connell, a student of Rod Brooks

I think that if Rod Brooks [2] and company (Connell, Flynn, Angle, and others) had continued their research in certain directions, they might have actually achieved layered mental architectures with learning capabilities, whether by design, by emergence, or both. And those architectures would have emergent internal behavior that could be referred to as “memory”: not like memory in a computer, but like memory in an animal.

Memory in a system grown from reflexive subsystems is not just a module dropped in—it is in the nature of the system. And if that unspecific nature makes it difficult to use the term “memory,” then so be it. Maybe the term should be abandoned for this kind of project, since the word recalls “storage” and a host of computational baggage which is very achievable in computers (indeed, at many levels of abstraction) but misleading for making bio-inspired embodied situated creatures.

Stories?

Hutto claims that contentful information is not in a mind—it requires connections to external things [1]:

Yet arguably, the contents in question are not recovered from the archives of individual minds; rather they are actively constructed as a means of relating and making claims about past happenings. Again, arguably, the ability does not stem purely from our being creatures who have biologically inherited machinery for perceiving and storing informational contents; rather this is a special competence that comes only through mastery of specific kinds of discursive practices.

As that quote introduces, Hutto furthermore suggests that human narrative abilities, e.g. telling stories, may be part of our developmental program to achieve contentful information. If that’s true then it means my ability to pull up memories is somehow derived from childhood learning of communicating historic and fictional narratives to other humans.

Regardless of whether narrative competence is required, we can certainly explore many ways in which a computational architecture can expand from pre-programmed reflexes to conditioned responses to full-blown human level semantic memories. It could mean there are different kinds of mechanisms scaffolded on top of each other, as well as scaffolded on externally available interactions, and/or scaffolded on new abstractions such as semantic or pre-semantic symbols composed of essentially basic reflexes and conditioning.
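
As a toy example of just the first scaffolding step, here is a hedged Python sketch (my own illustration, not a claim about any particular architecture): a hard-wired reflex layer plus a conditioning layer that learns to trigger the same response from a previously neutral stimulus.

```python
# Toy reflex-plus-conditioning agent. The only "memory" is the learned
# association strength; there is no storage module to point to.

class ReflexiveAgent:
    def __init__(self):
        self.unconditioned = {"food"}   # stimuli wired to the reflex at "birth"
        self.association = {}           # stimulus -> strength, learned online

    def sense(self, stimuli: set) -> str:
        # Reflex layer: hard-wired response, plus online conditioning.
        if self.unconditioned & stimuli:
            self._condition(stimuli)
            return "salivate"
        # Conditioning layer: respond if a learned association is strong enough.
        if any(self.association.get(s, 0.0) > 0.5 for s in stimuli):
            return "salivate"
        return "do nothing"

    def _condition(self, stimuli: set) -> None:
        # Strengthen associations for neutral stimuli paired with the reflex trigger.
        for s in stimuli - self.unconditioned:
            self.association[s] = min(1.0, self.association.get(s, 0.0) + 0.2)

agent = ReflexiveAgent()
for _ in range(4):
    agent.sense({"bell", "food"})   # pairing phase
print(agent.sense({"bell"}))        # "salivate": a conditioned response, i.e. "memory"
```

The point of the sketch is that the "memory" here is a disposition of the whole system, not a separate archive that items get placed into.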

What We Really Want from a Robot with Memory

A requirements approach could break down “memory” into what behaviors one wants a robot to be capable of regardless of how it is done.

A first stab at a couple parts of such a decomposition might be:

  1. It must be able to learn, in other words change its internal informational state based on current and previous contexts.
  2. It must be able to reproduce, via its effectors and to some degree of accuracy, certain kinds of environmental patterns that it has observed.

In number 2, I mean remembering and recalling “pieces of information.” For example, the seemingly simple act of Human A remembering a name involves many stages of informational coding and translation. Later, the recollection of the name involves a host of computations in which the contexts trigger patterns at various levels of abstraction, resulting in conscious content as well as motor actions such as marking (or speaking) a series of symbols (e.g. letters of the alphabet) that triggers in Human B a fuzzy cloud of patterns close enough to the “same” name. Human B would then say that Human A “remembered a name.” Or, if Human A produced the wrong marks, or no marks at all, we might say that Human A “forgot a name” or perhaps “never learned it in the first place.”
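
One way to pin these requirements down without committing to a mechanism is to state them as an abstract interface plus behavioral tests. The sketch below is my own illustrative framing; the class and method names are hypothetical, not from any existing library.

```python
# Hypothetical requirements-level interface: it says what a robot with
# "memory" must be able to do, not how it does it.

from abc import ABC, abstractmethod

class MemoryCapableRobot(ABC):
    @abstractmethod
    def learn(self, context: dict) -> None:
        """Requirement 1: change internal informational state based on
        current and previous contexts."""

    @abstractmethod
    def reproduce(self, pattern_id: str) -> list:
        """Requirement 2: drive the effectors so that a previously observed
        environmental pattern is reproduced to some degree of accuracy."""
```

Under this framing, any mechanism that passes the behavioral tests counts as "memory," regardless of whether it looks anything like storage inside.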


References

  1. Hutto, D. D. (2014) “Remembering without Stored Contents: A Philosophical Reflection on Memory,” forthcoming in Memory in the Twenty-First Century. https://www.academia.edu/6799100/Remembering_without_Stored_Contents_A_Philosophical_Reflection_on_Memory
  2. Brooks, R. A. (1991). Intelligence without representation. Artificial intelligence, 47(1), 139-159.

Image Credits

  1. Nicholas Benner
  2. Cyberneticzoo


Content/Internalism Space and Interfacism

Posted in artificial intelligence, interfaces, philosophy on December 29th, 2013 by Samuel Kenyon

Whenever a machine or moist machine (aka animal) comes up with a solution, an observer could imagine an infinite number of alternate solutions. The observed machine, depending on its programming, may have considered many possible options before choosing one. In any case, we could imagine a 2D or 3D (or really any dimensionality) mathematical space in which to place all these different solutions.

Fake example of a design/solution or analysis space.

Of course, you have to be careful what the axes are. What if you chose the wrong variables? We could come up with dimensions for any kind of analysis or synthesis. I want to introduce here one such space, which illuminates particular spectra, offering a particular view into the design space of cognitive architectures. In some sense, it’s still a solution space—as a thinking system I am exploring the possible solutions for making artificial thinking systems.

Keep in mind that this landscape is only one view, just one scenic route through explanationville—or is it designville?

This photo could bring to mind several relevant concepts: design vs. analysis, representations, and the illusion of reality in reflections

Content/Internalism Space

In the following diagram I propose that these two cognitive spectra are of interest and possibly related:

  1. Content vs. no content
  2. Internal cognition vs. external cognition

Content vs. Internalism Space
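
As a toy rendering of the proposed space, one could treat each axis as a signed coordinate and place candidate systems in it. The example systems and their coordinates below are my own rough guesses, purely for illustration.

```python
# Toy coordinates in the proposed space: content runs from contentless (-1.0)
# to contentful (+1.0); internalism runs from externalist (-1.0) to
# internalist (+1.0). The placements are guesses for illustration only.

example_systems = {
    "classical symbolic planner":     {"content": +0.9, "internalism": +0.9},
    "Brooks-style subsumption robot": {"content": -0.7, "internalism": -0.6},
    "radical enactivist account":     {"content": -0.9, "internalism": -0.9},
}

for name, coords in example_systems.items():
    print(f"{name:32s} content={coords['content']:+.1f} "
          f"internalism={coords['internalism']:+.1f}")
```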

What the Hell is “Content”?

I don’t have a very good definition of this, and so far I haven’t seen one from anybody else despite its common usage by philosophers. One aspect (or perhaps container) of content is representation. That is a bit easier to comprehend—an informational structure can represent something in the real world (or represent some other informational structure). It may seem obvious that humans have representations in their minds, but that is debatable. Some, such as Hutto and Myin, suggest that human minds are primarily without content, and only a few faculties require content [1]:

Some cognitive activity—plausibly, that associated with and dependent upon the mastery of language—surely involves content. Still, if our analyses are right, a surprising amount of mental life (including some canonical forms of it, such as human visual experience) may well be inherently contentless.

And the primary type of content that Hutto and Myin try to expunge is representational. It’s worth mentioning that representation can be divorced from the Computational Theory of Mind. Nothing here goes against the mind as computation. If you could pause a brain, you could point to various informational states, which in turn compose structures, and say that those structures are “representations.” But they don’t necessarily mean anything—they don’t have to be semantic. This leads to aboutness…

Another aspect of content is aboutness. “Aboutness” is an easier word to use in place of the philosophical term intentionality; “intentionality” has a different everyday meaning which can cause confusion [2]. We think about stuff. We talk about stuff. External signs are about stuff. And we all seem to have a lot of overlapping agreement on what stuff means; otherwise we wouldn’t be able to communicate at all and there wouldn’t be any sense of logic in the world.

So does this mean we all have similar representations? Does a stop sign represent something? Is that representation stored in all of our brains, and is that how we all know what a stop sign means? And what things would we not understand without in-brain representations? For instance, consider some sensory stimulus that sets off a chain reaction resulting in a particular behavior that most humans share. Is that an internal representation, or are dynamic interfaces, however complicated, something different?

Internal vs. External

This is about the prevailing cognitive science assumption that anything of interest cognitively is neural. Indeed, most would go even further than neural and limit themselves just to the brain. The brain is just one part of your nervous system. Although human brain evolution and development seem to be the cause of our supposed mental advantages over other animals, we should be careful not to discard all the supporting and/or interacting structures. If we take the magic glass elevator a bit down and sideways, we might want to consider our insect cousins in which the brain is not the only critical part of the nervous system—and insects can still operate in some fashion for days or weeks without their brains.

I’m not saying here that the brain focus is wrong; I’m merely saying that one can have a spectrum. For instance, a particular ALife experiment could be analyzed from any point of view on that axis. Or you could design an ALife situation at any point, e.g. by focusing just on the internal controller that is analogous to a brain (internalist) vs. focusing on the entire system of brain-body-environment (externalist).

Interfacism

Since there has to be an “ism” for everything, there is of course representationalism. Another philosophical stance that is sometimes pitted against representationalism is direct realism.

Direct realism seems to be kind of sloppy. It could simply mean that at some levels of abstraction in the mind, real world objects are experienced as whole objects, not as the various mental middlemen which were involved in constructing the representation of that object. E.g., we don’t see a chair by consciously and painstakingly sorting through various raw sensory data chunks—we have an evolved and developed system for becoming aware of a chair as an object “directly.”

Or, perhaps, in an enactivism or dynamic system sense, one could say that regardless of information processing or representations, real world objects are the primary cause of information patterns that propagate through the system which lead to experience of the object.

My middle ground between direct and indirect realism would, perhaps, be called “interfacism,” which is a form of representationalism that is enactivism-compatible. Perhaps most enactivists already think that way, although I don’t recall seeing any enactivist descriptions of mental representation in terms of interfaces.

What I definitely do not concede is any form of cognitive architecture which requires veridical, aka truthful, accounts anywhere in the mind. What I do propose is that any concept of an organism can be seen as interactions. The organism itself is a bunch of cellular interactions, and that blob interacts with other blobs and elements of the environment, some of which may be tools or cognitively-extensive information processors. Whenever you try to look at a particular interaction, there is an interface. Zooming into that interface reveals yet more interfaces, and so on. To say anything is direct, in that sense, is false.

For example, an interfacism description of a human becoming aware of a glass of beer would acknowledge that the human as an animate object and the beer glass as an inanimate object are arbitrary abstractions or slices of reality. At that level in that slice, we can say there is an interface between the human and the glass of beer, presumably involving the mind attributed to the human.

Human-beer interface

But, if we zoom into the interface, there will be more interfaces.

Zooming in to the human-beer interface reveals more interfaces.
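
A minimal sketch of this "interfaces all the way down" picture as a recursive data structure (my own toy, with made-up levels of decomposition): every interaction is an interface, and zooming in just yields more interfaces rather than anything "direct."

```python
# Toy recursive interface: zooming into any interaction reveals further
# interfaces; the recursion stops only where we stop describing.

from dataclasses import dataclass, field

@dataclass
class Interface:
    a: str                                                # one side of the interaction
    b: str                                                # the other side
    sub: list["Interface"] = field(default_factory=list)  # what zooming in reveals

    def zoom(self, depth: int = 0) -> None:
        print("  " * depth + f"{self.a} <-> {self.b}")
        for s in self.sub:
            s.zoom(depth + 1)

beer = Interface("human", "glass of beer", sub=[
    Interface("retina", "reflected light", sub=[
        Interface("photoreceptor", "photon"),
    ]),
    Interface("hand", "glass surface"),
])
beer.zoom()
```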

And semantics will probably require links to other things; for instance, we don’t just see what is in front of us—we can be primed or biased, or hallucinate, or dream, etc. How sensory data comes to mean anything at all almost certainly involves evolutionary history and ontogeny (life history) and current brain states at least as much as any immediate perceptual trigger. And our perception is just a contraption of evolution, so we aren’t really seeing true reality ever—that is a nonsensical concept.

I think interfacism is possibly a good alternative way to look at cognition, be it wide or narrow: at any given cognitive granularity, there is no direct connection between two “nodes” or objects. There is just an interface, and anything “direct” is at a level below, recursively. It’s also compatible with non-truthful representations and/or perception.

Some might say that representations have to be truthful or that there are representations, for instance in animal behaviors, because there is some truthful mapping between the real world and the behavior. With an interface point of view we can throw truth out the window. Mappings can be arbitrary. There may be consistent and/or accurate mappings. But they don’t necessarily have to be truthful in any sense aside from that.


References
[1] D. D. Hutto and E. Myin, Radicalizing Enactivism: Basic Minds without Content. Cambridge, Mass.: MIT Press, 2013.
[2] D. C. Dennett, Intuition Pumps and Other Tools for Thinking. W. W. Norton & Company, 2013.


Image credits
Figures by Samuel H. Kenyon.
Photo by Dimosthenis Kapa


The Code Experience vs. the Math Experience

Posted in culture, philosophy on November 29th, 2012 by Samuel Kenyon

In the book The Mathematical Experience, the chapter on symbols mentions computer programming [1]. But it really doesn’t do justice to programming (aka coding). In fact it’s actually one of the lamer parts of an otherwise thought-provoking book. It’s not that it’s dated—a concern since the book was published in 1981—but that the authors only provide the paltry sentence, “Computer science embraces several varieties of mathematical disciplines but has its own symbols,” followed by some random examples of BASIC keywords and some operators.

cover of the edition I have

As mentioned in “The Disillusionment of Math,” I’ve always thought of programming as different than mathematics. And I almost always choose the experience of thinking in code over the experience of thinking in equations.

But I suspect others think of these as similar activities, both the province of mathematical people. Likewise, if a programmer tells somebody that they are a software engineer, the keyword “engineer” can prompt a response of “oh, you must do a lot of math.”

