Cognitive Abstraction Manifolds

Posted in artificial intelligence, philosophy on July 19th, 2014 by Samuel Kenyon

A few days ago I started thinking about abstractions whilst reading Surfaces and Essences, a recent book by Douglas Hofstadter and Emmanuel Sander. I suspect efforts like Surfaces and Essences, which traverse vast and twisted terrains across cognitive science, are underrated as scientific contributions.

But first, let me briefly introduce the idea-triggering tome. Surfaces and Essences focuses on the role of analogies in thought. It’s kind of an über Metaphors We Live By. Hofstadter and his French counterpart Sander are concerned with categorization, which they hold to be the mind’s primary way of creating concepts and the essence of remembering. And each category is built entirely from a sequence of analogies. These sequences generally grow bigger and more complicated as a child develops into an adult.

The evidence is considerable, but based primarily on language as a window into the mind’s machinery, an approach which comes as no surprise to those who know of Steven Pinker’s book, The Stuff of Thought: Language as a Window into Human Nature. There are also some subjective experiences used as evidence, namely introspection about which categories and mechanisms allowed a bizarre memory to surface in a particular situation. (You can get an intro in this video recording of a Hofstadter presentation.)

Books like this—I would include in this spontaneous category Marvin Minsky’s books Society of Mind and The Emotion Machine—offer insight into psychology and what I would call cognitive architecture. They appeal to some artificial intelligence researchers / aficionados but they don’t readily lend themselves to any easy (or fundable) computer implementations. And they usually don’t have any single or easy mapping to other cognitive science domains such as neuroscience. Partly, the practical difficulty is that of needing full systems. But more to the point of this little essay, they don’t even map easily to other sub-domains nearby in psychology or artificial intelligence.

Layers and Spaces

One might imagine that a stack of enough layers at different levels will provide a full model and/or implementation of the human mind. Even if the layers overlap, one just needs full coverage—and small gaps presumably will lend themselves to obvious filler layers.

For instance, you might say one layer is the Surfaces and Essences analogy engine, and another layer deals with consciousness, another with vision processing, another with body motion control, and so on.

layers

But it’s not that easy (I know, I know…that’s pretty much the mantra of skeptical cognitive science).

I think a slice of abstraction space is probably more like a manifold or some other arbitrary n-dimensional space. And yes, this is an analogy.

These manifolds could be thought of—yay, another analogy!—as 3D blobs, which on this page will be represented as a 2D pixmap (see the lava lamp image). “Ceci n’est pas une pipe.”

blobs in a lava lamp

Now, what about actual implementations or working models, as opposed to theoretical models? Won’t there be additional problems of interfaces between the disparate manifolds?

Perhaps we need a class of theories whose abstraction space is in another dimension which represents how other abstraction spaces connect. Or, one’s model of abstraction spaces could require gaps between spaces.

Imagine blobs in a lava lamp, but they always repel to maintain a minimal distance from each other. Interface space is the area in which theories and models can connect those blobs.

I’m not saying that nobody has come up with interfaces at all in these contexts. We may already have several interface ideas, recognized as such or not. For instance, some of Minsky’s theories which fall under the umbrella of Society of Mind are about connections. And maybe there are more abstract connection theories out there that can bridge gaps between entirely different theoretical psychology spaces.

Epilogue

Recently Gary Marcus bemoaned the lack of good meta-theories for brain science:

… biological complexity is only part of the challenge in figuring out what kind of theory of the brain we’re seeking. What we are really looking for is a bridge, some way of connecting two separate scientific languages — those of neuroscience and psychology.

At theme level 2 of this essay, most likely these bridges will be dependent on analogies as prescribed by Surfaces and Essences. At theme level 1, perhaps these bridges will be the connective tissues between cognitive abstraction manifolds.


Image Credits:

  1. Jorge Konigsberger
  2. anthony gavin


Multitasking, Consciousness, and George Lucas

Posted in interaction design on August 8th, 2010 by Samuel Kenyon

Humans can only be conscious of one task at a time.

Tasks that user experience and interaction designers are concerned with are usually relatively complex: tasks that require you to think about them.  Generally this means you are aware of what you are doing.  Later on you might be so familiar with a standard task that you don’t have to be aware of it, but at first you have to learn it.  You might think this means consciousness is only needed for learning tasks.  However, in many cases not being aware during a task can result in failure, because your consciousness is required to handle new problems.

And yet it seems like we are multitasking all the time.  I routinely have 3-4 computers and 5-6 monitors with dozens of applications running at work, typing a line of code while somebody asks me a question.  In this photo you can see me multitasking while teleoperating a robot (this was my old office in 2008 with only 4 monitors…).

But I’m not consciously attentive to all that simultaneously.  I just switch between them quickly.  Typing while listening to someone talk is difficult without accidental cross-pollination, but it is easy if you already have a buffer of words/code in your head and you’re unconsciously typing it while your attention shifts to the completely different context of listening to a human talk.

Task switching and flipping between conscious and unconscious control happens so quickly and effortlessly that it’s hard to believe that there is really just one task getting “processed” at a time.  For some strange people, like computer engineers, this makes perfect sense, since that’s how basic CPUs work: one simple instruction at a time, millions of times per second.  Multiple programs can run on serial computers because the computer keeps all the programs in memory, and then hops between them very fast.  A little bit of this program, then a little bit of that program, and so on.
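That hopping can be sketched in a few lines of code.  This is a toy illustration of my point, not any real operating system scheduler: generators stand in for tasks, and a round-robin loop gives each one a tiny slice of "attention" in turn, producing an interleaved trace even though only one step ever runs at a time.

```python
# Toy sketch of serial "multitasking": one processor, many tasks,
# rapid hopping between them (round-robin scheduling).
from collections import deque

def task(name, steps):
    """A toy task that yields control back after each small step."""
    for i in range(steps):
        yield f"{name}: step {i}"

def round_robin(tasks):
    """Run one tiny step of one task at a time, hopping between them."""
    queue = deque(tasks)
    trace = []
    while queue:
        current = queue.popleft()
        try:
            trace.append(next(current))  # a little bit of this program...
            queue.append(current)        # ...then send it to the back of the line
        except StopIteration:
            pass                         # task finished; drop it from the queue
    return trace

trace = round_robin([task("typing", 3), task("listening", 2)])
print(trace)
```

The trace comes out interleaved (typing, listening, typing, listening, typing), which is exactly the illusion of simultaneity: no two steps ever ran at once.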

As Missy Cummings, a former Navy pilot and human factors researcher, puts it: “In complex problem solving tasks, humans are serial processors in that they can only solve a single complex problem or task at a time, and while they can rapidly switch between tasks, any sequence of tasks requiring complex cognition will form a queue…” [1].

For this reason, Cummings has warned people of the dangers of cell phone use while driving.  However, you can in fact drive while using a cell phone.  You can do lots of things while driving.  Have you ever been spaced out while driving (or walking) and found yourself transported to another location?  Who was driving in the interim?  You have trained yourself to drive enough that your mind can actually do it unconsciously.  However, if there is a problem or an unexpected event you will be alerted to that consciously–or you will not be alert and crash into something or someone.

But, since we can get close to multitasking–by switching quickly and letting learned tasks run unconsciously–why would user interaction designers be worried about multitasking?

Well first, as we already mentioned, often you need to be snapped out of auto-pilot to handle a new or emergency situation.  In some situations, not being conscious most of the time on the primary task can be very dangerous.  Do you want your ambulance driver to be playing GTA IV and polishing his/her nails on the way to rescue you (from your texting-related auto accident)?

Second, the more you multitask, generally the less efficient you become at all the tasks.  Personally, I have also found that if the tasks are in very different contexts, the context switching itself uses a lot of energy.

As Dave Crenshaw said (quote via Janna DeVylder) [2]:

When most people refer to multitasking, they are really talking about switchtasking. No matter how they do it, switching rapidly between two things is just not very efficient or effective.

And see DeVylder’s blog post “Save Me From Myself: Designing for Multitasking” for a good intro to the design considerations of multitasking.

Why is it Serial?

I think that serial consciousness evolved in animals because they are situated and embodied.  It wouldn’t work to have two conscious threads trying to drive one body in different directions.  Multiple threads have to share resources.  Having one thread conscious at a time gets closer to guaranteeing that multiple threads don’t conflict.  I would expect that when the system breaks down it would be very confused and might hurt itself.

Note: If the term “thread” is too computerese for your liking, then perhaps you can think of trains.  Consciousness is like a train station with only one track.  The metaphor breaks down pretty quickly, but hopefully that will get us on the same page.
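Sticking with the computerese version for a moment, the resource-sharing idea can be sketched directly.  This is my analogy rendered as a toy program, not a cognitive model: several threads want to steer one shared “body,” and a single lock guarantees that only one of them is “in consciousness” at a time, so their commands never conflict mid-action.

```python
# Toy sketch: multiple threads share one "body"; a lock ensures
# only one conscious controller issues a command at a time.
import threading

body_lock = threading.Lock()
actions = []  # the one shared body: a serial log of commands

def conscious_task(name, moves):
    for move in moves:
        with body_lock:  # only one task holds "consciousness" at once
            actions.append(f"{name} -> {move}")

t1 = threading.Thread(target=conscious_task, args=("walk", ["left", "right"]))
t2 = threading.Thread(target=conscious_task, args=("talk", ["hello", "bye"]))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(actions))  # all four commands executed, one at a time
```

Take away the lock and the two controllers could interleave their commands arbitrarily, which is the “two conscious threads driving one body in different directions” failure mode described above.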

Certainly there is parallelism in the brain–indeed that is touted as one of the brain’s great advantages.  The parallelism is also very different from most of our digital computers (for those who like to compare brains to computers).  But cell networks are at a much lower level in the skyscraper of the mind.  

What about behaviors?  Somewhere in the middle levels of the mental skyscraper, we do have parallel behaviors, but they are automatic.  The autonomic nervous system (ANS) keeps everything running–breathing, heart rate, sweating, digestion, sexual arousal, etc.  You can be conscious about some of these behaviors, such as breathing, but you don’t need to do that.  And I would venture that if you could turn off the ANS and tried to control all those functions consciously at the same time, you would die quickly.

It may be trite but it’s worth invoking a manager hierarchy metaphor: The top manager is consciousness, and as you go lower, things become more automatic and less directly controllable by the higher up manager.  And this top manager is not director George Lucas, who supposedly micro-manages the tiniest details in his movies.  This manager is more like the other George Lucas, the one who oversees a vast empire–he doesn’t care about details (fast-forward to 08:15 in the video below for the relevant discussion).

References

[1] M. L. Cummings and P. J. Mitchell, “Predicting Controller Capacity in Remote Supervision of Multiple Unmanned Vehicles,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 38, no. 2, pp. 451-460, 2008.

[2] D. Crenshaw, The Myth of Multitasking: How “Doing It All” Gets Nothing Done.  Jossey-Bass, 2008.

Crosspost with my other blog, In the Eye of the Brainstorm.