Cognitive Abstraction Manifolds

Posted in artificial intelligence, philosophy on July 19th, 2014 by Samuel Kenyon

A few days ago I started thinking about abstractions whilst reading Surfaces and Essences, a recent book by Douglas Hofstadter and Emmanuel Sander. I suspect efforts like Surfaces and Essences, which traverse vast and twisted terrains across cognitive science, are probably underrated as scientific contributions.

But first, let me briefly introduce the idea-triggering tome. Surfaces and Essences focuses on the role of analogies in thought. It’s kind of an über Metaphors We Live By. Hofstadter and his French counterpart Sander are concerned with categorization, which they hold to be essentially the mind’s primary way of creating concepts and the very act of remembering. And each category is made entirely from a sequence of analogies. These sequences generally grow bigger and more complicated as a child develops into an adult.

The evidence is considerable, but based primarily on language as a window into the mind’s machinery, an approach which comes as no surprise to those who know of Steven Pinker’s book The Stuff of Thought: Language as a Window into Human Nature. There are also some subjective experiences used as evidence, namely introspective accounts of which categories and mechanisms allowed a bizarre memory to surface in a given situation. (You can get an intro in this video recording of a Hofstadter presentation.)

Books like this—I would include in this spontaneous category Marvin Minsky’s books Society of Mind and The Emotion Machine—offer insight into psychology and what I would call cognitive architecture. They appeal to some artificial intelligence researchers and aficionados, but they don’t readily lend themselves to any easy (or fundable) computer implementations. And they usually don’t have any single or easy mapping to other cognitive science domains such as neuroscience. Part of the practical difficulty is the need for full systems. But more to the point of this little essay, they don’t even map easily to other sub-domains nearby in psychology or artificial intelligence.

Layers and Spaces

One might imagine that a stack of enough layers at different levels will provide a full model and/or implementation of the human mind. Even if the layers overlap, one just needs full coverage—and small gaps presumably will lend themselves to obvious filler layers.

For instance, you might say one layer is the Surfaces and Essences analogy engine, and another layer deals with consciousness, another with vision processing, another with body motion control, and so on.

layers

But it’s not that easy (I know, I know…that’s pretty much the mantra of skeptical cognitive science).

I think a slice of abstraction space is probably more like a manifold or some other arbitrary n-dimensional space. And yes, this is an analogy.

These manifolds could be thought of—yay, another analogy!—as 3D blobs, which on this page will be represented as a 2D pixmap (see the lava lamp image). “Ceci n’est pas une pipe.”

blobs in a lava lamp

Now, what about actual implementations or working models, as opposed to theoretical models? Won’t there be additional problems of interfaces between the disparate manifolds?

Perhaps we need a class of theories whose abstraction space is in another dimension which represents how other abstraction spaces connect. Or, one’s model of abstraction spaces could require gaps between spaces.

Imagine blobs in a lava lamp, but ones that always repel each other to maintain a minimal distance. Interface space is the area in which theories and models can connect those blobs.

I’m not saying that nobody has come up with interfaces at all in these contexts. We may already have several interface ideas, recognized as such or not. For instance, some of Minsky’s theories which fall under the umbrella of Society of Mind are about connections. And maybe there are more abstract connection theories out there that can bridge gaps between entirely different theoretical psychology spaces.

Epilogue

Recently Gary Marcus bemoaned the lack of good meta-theories for brain science:

… biological complexity is only part of the challenge in figuring out what kind of theory of the brain we’re seeking. What we are really looking for is a bridge, some way of connecting two separate scientific languages — those of neuroscience and psychology.

At theme level 2 of this essay, most likely these bridges will be dependent on analogies as prescribed by Surfaces and Essences. At theme level 1, perhaps these bridges will be the connective tissues between cognitive abstraction manifolds.


Image Credits:

  1. Jorge Konigsberger
  2. anthony gavin


On Rehacktoring

Posted in programming on May 6th, 2014 by Samuel Kenyon

For at least a year now, I’ve been using the term “rehacktor,” a portmanteau of “refactor” and “hack.” Although I came up with it on my own, I was not surprised that googling it resulted in many hits. Perhaps we are witnessing the convergent evolution of an urgently needed word.

My definition was at first a tongue-in-cheek descriptor in VCS commits for refactoring tasks that I was not pleased about. However, upon reflection, I think the term, and the situation which leads to its necessity, are important.

But first, how are others using this term?

Other Definitions

One “intertubeloper” (a term I just coined) tweeting as WizOne Solutions has tried to establish this unsatisfactory definition:

Rehacktoring (n.) – The act of reaching a level of hackedness so great that the resultant code is elegant and sophisticated.

Something a bit more sensible is the concept of rehacktoring as refactoring without tests (for instance a Twitter search of “rehacktoring” will get some results along those lines).

My one-minute research bout also found a recent blog post by Katrina Owen which contains the string “Refactoring is Not Rehacktoring.” Her article paradoxically offers a meaning by not defining “rehacktoring.” She explains, via a few strategies, that one way to refactor is to change the code in baby steps. Each step results in code that still passes the tests. Each intermediate step may be ugly and redundant, but the tests will still pass; if they don’t, you revert and retry.

I’ve been using this baby-step refactoring strategy for a long time, having come up with it on my own. But as I read Owen’s post I realized that if one does not follow through with the full refactoring, one might end up with a ridiculous state of the code which is not necessarily any better than its pre-refactoring state. In that sense, “rehacktoring” can be a retroactive term to describe an abandoned refactoring.
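To make the intermediate-step idea concrete, here is a tiny hypothetical C++ sketch (the function names are invented for illustration, not taken from Owen’s post). At this point the tests still pass, but if the effort is abandoned here, the code is arguably no better than before:

#include <string>

// Hypothetical mid-refactoring state: the new function merely delegates to the
// old one. Tests still pass, but the duplication is pointless if left this way.
std::string formatNameLegacy(const std::string& first, const std::string& last) {
    return last + ", " + first;           // original logic, untouched
}

std::string formatName(const std::string& first, const std::string& last) {
    return formatNameLegacy(first, last); // step 1: delegate only
    // later steps: move the logic here, update call sites, delete the legacy function
}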

Owen meant for “rehacktoring” to mean unsafe code changes (I think). Both that meaning and abandoned refactoring should be valid variations.

Choices

Aside from those notions, I propose another valid meaning: any refactoring effort which is a mini death march. In other words, the refactoring has a high chance of failure due to schedule, resources, etc. I would guess a common trigger of rehacktoring is a manager demanding refactoring of crappy code that is critical to a deployed application.

But refactoring is not the only answer. There are other options, regardless of what John Manager or Joe Principal claims. Some very legitimate alternatives include, but are not limited to:

  • Don’t refactor. Spend time on other/new functionality.
  • Rewrite the class/module/application from scratch.
  • Stretch the refactoring out to a longer time frame, so that everyone involved is still spending most of their effort on new functionality that users care about.

The proposal to rewrite a module or application from scratch will scare those of weaker constitution. But it does work in many circumstances. Keep in mind that the first crack at doing something new might be the necessary learning phase to do it right the second time. A manager, or your own mind, might convince you that refactoring is quicker and/or lower risk than redoing it. But that is not always the case, especially if you are more motivated to make something new than tinker with something old. And I have seen the rewrite approach work in the wild with two different programmers, in which an intern writes working version 1, and then someone else (even another intern) writes version 2.

Rehacktoring may be the wrong thing to do. It is especially the wrong thing to do if it causes psychological depression in the programmer. After all, who wants to waste their life changing already-working code from bad to semi-bad? Based on observations (albeit not very scientific), I suspect that programmers are particularly ill-suited to refactoring code that they did not originally write. They have no emotional connection to that code. And that makes the alternate solutions listed above all the more appealing for the “soft” aspect of a software team.


On That Which is Called “Memory”

Posted in artificial intelligence, philosophy on April 19th, 2014 by Samuel Kenyon

Information itself is a foundational concept for cognitive science theories.

But the very definition of information can also cause issues, especially when the term is used to describe the brain “encoding” information from the senses without any regard for types of information and levels of abstraction.

Some philosophers are concerned with information that has meaning (although really everybody should be concerned…) and the nature of “content.” Prof. of Philosophical Psychology Daniel D. Hutto posted an article recently on memory [1]. He points out the entrenched metaphor of memories as items archived in storehouses in our minds, and the dangers of not recognizing that it is a metaphor.

Hutto also points out two important and different notions of information:

  1. covariant
  2. contentful

rings of a tree carry information about the age of the tree

Information as covariance is one of the most basic ways to define information itself philosophically. Naturally occurring information, and presumably artificial information, is at least covariance. I.e., information is the relationship between two arbitrary states which change together always (or at least fairly reliably).

What is content as philosophers use the term? I’ve mentioned the issue with that term before; here we will pretend it means representations. It’s common and easy to describe thoughts as built out of representations of things, be they real or imaginary. After all, those things are not actually inside a person’s brain, and to the best of our knowledge there is no ethereal linkage between a real object and a thought of that object (although it makes for exciting fiction to build a world in which any story written by a character creates a new universe where that fantasy is real).

I think content is built off of covariance. And what we often think of as “remembering” and “recall” are abstractions resting on several lower layers.

Behavioral AI

Herbert, a robot by Jonathan Connell, a student of Rod Brooks

I think if Rod Brooks [2] and company (Connell, Flynn, Angle and others) had continued their research in certain directions, they might have actually achieved layered mental architectures with learning capabilities, whether by design or by emergence. And they would have emergent internal behavior that could be referred to as “memory,” not like memory in a computer but like memory in an animal.

Memory in a system grown from reflexive subsystems is not just a module dropped in—it is in the nature of the system. And if that unspecific nature makes it difficult to use the term “memory,” then so be it. Maybe “memory” should be abandoned for this kind of project, since the word recalls “storage” and a host of computational baggage which is very achievable in computers (indeed, at many levels of abstraction) but misleading for making bio-inspired embodied situated creatures.

Stories?

Hutto claims that contentful information is not in a mind—it requires connections to external things [1]:

Yet arguably, the contents in question are not recovered from the archives of individual minds; rather they are actively constructed as a means of relating and making claims about past happenings. Again, arguably, the ability does not stem purely from our being creatures who have biologically inherited machinery for perceiving and storing informational contents; rather this is a special competence that comes only through mastery of specific kinds of discursive practices.

As that quote introduces, Hutto furthermore suggests that human narrative abilities, e.g. telling stories, may be part of our developmental program to achieve contentful information. If that’s true then it means my ability to pull up memories is somehow derived from childhood learning of communicating historic and fictional narratives to other humans.

Regardless of whether narrative competence is required, we can certainly explore many ways in which a computational architecture can expand from pre-programmed reflexes to conditioned responses to full-blown human level semantic memories. It could mean there are different kinds of mechanisms scaffolded on top of each other, as well as scaffolded on externally available interactions, and/or scaffolded on new abstractions such as semantic or pre-semantic symbols composed of essentially basic reflexes and conditioning.

What We Really Want from a Robot with Memory

A requirements approach could break down “memory” into what behaviors one wants a robot to be capable of regardless of how it is done.

A first stab at a couple parts of such a decomposition might be:

  1. It must be able to learn, in other words change its internal informational state based on current and previous contexts.
  2. Certain kinds of environmental patterns that are observed should be able to be reproduced to some degree of accuracy by the effectors.

In number 2, I mean remembering and recalling “pieces of information.” For example, the seemingly simple act of Human A remembering a name involves many stages of informational coding and translation. Later the recollection of the name involves a host of various computations in which the contexts trigger patterns at various levels of abstraction, resulting in conscious content as well as motor actions such as marking (or speaking) a series of symbols (e.g. letters of the alphabet) that triggers in Human B a fuzzy cloud of patterns that is close enough to the “same” name. Human B would say that Human A “remembered a name.” Or if the human produced the wrong marks, or no marks at all, we might say that the human “forgot a name” or perhaps “never learned it in the first place.”
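As a rough sketch only (and not a claim about how brains implement it), the two requirements above could be captured by a minimal interface such as the hypothetical AssociativeMemory class below, where contexts map to symbol sequences:

#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch of the two requirements: internal state changes with
// context (learn), and observed patterns can be reproduced by the effectors
// to some degree of accuracy (recall).
class AssociativeMemory {
public:
    // Requirement 1: change internal informational state based on context.
    void learn(const std::string& context, const std::vector<std::string>& pattern) {
        associations_[context] = pattern;  // a real system would blend, decay, and generalize
    }

    // Requirement 2: reproduce the pattern associated with a context, or nothing
    // ("forgot the name" / "never learned it in the first place").
    std::vector<std::string> recall(const std::string& context) const {
        auto it = associations_.find(context);
        return it == associations_.end() ? std::vector<std::string>{} : it->second;
    }

private:
    std::unordered_map<std::string, std::vector<std::string>> associations_;
};

The exact-lookup table is only a stand-in; the fuzzy, multi-stage recoding described above is precisely what this toy interface hides.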


References

  1. Hutto, D. D. (2014) “Remembering without Stored Contents: A Philosophical Reflection on Memory,” forthcoming in Memory in the Twenty-First Century. https://www.academia.edu/6799100/Remembering_without_Stored_Contents_A_Philosophical_Reflection_on_Memory
  2. Brooks, R. A. (1991). Intelligence without representation. Artificial intelligence, 47(1), 139-159.

Image Credits

  1. Nicholas Benner
  2. Cyberneticzoo


Polymorphism in Mental Development

Posted in artificial intelligence on March 9th, 2014 by Samuel Kenyon

Adaptability and uninterrupted continuous operations are important features of mental development.

An organism that can’t adapt enough is too rigid and brittle—and dies. The environment will never be exactly as expected (or exactly the same as any other previous time during evolution). Sure, in broad strokes the environment has to be the same (e.g. gravity), and the process of reproduction has limits, but many details change.

During its lifetime (ontogeny), starting from inception, all through embryogeny and through childhood and into adulthood, an organism always has to be “on call.” At the very least, homeostasis is keeping it alive. I’d say for the most part all other informational control/feedback systems, including the nervous system which in turn includes the brain, all have to be always operational. There are brief gaps, for instance highly altricial animals depend on their parents protecting them as children. And of course there’s the risk when one is asleep. But even in those vulnerable states, organisms do not suddenly change their cognitive architectures or memories in a radically broken way. Otherwise they’d die.

There are a couple computer-ish concepts that might be useful for analyzing and synthesizing these aspects of mental development:

  1. Parameter ranges
  2. Polymorphism

These aren’t the only relevant concepts; I just happen to be thinking about them lately.

Parameter Ranges

Although it seems to be obvious, I want to point out that animals such as humans have bodies and skills that do not require exact environmental matches. For example, almost any human can hammer a nail regardless of the exact size and shape of the hammer, or the nail for that matter.

Crows have affordances with sticks and string-like objects. Actually, crows aren’t alone—other bird species have been seen pulling strings for centuries [1]. Anyway, crows use their beaks and feet to accomplish tasks with sticks and strings.

a crow with a stick

a crow incrementally pull-stepping a string to get food

Presumably there is some range of dimensions and appearances of sticks and strings that a crow can:

  1. Physically manipulate
  2. Recognize as an affordance

And note that those items are required regardless of whether crows have insight at all or are just behaving through reinforcement feedback loops [1].

Organisms fit into their niches, but there’s flexibility to the fit. However, these flexibilities are not unlimited—there are ranges. E.g., physically a person’s hand cannot grip beyond a certain width. The discipline of human factors (and ergonomics) studies these ranges of the human body as well as cognitive limits.

For an AI agent, there are myriad places in which there could be a function f and its parameter x, with some range R of acceptable values for x. Perhaps the levels of abstraction at which this is useful are exactly those involved with the perception of affordances and related skill-knowledge.
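A minimal sketch of that idea, with invented names: R is just a closed interval, and a hypothetical tryGrip stands in for f, refusing any x outside the acceptable range:

#include <optional>

// Hypothetical parameter range R: a closed interval of acceptable values for x.
struct Range {
    double min, max;
    bool contains(double x) const { return x >= min && x <= max; }
};

// Hypothetical f: attempt a grip of width x, succeeding only inside the range.
std::optional<double> tryGrip(double x, const Range& R) {
    if (!R.contains(x)) {
        return std::nullopt;  // outside the organism's (or agent's) flexibility
    }
    return x;                 // placeholder for actually performing the grip
}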

Polymorphism

Polymorphic entities share the same interface. Typically one ends up with a family of types of objects (e.g. classes). At any given time, if you are given an object from such a family, you can interface with it in the same way.

In object oriented programming this is exemplified by calling functions (aka methods) on pointers to a base class object without actually knowing what particular child class the object is an instance of. It works conceptually because all the child classes inherit the same interface.

For example, in C++:

#include <iostream>
#include <memory>

class A {
public:
    virtual ~A() = default;   // virtual destructor: safe deletion through an A pointer
    virtual void foo() = 0;   // pure virtual: every derived class must supply foo()
};

class B : public A
{
public:
    void foo() override { std::cout << "B foo!\n"; }
};

class C : public A
{
public:
    void foo() override { std::cout << "C foo!\n"; }
};

int main()
{
    std::unique_ptr<A> obj(new B);
    obj->foo(); // does the B variation of foo()

    std::unique_ptr<A> obj2(new C);
    obj2->foo(); // does the C variation of foo()

    return 0;
}

That example is of course contrived for simplicity. It’s practical if you have one piece of code which should not need to know the details of what particular kind of object it has—it just needs to do operations defined by an interface. E.g., the main rendering loop for a computer game could have a list of objects to call draw() on, and it doesn’t care what specific type each object is—as long as it can call draw() on each object it’s happy.
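For concreteness, here is a bare-bones sketch of that rendering-loop scenario (Drawable and renderAll are invented names, not any particular engine’s API):

#include <memory>
#include <vector>

// The loop depends only on the draw() interface, never on concrete types.
struct Drawable {
    virtual ~Drawable() = default;
    virtual void draw() const = 0;
};

void renderAll(const std::vector<std::unique_ptr<Drawable>>& scene) {
    for (const auto& obj : scene) {
        obj->draw();  // each object supplies its own variation of draw()
    }
}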

So what the hell does this have to do with mental development? Well, it could be a way for a mind to add new functions to existing concepts without blowing away the existing functions. For instance, at a high level, a new skill S4 could be learned and connected to concept C, while skills S1, S2, and S3 remain usable for concept C. In a crisp analogy to programming polymorphism, the particular variant of S that is used in a given context would be based on the sub-type of C that is perceived.
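Taken literally as C++ (purely as a speculative sketch with invented names), the analogy might look like this: the perceived sub-type of C selects which variant of the skill runs, and learning S4 adds an override without disturbing the existing variants:

#include <iostream>

// Concept C and a skill S dispatched on the perceived sub-type of C.
struct ConceptC {
    virtual ~ConceptC() = default;
    virtual void skill() const { std::cout << "existing variant (S1)\n"; }
};

// A newly distinguished sub-type of C with a newly learned skill variant.
struct NewSubTypeOfC : ConceptC {
    void skill() const override { std::cout << "newly learned variant (S4)\n"; }
};

void act(const ConceptC& perceived) {
    perceived.skill();  // which variant runs depends on the sub-type perceived
}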

On the other hand, polymorphism could be fuzzier and more like the switch concept I mentioned in a previous post. In other words, the mechanism of the switch could be anything. E.g. a switch could be influenced by global emotional states instead of the active sub-type of C.

One can imagine polymorphism both at reflexive levels and at conceptual, symbol-based levels. Reflexive (or “behavioral”) networks could be designed to effectively arbitrate based on the “type” of an external situation via the mapping of inputs (sensor data) to outputs (actuators). Concept-based mental polymorphism would presumably actually categorize a perception (or perhaps an input from some other mental module), which would be mapped by subcategory to the appropriate version of an action. And maybe it’s not an action, but just the next mental node to visit.


References
[1] Taylor, A., Medina, F., Holzhaider, J., Hearne, L., Hunt, G., & Gray, R. (2010) An Investigation into the Cognition Behind Spontaneous String Pulling in New Caledonian Crows. PLoS ONE, 5(2). DOI: 10.1371/journal.pone.0009345



Image credits:

  1. University of Oxford
  2. David Schulz
