Content/Internalism Space and Interfacism

Posted in artificial intelligence, interfaces, philosophy on December 29th, 2013 by Samuel Kenyon

Whenever a machine or moist machine (aka animal) comes up with a solution, an observer could imagine an infinite number of alternate solutions. The observed machine, depending on its programming, may have considered many possible options before choosing one. In any case, we could imagine a 2D or 3D (or really any dimensionality) mathematical space in which to place all these different solutions.

Fake example of a design/solution or analysis space.

Of course, you have to be careful about what the axes are. What if you choose the wrong variables? We could come up with dimensions for any kind of analysis or synthesis. I want to introduce here one such space, which illuminates particular spectra, offering a particular view into the design space of cognitive architectures. In some sense, it’s still a solution space—as a thinking system I am exploring the possible solutions for making artificial thinking systems.

Keep in mind that this landscape is only one view, just one scenic route through explanationville—or is it designville?

This photo could bring to mind several relevant concepts: design vs. analysis, representations, and the illusion of reality in reflections

Content/Internalism Space

In the following diagram I propose that these two cognitive spectra are of interest and possibly related:

  1. Content vs. no content
  2. Internal cognition vs. external cognition
Content vs. Internalism Space

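To make the two axes a little more concrete, here is a minimal sketch in Python that treats each stance on cognition as a point in this space. The stances and their coordinates are my own rough, made-up placements purely for illustration; nothing here is measured or definitive.

```python
# Toy rendering of the content/internalism space. Coordinates are invented:
# content runs from 0 (contentless) to 1 (fully contentful); internalism runs
# from 0 (cognition is external/wide) to 1 (everything interesting is neural).

stances = {
    "classical symbolic AI":   {"content": 0.9, "internalism": 0.9},
    "radical enactivism":      {"content": 0.1, "internalism": 0.2},
    "interfacism (this post)": {"content": 0.5, "internalism": 0.5},
}

def describe(name: str) -> str:
    """Summarize where a stance sits on the two spectra."""
    p = stances[name]
    content = ("content-heavy" if p["content"] > 0.66
               else "mostly contentless" if p["content"] < 0.33
               else "middle ground on content")
    locus = ("internalist" if p["internalism"] > 0.66
             else "externalist" if p["internalism"] < 0.33
             else "middle ground on locus")
    return f"{name}: {content}; {locus}"

for name in stances:
    print(describe(name))
```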

What the Hell is “Content”?

I don’t have a very good definition of this, and so far I haven’t seen one from anybody else despite its common usage by philosophers. One aspect (or perhaps container) of content is representation. That is a bit easier to comprehend—an informational structure can represent something in the real world (or represent some other informational structure). It may seem obvious that humans have representations in their minds, but that is debatable. Some, such as Hutto and Myin, suggest that human minds are primarily without content, and only a few faculties require content [1]:

Some cognitive activity—plausibly, that associated with and dependent upon the mastery of language—surely involves content. Still, if our analyses are right, a surprising amount of mental life (including some canonical forms of it, such as human visual experience) may well be inherently contentless.

And the primary type of content that Hutto and Myin try to expunge is representational. It’s worth mentioning that representation can be divorced from the Computational Theory of Mind. Nothing here goes against the mind as computation. If you could pause a brain, you could point to various informational states, which in turn compose structures, and say that those structures are “representations.” But they don’t necessarily mean anything—they don’t have to be semantic. This leads to aboutness…

Another aspect of content is aboutness. “Aboutness” is an easier word to use than the philosophical term intentionality, since “intentionality” has a different everyday meaning which can cause confusion [2]. We think about stuff. We talk about stuff. External signs are about stuff. And we all seem to have a lot of overlapping agreement on what stuff means; otherwise we wouldn’t be able to communicate at all and there wouldn’t be any sense of logic in the world.

So does this mean we all have similar representations? Does a stop sign represent something? Is that representation stored in all of our brains, and is that why we all know what a stop sign means? And what things would we not understand without in-brain representations? For instance, consider some sensory stimulus that sets off a chain reaction resulting in a particular behavior that most humans share. Is that an internal representation, or are dynamic interfaces, however complicated, something different?

Internal vs. External

This axis is about the prevailing cognitive science assumption that anything of cognitive interest is neural. Indeed, most would go even further than neural and limit themselves just to the brain. But the brain is just one part of your nervous system. Although human brain evolution and development seem to be the cause of our supposed mental advantages over other animals, we should be careful not to discard all the supporting and/or interacting structures. If we take the magic glass elevator a bit down and sideways, we might want to consider our insect cousins, in which the brain is not the only critical part of the nervous system—insects can still operate in some fashion for days or weeks without their brains.

I’m not saying here that the brain focus is wrong; I’m merely saying that one can have a spectrum. For instance, a particular ALife experiment could be analyzed from the point of view of anywhere on that axis. Or you could design an ALife situation on any point, e.g. just by focusing on the internal controller that is analogous to a brain (internalist) vs. focusing on the entire system of brain-body-environment (externalist).
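
As a concrete illustration of that spectrum, here is a minimal ALife-style sketch in Python. Everything about it is hypothetical and simplified; the point is only that the very same simulation can be analyzed internally (inspect the controller function and its memory alone) or externally (treat the whole coupled agent-environment trajectory as the unit of analysis).

```python
import random

def controller(sensor: float, memory: float) -> tuple[float, float]:
    """The 'brain': maps a sensor reading plus internal state to a motor
    command and an updated internal state. An internalist analysis looks
    only at this function and its memory variable."""
    motor = 1.0 if sensor < 0.5 else -1.0
    memory = 0.9 * memory + 0.1 * sensor
    return motor, memory

def run(steps: int = 20, seed: int = 0) -> list[tuple[float, float, float]]:
    """The whole brain-body-environment loop. An externalist analysis treats
    this entire trajectory (world + body + controller) as the object of
    study, not the controller alone."""
    rng = random.Random(seed)
    position, memory = 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        # the 'body' senses the world, with a little environmental noise
        sensor = max(0.0, min(1.0, position + rng.uniform(-0.1, 0.1)))
        motor, memory = controller(sensor, memory)
        position += 0.05 * motor  # the body acts back on the environment
        trajectory.append((position, sensor, memory))
    return trajectory

for pos, sense, mem in run(5):
    print(f"pos={pos:+.2f} sensor={sense:.2f} memory={mem:.2f}")
```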

Interfacism

Since there has to be an “ism” for everything, there is of course representationalism. Another philosophical stance that is sometimes pitted against representationalism is direct realism.

Direct realism seems to be kind of sloppy. It could simply mean that at some levels of abstraction in the mind, real world objects are experienced as whole objects, not as the various mental middlemen which were involved in constructing the representation of that object. E.g., we don’t see a chair by consciously and painstakingly sorting through various raw sensory data chunks—we have an evolved and developed system for becoming aware of a chair as an object “directly.”

Or, perhaps, in an enactivism or dynamic system sense, one could say that regardless of information processing or representations, real world objects are the primary cause of information patterns that propagate through the system which lead to experience of the object.

My middle ground between direct and indirect realism would, perhaps, be called “interfacism,” which is a form of representationalism that is enactivism-compatible. Perhaps most enactivists already think that way, although I don’t recall seeing any enactivist descriptions of mental representation in terms of interfaces.

What I definitely do not concede is any form of cognitive architecture which requires veridical, aka truthful, accounts anywhere in the mind. What I do propose is that any concept of an organism can be seen as interactions. The organism itself is a bunch of cellular interactions, and that blob interacts with other blobs and elements of the environment, some of which may be tools or cognitively-extensive information processors. Whenever you try to look at a particular interaction, there is an interface. Zooming into that interface reveals yet more interfaces, and so on. To say anything is direct, in that sense, is false.

For example, an interfacism description of a human becoming aware of a glass of beer would acknowledge that the human as an animate object and the beer glass as an inanimate object are arbitrary abstractions or slices of reality. At that level in that slice, we can say there is an interface between the human and the glass of beer, presumably involving the mind attributed to the human.

Human-beer interface

But, if we zoom into the interface, there will be more interfaces.

Zooming in to the human-beer interface reveals more interfaces.

And semantics will probably require links to other things; for instance, we don’t just see what is in front of us—we can be primed or biased, or hallucinate, or dream, etc. How sensory data comes to mean anything at all almost certainly involves evolutionary history and ontogeny (life history) and current brain states at least as much as any immediate perceptual trigger. And our perception is just a contraption of evolution, so we are never really seeing true reality—“true reality” is a nonsensical concept anyway.

I think interfacism may be a good alternate way to look at cognition, be it wide or narrow: at any given cognitive granularity, there is no direct connection between two “nodes” or objects. There is just an interface, and anything “direct” is at a level below, recursively. It’s also compatible with non-truthful representations and/or perception.
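
A rough way to picture that “no direct connection, only more interfaces” claim in code (a sketch only; the decomposition and names are invented for this example, not a claim about anatomy):

```python
from dataclasses import dataclass, field

@dataclass
class Interface:
    """An interaction between two 'nodes' at some granularity. Zooming in
    never reaches anything 'direct'; it only reveals further interfaces."""
    a: str
    b: str
    parts: list["Interface"] = field(default_factory=list)

    def zoom(self) -> list["Interface"]:
        return self.parts

# The human/beer-glass interface from the figures, decomposed one level.
human_beer = Interface("human", "beer glass", [
    Interface("photons from the glass", "retina"),
    Interface("retina", "visual cortex"),
    Interface("hand", "glass surface"),
])

for sub in human_beer.zoom():
    print(f"{human_beer.a} <-> {human_beer.b} contains: {sub.a} <-> {sub.b}")
```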

Some might say that representations have to be truthful, or that representations must exist—for instance in animal behaviors—because there is some truthful mapping between the real world and the behavior. With an interface point of view, we can throw truth out the window. Mappings can be arbitrary. They may be consistent and/or accurate, but they don’t have to be truthful in any sense aside from that.
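
For instance, a mapping can be consistent, and therefore useful, without being “truthful” in any richer sense. A minimal sketch (the stimuli, codes, and behaviors are all made up):

```python
# An arbitrary but consistent mapping from stimuli to internal tokens to
# behavior. The tokens bear no resemblance to the world -- nothing about
# them is "truthful" -- yet behavior comes out appropriately because the
# same mapping is applied every time.

encode = {"looming shadow": 0x2A, "sugar gradient": 0x07}  # arbitrary codes
respond = {0x2A: "flee", 0x07: "approach"}

def behave(stimulus: str) -> str:
    return respond[encode[stimulus]]

assert behave("looming shadow") == "flee"       # consistent on every call
assert behave("sugar gradient") == "approach"
```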


References
[1] D. D. Hutto and E. Myin, Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: MIT Press, 2013.
[2] D. C. Dennett, Intuition Pumps and Other Tools for Thinking. W. W. Norton & Company, 2013.


Image credits
Figures by Samuel H. Kenyon.
Photo by Dimosthenis Kapa


Artificial Intelligence is a Design Problem

Posted in artificial intelligence on September 2nd, 2013 by Samuel Kenyon

Take a look at this M.C. Escher lithograph, “Ascending and Descending”.

"Ascending and Descending" by M.C. Escher

Is this art? Is it graphic design? Is it mathematical visualization? It is all of those things. One might even say that it’s also an architectural plan given that it has been used to implement physical 3-dimensional structures which feature the same paradox when viewed from a particular angle.

A physical implementation of M.C. Escher’s “Ascending and Descending”

Design and Specialization

Design stated as a single word typically means design of form, such as with industrial design, graphic design, and interior design. It would come as no surprise to anyone to see the term design used for any other form-creation discipline, such as architecture and software interfaces. Usually if somebody means production of stuff in their specialization, they use a modifier, e.g. “software design.”

I do not often encounter software engineers self-styled as “designers.” When I hang out with people in the various related disciplines of user experience, calling oneself a “designer” is perfectly fine—there is an atmosphere of design of form. Some of them also are “developers”, i.e. programmers of web or mobile apps. It’s common for the younger designers to speak of a wish to learn to code. Perhaps the design specialists are a bit more open to cross-discipline experience. To be fair, the creative point of view seems to be gaining in favor—there are now coding schools which focus on rapidly teaching the creative aspect of coding to people who have no code experience and possibly no mathematical experience.

Specialized design fields presumably have a lot of overlap if one were to examine general aspects and the notion of abstractions and how abstractions are used; yet software engineers don’t typically think of themselves as “designers.” And they aren’t expected or encouraged to. I have experienced situations in which I designed the interface of an application using primarily interaction design practices, and also developed the software using typical software practices, and nobody would comprehend that I had done both of those things to any degree of competence. Professional designers in the realm of human-computer interfaces and user experience were brought in and were genuinely surprised that my product had already been designed—they expected a typical engineer interface design. Engineers have a very bad reputation in industry for making horrible human-computer interfaces. But as I said, they aren’t expected to do otherwise. Or, even worse, in particular cases no expectations for design existed at all because the people in charge had no concept of design.

What I’m trying to get across is my observation that design of form is not integrated with engineering and computer science disciplines, at least not to the degree that a single person is expected to be competent in both. And really, entire corporate organizations that were traditionally hyper-engineering-focused have had rough times trying to comprehend what interaction design is and why it is important for making usable products that customers want to buy.

It’s easy to point to some mega-popular companies that got the balance right from the start, at least organizationally—not necessarily for each individual in the company—such as Google. Google has a reputation as a place for smart programmers to go hack all day to make information free for all humankind. But really it became big and has sustained that in part because of the integration of the design of its human-computer interfaces. If you don’t think of Google as a design company, it’s because you think design means stylish, trendy, “new”, etc. You might expect to be stopped in your tracks by the mere sight of some artistically designed form and say—wow, look at that amazing design. But the truth is that invisible design is an apex most never achieve; even those who try find it difficult. Simple is hard. The phrase “less is more” may sound too trite to be true, and yet it is generally a good target. If you notice the interface and find yourself marveling at it, it’s probably getting in the way of actually accomplishing your goal.

It should come as no surprise that most AI implementations, and many of the theories, are now and have always been generated by persons living in the realm of computer science and/or engineering.

Why is AI a Design Problem?

One might ask, first, is AI a problem at all? Some of the early AI papers referred to “the artificial intelligence problem”—singular—such as the 1955 proposal for the Dartmouth Summer Research Project on AI [1] and a report from one of those proposers, Minsky, done the following year at Lincoln Lab [2]. But even in those papers, they had already started viewing the problem as a collection of problems. Each ability that human intelligence exhibits should be able to be mechanized and therefore computerized; the problem for each ability was to figure out what that mechanical description is. Still, be it one problem or many, AI was described in terms of problems.

So, on the premise that AI is indeed a problem, why would I say it’s a design problem? And why do AI books rarely, if ever, even mention the word “design”?

AI is a design problem because there is no mechanical reduction that will automatically generate a solution for the problem. And there is no single, best solution. To be clear, that doesn’t mean that intelligence is irreducible. It just means that the creation of an artifact which is intelligent necessarily involves the artifact (which is a form) and an environment (the context). And creating an intelligent artifact within that system has no single, directly available answer.

Form and Context

Although AI is often defined and practiced as hundreds of narrow specialist sub-disciplines, as one old AI textbook put it [3]:

The ultimate goal of AI research (which we are very far from achieving) is to build a person, or, more humbly, an animal.

(Of course humans are animals, so by pretending they aren’t, right off the bat there’s an arbitrary wall built which prevents cross-fertilization of concepts between disciplines.)

A more recent popular undergraduate AI textbook [4] abandons all hope of human-like AI:

Human behavior, on the other hand, is well-adapted for one specific environment and is the product, in part, of a complicated and largely unknown evolutionary process that still is far from producing perfection. This book will therefore concentrate on general principles of rational agents and on components for constructing them.

So we’re starting to see how a major portion of AI research has decided that, since humans are too difficult to understand and too imperfect, the focus will be on ideal models that work in ideal contexts. The derision of real-world environments as “one specific environment” is interesting—if it were simply one specific environment, wouldn’t that make human-type AI easier to figure out? But it’s not an easy-to-define environment, of course. It is specific to areas of Earth and involves myriad dynamic concepts interacting in different ways. And, as said, it is not perfect.

But design disciplines routinely tackle exactly those kinds of problems—a problem is defined, perhaps very vaguely at first, which involves a complex system operating in a real-world environment. Even after coming up with a limited set of requirements, the number of interactions makes creating a form that maintains equilibrium very hard. There are too many factors for an individual to comprehend all at once without some process of organization and the methods that good designers use. A bad designer has a very small chance of success given the complexity of any problem. Yet somehow some designers make artifacts that work. Both the successes and failures of design are of importance to making intelligent systems. I’m not going to go into design methods in this blog post (I will talk more about design in future posts), but I will have to say something about form and context.

It is an old notion that design problems attempt to make a good fit between a form and its context [5]. Christopher Alexander (coming from architecture of buildings and villages) described form-context systems as “ensembles,” of which there are a wide variety:

The biological ensemble made up of a natural organism and its physical environment is the most familiar: in this case we are used to describing the fit between the two as well-adaptedness.

…The ensemble may be a musical composition—musical phrases have to fit their contexts too…

…An object like a kettle has to fit the context of its use, and the technical context of its production cycle.

Form is not naturally isolated. It must be reintroduced into the wild of the system at hand, be it a civilized urban human context or, quite literally, a wild African savanna. Form is relevant because of its interface with context. They are the yin and yang of creating and modifying ensembles. And really, form is merely a slice of form-context. The boundary can be shifted arbitrarily. Alexander suggests that a designer may actually have to consider several different context-form divisions simultaneously.

And this ensemble division is another important aspect of design that goes right to the heart of artificial intelligence and even cognitive science as a whole—is the form of intelligence the brain, or should the division line be moved to include the whole nervous system, or the whole body, or perhaps the whole local system that was previously defined as “environment”? Assuming one without consideration for the others (or not admitting the assumption at all) is a very limiting way to solve problems. And Alexander’s suggestion of layering many divisions might be very useful for future AI research.

Someone might argue that using design methods to create the form of an artificial mind is not necessary because AI researchers should instead be trying to implement a “seed” which grows into the form automatically. However, that involves defining a context in which a new form, the seed, turns into the target form over time. The form is still being fit to the ensemble. Indeed, we may need to solve even more form-context design problems, such as the development mechanisms. One could imagine designing a form, and then afterward somehow working in reverse to figure out a compressed form which can grow into that form with the assistance of environmental interactions. Still, design has not been made irrelevant.

Conclusion

In case it wasn’t clear by now, the form of an artificial organism includes its informational aspect. Its mind is a form.

Creating mental forms is a design problem because there is no single perfect solution. One can solve a design problem by creating a form that sets to false all the binary “misfits” (states in the ensemble where form and context do not fit). This satisfies the requirements only at that level, not at some optimal level. It is not the “best possible” way [5]. There is no best possible way; if there was, it would not be a design problem. Artificial minds do not have a best possible way either; they merely work in a context or they don’t. You could synthesize many different minds and compare them in a context and say “this one is the best”—but that will have to be a very small context or be a rating on only one variable.
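
Alexander’s binary misfit idea can be sketched directly. In this toy example (the misfit variables and numbers are hypothetical), a form is acceptable when every misfit evaluates to false, and two quite different forms can both pass, which is exactly why there is no single “best possible” mind:

```python
# Hypothetical misfit variables for some context: True means the form and
# the context do NOT fit on that point. A design "works" when all are False.
def misfits(form: dict) -> dict:
    return {
        "too slow to react":     form["reaction_ms"] > 200,
        "cannot learn online":   not form["learns_online"],
        "exceeds energy budget": form["power_w"] > form["budget_w"],
    }

def fits(form: dict) -> bool:
    return not any(misfits(form).values())

# Two different candidate "minds"; both satisfy the context, neither is "best".
mind_a = {"reaction_ms": 50,  "learns_online": True, "power_w": 20, "budget_w": 40}
mind_b = {"reaction_ms": 180, "learns_online": True, "power_w": 35, "budget_w": 40}
print(fits(mind_a), fits(mind_b))  # True True
```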

References

[1] J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” 1955. [Online]. Available: http://www-formal.stanford.edu/jmc/history/dartmouth.pdf. [Accessed: 03-Sep-2013].
[2] M. L. Minsky, “Heuristic Aspects of the Artificial Intelligence Problem.” MIT Lincoln Laboratory Report 34-55, 1956.
[3] E. Charniak and D. V. McDermott, Introduction to artificial intelligence. Reading, Mass.: Addison-Wesley, 1985.
[4] S. J. Russell and P. Norvig, Artificial intelligence: a modern approach. Upper Saddle River, N.J.: Prentice Hall/Pearson Education, 2003.
[5] C. Alexander, Notes on the synthesis of form. Cambridge, MA: Harvard University Press, 1971.

Image Credits

  1. Print by Van Dooren N.V. of Maurits C. Escher,  Klimmen en dalen, 1960.
  2. “Ascending and Descending” in LEGO by Andrew Lipson http://www.andrewlipson.com/escher/ascending.html

The Timeless Way of Building Software, Part 1: User Experience and Flow

Posted in interaction design on May 31st, 2012 by Samuel Kenyon

The Timeless Way of Building by Christopher Alexander [1] was exciting. As I read it, I kept making parallels between building/town design and software design.

Architecture

We’re not talking about just any kind of architecture here. The whole point of the book is to explain a theory of “living” buildings. They are designed and developed in a way that is more like nature—iterative, embracing change, flexibility, and repair.

Design is recognized not as the act of some person making a blueprint—it’s a process that’s tied into construction itself. Alexander’s method is to use a language of patterns to generate an architecture that is appropriate for its context. It will be unique, yet share many patterns with other human-used architectures.

This architecture theory includes a concept of the Quality Without a Name. And this is achieved in buildings/towns in a way that is more organic than the popular ways (of course there are exceptions and partially “living” modern architectures).

User Experience

Humans are involved in every step. Although patterns are shared, each building has its own appropriate language which uses only certain patterns and puts them in a particular order. The entire design and building process serves human nature in general, and specifically how humans will use this particular building and site. Is that starting to stir up notions of usability or user-centered design in your mind?

Read more »


UX Conference 2012: Design Studio

Posted in interaction design on May 11th, 2012 by Samuel Kenyon

One of the particularly good presentations at the UPA Boston 11th Annual User Experience Conference (#UPABOS12) was called “Design Studio” by designer Adam Connor.

The main points are:

  • Why brainstorming is usually implemented wrong.
  • How to properly generate ideas and consensus (the “design studio”).
    • Charrettes (which are used in the design studio process).

Design

Going from many concepts to one good one

As a super condensed version of the presentation, the main aspect of the design process that concerns us here is how to go from lots of concepts to the best single concept.

At the beginning of a project, or perhaps when some major failure has happened, some companies might try to throw people into a room for a “brainstorm” session. But…

Read more »
