## Language Does Not Shape Thought

Posted in artificial intelligence on May 18th, 2013 by Samuel Kenyon

Cognition causes language, not the other way around. Correlations between changes in thought and changes in language abound. But the arguments for causality from language to cognition are very weak in this context.

### What Do People Mean by “Language Shapes Thought”?

Lera Boroditsky likes to spread the meme that “language shapes thought.” Others have used it too when talking about Whorfian matters.

Previously I explored what “shaping” means in this context and how it might be a metaphor. The choice of word certainly matters—why not just say language controls thought or language causes thought? I think the reason is that people want to allow thought to control language as well. Indeed this is the weak form of the Sapir-Whorf hypothesis, i.e. linguistic relativity.

So shaping means a partial control or cause which does not prevent control or cause in the other direction. Furthermore, it seems that language shaping thought is not about the parts of the mind which are dedicated to language itself—those parts have to be partially language-defined in order to be able to produce utterances in that particular language.

Here we’ll look at various reasons why people might believe that language shapes thought.

### Thinking in Language?

One argument might be that the language of thought (mentalese) is:

1. The same type of language that is uttered by humans.
2. The same instance as the language of utterances.

The reason I specified #2 is that one could posit that the language of thought is the same type but a different instance, for example thinking in Russian but speaking in Mandarin.

I suppose it’s easy to assume this given the notion of “internal dialog”. But there’s a big difference between mentalese and consciously imagined or remembered linguistic memories. It’s a matter of abstraction levels. Mentalese is at a lower level.

Thinking in language has another problem. It implies that every language you learn must be translated back into the mental mother “tongue”, such as English in my case. But what about computer languages and other language-like things? What about mathematics? What about non-linguistic concepts? How would those get translated into the mother tongue (e.g. English)?

The argument that we think with language flies in the face of the computational theory of mind, which is fundamental to most of cognitive science.

As P. Schlenker [1] points out (echoing Steven Pinker):

we do not literally ‘think in words’: if we did, patients with a language deficit should automatically have a deficit in thought as well, which does not appear to be the case. Thus verbal language and thought should in principle be taken to be distinct.

Mentalese is symbolic and generative, but it doesn’t have to be a spoken or written language. The symbols are not words or characters; they are arbitrary computational patterns which we can think of as symbols in the context of the computational theory of mind.

Counterargument: Daniel Casasanto has argued that not thinking in language does not entail that language shaping thought is false [2]. Thus the following implication does not hold, where O is “we think in language” and W is “language shapes thought”:

$\neg O \to \neg W$

Logically, Casasanto is correct. But from a mind-software architectural approach, one wonders what the interfaces are by which high-level abstractions of language could define and/or influence thought. And where is the evidence of language as a principal cause affecting thought? Casasanto only provides a hypothesis that the frequencies of phrases in a language can reinforce already existing mental concepts in preference to certain others. So we have a possible weak form…of the weak form of Whorfianism.
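To spell out the missing premise (my reconstruction, not Casasanto’s wording): the inference is only a valid modus tollens if one also assumes that language could shape thought only by being the medium of thought:

$(W \to O),\ \neg O \ \vdash\ \neg W$ (valid modus tollens)

$\neg O \ \vdash\ \neg W$ (invalid without the hidden premise $W \to O$, which Casasanto rejects)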

### Development

Another argument is that since human babies epigenetically and ontogenetically develop language skills and linguistic knowledge at the same time as, and in concert with, other mental faculties, language and thought must shape each other.

This baby-context weak Whorfianism seems like it must be true to some degree, but then again perhaps most of the language functionality is built off of other capabilities. Don’t children have vision capabilities before speaking capabilities? Regardless, even if there are a lot of cross-dependencies during development, what does that say about adults?

If development is the only opportunity for language to influence thought, then we still have no support for adults who claim to have cognitive changes due to learning new languages (changes dependent on the new language itself).

### Metaphors

I’m not quite sure about this, but I am starting to suspect that people might be confusing linguistic metaphors with conceptual metaphors in the context of linguistic relativity.

For example, Boroditsky wrote in a 2001 paper [3]:

But how does language affect thought? Let us again consider the domain of time. How do spatiotemporal metaphors affect thinking about time? Spatial metaphors can provide relational structure to those aspects of time where the structure may not be obvious from world experience (Boroditsky, 2000). In the case of space and time, using spatial metaphors to describe time encourages structural alignment between the two domains and may cause relational structure to be imported from space to time

And it’s not just the people in support of “language shapes thought”—the opposition also might be misconstruing metaphors. In his essay “The pernicious persistence of the ‘language shapes thought’ theory”, John McWhorter says that Whorfianism is “insulting” to half of the world [4]:

If language creates thought, the Chinese aren’t exactly quick on the uptake—nor are speakers of countless languages in Southeast Asia and Africa. In Japanese, to say I like Bob you just say, roughly, “Bob likeability,” with no I or anything else.

On metaphors, though, he doesn’t make a distinction between linguistic and conceptual metaphors when mentioning George Lakoff. But perhaps that is really a Lakoff problem, what with all his words and phrases that politicians use to activate different metaphors in voters’ brains. Isn’t that activation just that: not a change in thought by the language itself, but a conceptual metaphor issue? The same conceptual metaphor could be activated with any language.

### Multilingual

brishti

When people say things like this [5]:

Perhaps the earliest clue that words themselves can at least subtly alter experience comes from learning a second language and appreciating that the same object can be represented by a completely different string of syllables that actually brings out a different quality of it. The word ‘rain’ for example is ‘brishti’ in my mother tongue Bengali – and to my ear all the drama of a tropical storm is present within the word ‘brishti’ and not so much of it in the somewhat curt ‘rain’.

I think that they are conflating anything learned during the process of learning a new language with the claim that the new language itself affected their thought. The quote I chose is particularly ridiculous, as the author claims that a more onomatopoetic word changes how one thinks. On the other hand, perhaps aesthetics is some backdoor route to influencing, at least temporarily, one’s thoughts. But as I mentioned before, isn’t language just instrumental—not principal—in such things?

References
[1] P. Schlenker, Introduction to Language – Lecture Notes 2B: Language and Thought. UCLA, Winter 2006.
[2] D. Casasanto, Who’s Afraid of the Big Bad Whorf? Crosslinguistic Differences in Temporal Language and Thought. Language Learning, 58:Suppl. 1, December 2008, pp. 63–79.
[3] L. Boroditsky, Does Language Shape Thought?: Mandarin and English Speakers’ Conceptions of Time. Cognitive Psychology 43, 1–22 (2001).
[4] J. McWhorter, The pernicious persistence of the “language shapes thought” theory. New Republic, Feb 4, 2013.
[5] S. Gupta, The Relationship Between Language and Thought. Lecture delivered in Melbourne in 2005.


## Symbol Grounding and Symbol Tethering

Posted in artificial intelligence on April 3rd, 2013 by Samuel Kenyon

Philosopher Aaron Sloman claims that symbol grounding is impossible. I say it is possible, indeed necessary, for strong AI. Yet my own approach may be compatible with Sloman’s.

Sloman equates “symbol grounding” with concept empiricism, thus rendering it impossible. However, I don’t see the need to equate all symbol grounding to concept empiricism. And what Sloman calls “symbol tethering” may be what I call “symbol grounding,” or at least a type of symbol grounding.

Firstly, as Sloman says about concept empiricism [1]:

Kant refuted this in 1781, roughly by arguing that experience without prior concepts (e.g. of space, time, ordering, causation) is impossible.

Well that’s fine. My interpretation of symbol grounding never involved the baggage of bootstrapping everything from experience. Innate concepts, which are triggered by machinations in phylogenetic space, can contribute to grounding.

Sloman also says that concept empiricism was [2]:

finally buried by 20th century philosophers of science considering the role of theoretical terms in science (e.g. “electron”, “gene”, “valence”, etc.) that are primarily defined by their roles in explanatory theories.

Here is the only inkling of why this would take down symbol grounding: abstract concepts might actually be defined in terms of each other. As Sloman explains here [3]:

Because a concept can be (partially) defined implicitly by its role in a powerful theory, and therefore some symbols expressing such concepts get much of their meaning from their structural relations with other symbols in the theory (including relations of derivability between formulae including those symbols) it follows that not all meaning has to come from experience of instances, as implied by the theory of concept empiricism

### Theory concept tethering?

On the other hand, maybe theory concepts are grounded, but in a very tenuous way. Here is a metaphor, albeit not a great one: imagine a family of hot air balloons with links between them, and this group is floating free. However, they aren’t quite free, because a single rope ties one of them, and indirectly all of them, to the ground. Sloman seems to be saying something like that, with the tether being how well a theory concept models something, hence the term “symbol tethering”. Whatever the case, I don’t see why all symbols have to be like theory concepts.

If the goal is to understand how human minds create and use knowledge, then one is led down the road of grounding. Otherwise you’re playing Wacky Mad Libs or tainting an experiment with an observer’s human knowledge. Imagine if you could pause a human (or some other animal) and have access to the layer or point-of-view of internal mental symbols. You might then ask, what is the genealogy of a particular symbol—what symbols are its ancestors? The path to embodiment-derived symbols or innate symbols may be long and treacherous, yet there it is. And if the path stops short, then you have chosen a symbol which is somehow usable in a biological mind, yet is completely encapsulated in a self-referential subsystem.

Sloman has hypothesized that theory concept networks don’t need to be grounded in any normal sense of the word. But that doesn’t mean we throw the baby out with the bathwater. As far as I can tell, we should add the theory tethering mechanism in as a method of grounding symbols. Or perhaps it is simply one of the other ways in which information structures can be handled in a mind. I think it is plausible to have ungrounded symbols generated by a mind which also has grounded symbols. The inherent structure of an ungrounded self-referential database could be useful in certain contexts.

But ungrounded symbols are easy. That’s the default for all existing computing systems. And that’s what a dictionary is. The nature of those dictionary-like systems is at most a subset of the nature of human-like knowledge systems. We end up with humans injecting the meaning into the computer (or dictionary or whatever). The tricky problem is making systems that are grounded in the same way humans or other animals are. Those systems could have compatible notions of common sense and general (to humans) understanding. They would, in turn, be capable of doing the same kind of knowledge injection or anchoring that humans do with ungrounded systems.

References

[1] A. Sloman, “Symbol Grounding is Not a Serious Problem. Theory Tethering Is,” IEEE AMD Newsletter, April 2010.
[2] A. Sloman, “Some Requirements for Human-like Robots: Why The Recent Over-emphasis on Embodiment has Held up Progress,” in B. Sendhoff et al., Eds., Creating Brain-Like Intelligence, pp. 248–277, Springer-Verlag, 2009.
[3] A. Sloman, “What’s information, for an organism or intelligent machine? How can a machine or organism mean?,” 2011. http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-inf-chap.html


## Sherlock Holmes, Master of Code

Posted in artificial intelligence, programming on March 28th, 2013 by Samuel Kenyon

What if I told you that fictional mysteries contain practical real-world methodologies? I have pointed out the similarities between detectives solving mysteries and software debugging before. My day job of writing code often involves fixing bugs or solving bizarre cases of bad behavior in complex systems.

In a new book called Mastermind: How to Think Like Sherlock Holmes, Maria Konnikova also compares the mental approaches of a detective to non-detective thinking.

But Konnikova has leaped far beyond my own detective model by creating a metaphorical framework for mindfulness, motivation, and deduction, all tied to the fictional world of Sherlock Holmes. This framework is a convenient place to investigate cognitive biases as well. And of course her book discusses problem solving in general, using the crime mysteries of Holmes for examples.

Mastermind book cover

The core components of the metaphor are:

• The Holmes system.
• The Watson system.
• The brain attic.

The systems are modes of human thinking, and you can probably imagine circumstances in which you operated using a Watson system and others in which you used a Holmes system to some degree. Most people are probably more like Watson, who is intelligent but mentally lazy.

Watson

The Holmes system is the aspirational, hyper-aware, self-checking system that’s not afraid to take the road less traveled in order to solve the problem.

Holmes

The brain attic metaphor comes in as a way to organize knowledge purposely instead of haphazardly. The Holmes system actively chooses what to store in its attic, whereas the Watson system just lets memory happen without much management.

### Bias

Here’s an excerpt about one of the many bias-related issues discussed, where the “stick” is the character James Mortimer’s walking stick, which has been left behind:

Hardly has Watson started to describe the stick and already his personal biases are flooding his perception, his own experience and history and views framing his thoughts without his realizing it. The stick is no longer just a stick. It is the stick of the old-fashioned family practitioner, with all the characteristics that follow from that connection.

When I programmed military robots and human-robot interfaces for iRobot, I often received feedback and problem reports as directly as possible from the field and/or from testers. I encouraged this because it was great from a user experience point of view, but I had to develop filters and Sherlockian methods in order to maintain sanity and actually solve the issues.

Just trying to comprehend what was wrong at all was sometimes a big hurdle. A tester or field service engineer might report a bug in the manner of his or her personal theory, which—like Watson’s—was heavily biased, and then I had to extract bits of evidence in order to come up with my own theories, which might or might not be the same. Or in some cases the people closest to the field reported the issue and data objectively, but by the time it went through various Watsons, irrational assumptions of the cause had been added. Before you can figure out the problem, you have to figure out the real problem description and what data you actually have.

As Konnikova writes:

Holmes, on the other hand, realizes that there is always a step that comes before you begin to work your mind to its full potential. Unlike Watson, he doesn’t begin to observe without quite being aware of it, but rather takes hold of the process from the very beginning—and starting well before the stick itself.

And the walking stick example isn’t just about the removal of bias. It’s also about increased mindfulness.

### Emotions

Emotional bias comes in because emotions can determine what observations you are even able to access consciously, let alone remember in an organized way. For instance:

To observe the process in action, let’s revisit that initial encounter in The Sign of Four, when Mary Morstan, the mysterious lady caller, first makes her appearance. Do the two men see Mary in the same light? Not at all. The first thing Watson notices is the lady’s appearance. She is, he remarks, a rather attractive woman. Irrelevant, counters Holmes. “It is of the first importance not to allow your judgment to be biased by personal qualities,” he explains. “A client is to me a mere unit, a factor in a problem. The emotional qualities are antagonistic to clear reasoning…”

Emotions are a very important part of human minds; they evolved because of their benefits. I often talk about emotions and artificial intelligence. However, in some very specific contexts, the dichotomy of emotion vs. reason does hold. Konnikova says:

It’s not that you won’t experience emotion. Nor are you likely to be able to suspend the impressions that form almost automatically in your mind. But you don’t have to let those impressions get in the way of objective reasoning.

Of course, even in the context of reasoning about the solution to a problem, one’s mind is still an emotional system, and that system is providing some benefits such as, perhaps, motivation to solve the problem and keep plugging away at it.

### Feedback

Maria Konnikova at Harvard Book Store

Today at the Harvard Book Store, Maria Konnikova gave a presentation about the book Mastermind. I attended, and I asked whether certain professions lent themselves better to the Sherlockian methods, given the parallels I had drawn to software debugging in my own experience.

Konnikova’s reply was that any profession with good feedback would be good for the Holmes system approach. She specifically mentioned doctors and bartenders.

Feedback does seem to be important for many systematic things—so she’s probably right. I suppose what makes feedback particularly important to the Sherlockian mindfulness approach is the observation of one’s own mind. And there is also a feedback aspect when one is solving mysteries—the verification or disproving of hypotheses.

### Conclusion

Anyway, I won’t try to summarize the whole book. I highly enjoyed it and found many parallels to my personal approach to mental life and especially to the mystery solving of software systems, including psychological flow and creativity.


## Biomimetic Emotional Learning Agents

Posted in artificial intelligence on March 26th, 2013 by Samuel Kenyon

Since I didn’t blog back in 2004, you get to suffer—I mean, enjoy—another breathtaking misadventure down memory lane.

In 2004 I started designing and coding (in C++) a cognitive architecture called Biomimetic Emotional Learning Agents (BELA).

### The Grand Plan

This antique diagram reveals my old plans:

BELA Meta-Dev Diagram

The diagram indicates a general flow of time from left to right, but the arrows primarily show development paths. Useful contributions to the primary tree could come from any vertical position.

The top line is so-called “blue sky” research, such as figuring out ways to make a human-level AI and testing combinations of existing AI architectures, for instance the merger of commonsense reasoning with emotional learning. This of course could have many useful results, which is why it can contribute to the main tree and hence to the lower lines, such as applied R&D and practical applications, both of which could be targeted for applied-research agency contracts and possibly, eventually, marketable/industrial software and robotics. Since this is all happening in parallel (ideally), the research towards the top may find that the applied research has come up with some useful versions of, for instance, the biomimetic emotional learning agents, and exploit those.

### The Framework

Although focused on low-level emotional architectures as involved with homeostasis, knowledge acquisition, reactions, and learning, the framework was designed at all points to be extensible.

In my documentation I wrote:

Another issue to keep in mind is that although now the system is simple enough that the programmed class family is almost isomorphic to the abstract system design itself, that won’t always be the case, especially since this is a framework with many possible systems and the more complex systems later on will probably be reusing the same components in different ways.

What this means is that the design, or model (or system of models)…whatever…may look very close to the C++ class hierarchy. That may give someone a warm and fuzzy feeling of object-oriented modeling-to-code done right. But as my former self warned, more complex systems built with this framework may break that nice model-to-code mapping.

### The List of Concepts

This is the list of mental concepts that I wanted my system to implement or experiment with:

• reactions
• instincts
• “gut feelings”
• homeostasis (some have suggested “homeodynamics”)
• emotions
• “background” emotions
• “primary” emotions
• “secondary” emotions
• pain
• fear
• phobias
• pleasure
• excitement/appetitive/dopamine vs. tranquility/consummatory/opioid
• overlap with agency-affiliation cycle system?
• rewards
• fixing the negative side-effects of biological emotional learning/behavior
• learning in real-time
• associative learning
• emotional learning
• learning new methods of learning
• change/growth in the learning mechanism itself
• concepts without experience? null issue?
• conditioning, predispositions
• phylogenetic vs. ontogenetic learning/training
• self-preservation configuration from nurture vs. nature
• “growing pains” training
• emotional-associative knowledge bases, emotional maps
• ontology KBs
• operating domains
• fuzzy logic—many comparisons will be of ranges not single values
• knowledge through association
• configuration parameters for every aspect of the framework—no hard-coded constants or formulas, they should all be loaded from data config files (this will also make it easier for brute force enumerations & evolutionary dev)
• parallel processes/modules/layers
• ENS-style [semi]-autonomous agencies
• arbitration
• goal definition/descriptions
• soft goals—emergent behavior from configuration
• hard goals—minimize difference between current state and a stored described state
• feelings of emotions
• moods
• modes of thinking / “frames of mind” / alternate problem-solving approaches
• knowledge-lines and switching to previous configurations
• self-history and forgetfulness
• fall-back configurations for input overload or lack of input
• tricking with form instead of content
• priming
• plasticity / adaptability of agent when it is damaged
• probability of survival of a particular class of agent in a particular domain
• immune system
• stress responses, “biological modes”–arbitration rules for action subsumption between modes?
• autonomic nervous system
• Sympathetic Nervous System–”fight or flight” capabilities: instant reconfiguration of agent viscera + stock survival behaviors
• Parasympathetic Nervous System–”rest and digest”
• exhaustion, sleep—even robots have to recharge their batteries and possibly correlate/cull/compress memories
• crisis situations that require suppression of SNS stress responses?
• noise, false memories, misinformation, probability of wrong inferences (+ theory-of-mind == capability to lie ?)
• intentionality, guessing intentions of objects/agents
• self-model, self-awareness, “proto-self”
• critic/skeptic analyzers, checks-and-balances
• attention, salient objects/agents
• Exploiting the environment/situation (with constraints, e.g. nondestructive)
• Utilizing the environment-technology situation to extend computational/problem-solving abilities

I described this enumeration as:

not a complete list, nor am I trying to test all of those concepts at once in the first phases of this project. Some may be removed as irrelevant, and new ones will be added. Of course, there is no shake-n-bake implementation—one doesn’t just make a module or sub-agent for each one of those concepts, throw them in a bag, and get something that works as expected or at all.

Some of these concepts are the same thing or overlap, and most are connected somehow; some may turn out to be null concepts or unneeded in an artificial framework.

The intention of the experimental research stemming from this engineering-oriented project was to discover which concepts are needed in a minimal agent framework that is both useful itself and allows the other concepts that are needed (e.g., for a more intelligent—by the metrics of the application domain—robot) to be incorporated without a total redesign from scratch.

The first agents being tested would have been based only on some of the concepts, but with the design goal of supporting most of the rest (for research and/or practical robots) through configuration, versions, extensions, etc. of an evolving (in human design space—certain parts may actually be generated with evolutionary algorithms though) agent framework.

### Phylo vs. Onto

Phylogenetic (the evolutionary space) versus ontogenetic (the individual lifetime space) development/learning, together with a “growing pains” training scenario, was a major part of the framework. “Growing pains” is a scheme to keep an artificial organism in increasingly more dangerous/complex sandboxes until it is at adult level. These concepts and how artificial organisms can exploit them are still of significant importance for my thinking about cognitive architectures today.

### The End?

I didn’t get very far with BELA, and pretty much abandoned the codebase in February 2005. However, I didn’t abandon cognitive architectures at all. And many of the ideas in BELA are still good in my opinion. The overreaching nature of the project was noted by a reviewer of my rejected extended abstract who told me my ideas were θ-baked, where θ <= 0.5.

The project failure, as I have indicated can happen quite often, was just that—the project breaking down, not the concepts. A large part of this project’s brief history was my declining motivation and focus. I didn’t abandon AI or cognitive architectures, however. On the contrary, in 2005 I took two AI-related grad classes at MIT (Society of Mind/Emotion Machine and Commonsense Reasoning for Interactive Applications), started working at a mobile robot company (iRobot), and was on an autonomous underwater vehicle competition team (MIT ORCA).

If the me of today went back to circa 2004 to work on BELA, I would change the name. I still think emotions and learning are important, and necessary if one is in fact mimicking biology. But they are only two aspects of many.


## Comparison: ChainLocker vs. Hierarchical Mutexes

Posted in programming on March 8th, 2013 by Samuel Kenyon

In “Concurrent Programming with Chain Locking,” Gigi Sayfan presents a C# class demonstrating chain locked operations on a hierarchical data structure. This reminded me of lock hierarchies described by Anthony Williams in the book C++ Concurrency in Action.

To take a step back for a moment, the overall goal is to create multithreaded code which doesn’t cause deadlocks, race conditions, etc. Although it may seem like a confusion of metaphors, lock hierarchies are a type of hand-over-hand locking, which is basically defined lock ordering. I think it would be fair to call a “chain” a particular path of locking through a hierarchy. Defining lock ordering is what you do if you can’t do the better thing, which is to acquire two or more locks in a single operation, for instance with the C++11 std::lock() function.
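For example, here is a minimal sketch of that single-operation approach (the mutexes and the transfer() function are hypothetical, just for illustration):

```cpp
#include <mutex>

// Hypothetical pair of shared resources; the names are only for illustration.
std::mutex account_a_mutex;
std::mutex account_b_mutex;

void transfer() {
    // Acquire both mutexes as one deadlock-free operation (C++11), then
    // adopt them into RAII guards so both are released when we leave scope.
    std::lock(account_a_mutex, account_b_mutex);
    std::lock_guard<std::mutex> guard_a(account_a_mutex, std::adopt_lock);
    std::lock_guard<std::mutex> guard_b(account_b_mutex, std::adopt_lock);
    // ... operate on both shared resources here ...
}
```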

As Herb Sutter pointed out in his article “Use Lock Hierarchies to Avoid Deadlock,” you may already have layers in your application (or at least in its data, in a certain context). This can be taken advantage of when making the software concurrency-friendly. The general idea is that a layer does not access code in layers above it. For mutexes, this means that a layer cannot lock a mutex if it already holds a mutex from a lower layer.
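Below is a minimal sketch of how that rule can be enforced at runtime. It is in the spirit of the hierarchical_mutex Williams describes, but it is my own simplified version with hypothetical layer values, not code from the book:

```cpp
#include <climits>
#include <mutex>
#include <stdexcept>

// A layer-aware mutex: each mutex is assigned a layer value, and a thread
// may only lock a mutex whose layer is strictly lower than the layer of the
// mutex it most recently locked. Locking "upward" throws at runtime.
class layered_mutex {
    std::mutex internal;
    const unsigned long layer;
    unsigned long previous_layer = 0;
    // Layer of the mutex this thread most recently locked (ULONG_MAX = none).
    static thread_local unsigned long this_thread_layer;

    void check_for_violation() const {
        if (this_thread_layer <= layer)
            throw std::logic_error("lock ordering violation: locking upward");
    }

public:
    explicit layered_mutex(unsigned long layer_value) : layer(layer_value) {}

    // Satisfies BasicLockable, so it works with std::lock_guard and std::lock.
    void lock() {
        check_for_violation();
        internal.lock();
        previous_layer = this_thread_layer;
        this_thread_layer = layer;
    }

    void unlock() {
        // Assumes locks are released in reverse order of acquisition.
        this_thread_layer = previous_layer;
        internal.unlock();
    }

    bool try_lock() {
        check_for_violation();
        if (!internal.try_lock()) return false;
        previous_layer = this_thread_layer;
        this_thread_layer = layer;
        return true;
    }
};

thread_local unsigned long layered_mutex::this_thread_layer = ULONG_MAX;

// Hypothetical layers: code holding ui_layer may then lock storage_layer,
// but trying to lock ui_layer while holding storage_layer throws.
layered_mutex ui_layer(10000);
layered_mutex storage_layer(5000);
```

In this picture, a “chain” corresponds to one permitted path of lock acquisitions down through the layers.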


## On the Concept of Shaping Thought with Language

Posted in artificial intelligence on February 24th, 2013 by Samuel Kenyon

Psychologist Lera Boroditsky says she’s “interested in how the languages we speak shape the way we think” [1].

This statement seems so innocent, and yet it implies that language definitely does shape thought. It also leads us to use a metaphor of “shaping.”

### Causes and Dependencies

Does language cause thought? Or at least in part? Or is it the other direction—thought causes language?

Is language even capable of being a cause of thought, even if it isn’t in practice?

Or in an architectural sense, is one dependent on the other? Is thought built on top of language?

Or is language built on top of thought?

Does language influence thought at all, even if one is not dependent on the other?

When people talk about language causing thought or vice versa, are they talking about language as a mental module (or distributed functionality) or the interactive act of using language?


## What is a Room?

Posted in artificial intelligence, interfaces on February 5th, 2013 by Samuel Kenyon

We all share the concept of rooms. I suspect it’s common and abstract enough to span cultures and millennia of history.

a room

The aspects of things that are most important for us are hidden because of their simplicity and familiarity. (One is unable to notice something because it is always before one’s eyes.)
—Wittgenstein

Rooms are so common that at first it seems silly to even talk about rooms as an abstract concept. Yet, the simple obvious things are often important. Simple human things are often also quite difficult for computers and artificial intelligence.

Is there such a thing as a room? It seems to be a category. Categories like this are probably the results of our minds’ development and learning.


## Rambling: Should All Software Be Real-Time?

Posted in programming on February 2nd, 2013 by Samuel Kenyon

As a design guideline, most disembodied “smart” computer systems should be real-time, at least of the soft variety. But they aren’t. We’ve gotten used to the cheap non-real-time properties of mainstream software.

Evolution may have resulted in that design guideline for minds. But our computer programs and networks aren’t minds—at least not yet. However, successful narrow AI techniques continually get added to the toolbox of software engineering. Although most software systems are not, under any partitioning, considered to be “minds,” they are doing narrow tasks. Some tasks are human level, such as identifying faces in a photo on Facebook. Some are not human level at all, such as a MapReduce system searching exabytes of data.

All of these should respond quickly enough to all inputs so as not to slow down or foul up the system. The humans involved will not wait for very long on their end. Obviously a lot of computer programs are not real-time (not even soft real-time).

It’s more about usability—slow and/or inconsistently responding programs are less useful and more annoying. They only survive in the software/Internet ecosystem because the nature of that ecosystem is different from the nature of biological evolution.
