All Minds Are Real-Time Control Systems

Posted in artificial intelligence on January 25th, 2013 by Samuel Kenyon

I conjecture that all minds are real-time control systems.

\forall x(Mind(x) \to RealTimeControlSystem(x))

In this post I will explain what that means and why it seems to be true.

Creatures and Real-Time Systems

Consider, if you will, artificial creatures that exist in either the real world or some model thereof. These [ro]bots do not know the environment beforehand, at least not all of it. Sure, they may know some universal traits of the environment and learn others. But there will always be changes.

And those changes can happen quickly. Meanwhile the creature itself can move very quickly. Even if it doesn’t want to move, it may be moved. Or it may fall down. All of these interactions are happening in a range of speeds dictated by physics.

An artificial creature that lives just in a software simulation could have a lot of freedom as far as time spent thinking before reacting. This freedom is not necessarily available in reality.

In software land, it’s quite easy to make a program that responds to events much more slowly than an animal does. And any system, be it software or a mix of hardware and software, can get fouled up temporarily and miss a deadline. In fact, that’s the normal state of affairs for a lot of the computing devices you interact with on a daily basis.

Does your desktop or laptop computer or tablet or phone respond to every one of your interactions instantly and consistently? Of course not. Cheap general-purpose consumer computing devices and operating systems (e.g. Linux, Windows, Darwin) are slower or faster depending on what else they are processing and what other networks/systems they are waiting on. They are at best soft real-time.

The consequence of missing a deadline on a soft real-time system is merely annoying the fuck out of the user. But for other applications, such as patient heart-rate monitoring, aircraft jet engine control, car airbag deployment, offshore drilling platform positioning, etc., we have hard real-time systems.

The consequence of missing a hard real-time deadline could be fatal to either the system or to various humans involved. Or as Payton and Bihari [1] put it, “A timing constraint is hard if small violations of the constraint result in significant drops in a computation’s value.”
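To make the distinction concrete, here is a minimal sketch of a fixed-rate control loop that at least detects when a cycle blows its deadline. Everything in it is hypothetical: the 10 ms numbers and the placeholder sensor/actuator functions are made up for illustration. On a general-purpose OS this is soft real-time at best, so all the loop can do is measure the overrun and decide how much the late result is still worth; in a hard real-time system that same overrun would be treated as a fault.

import time

# Hypothetical numbers: a 10 ms control cycle with a 10 ms deadline.
CONTROL_PERIOD_S = 0.01
DEADLINE_S = 0.01

def read_sensors():
    return 0.0              # placeholder for real sensor I/O

def compute_command(reading):
    return -0.5 * reading   # placeholder control law

def actuate(command):
    pass                    # placeholder for real actuator I/O

def control_loop(cycles=1000):
    next_tick = time.monotonic()
    for _ in range(cycles):
        start = time.monotonic()
        actuate(compute_command(read_sensors()))
        overrun = (time.monotonic() - start) - DEADLINE_S

        if overrun > 0:
            # Soft real-time: note the miss and carry on (the user is annoyed).
            # Hard real-time: this branch would be a fault, not a log line.
            print(f"deadline missed by {overrun * 1000:.2f} ms")

        next_tick += CONTROL_PERIOD_S
        time.sleep(max(0.0, next_tick - time.monotonic()))

if __name__ == "__main__":
    control_loop()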



Emotional Developmental Symbol Creation

Posted in artificial intelligence on December 2nd, 2012 by Samuel Kenyon

An artificial intelligence system that is similar to humans or other animals has to have some way to generate meaning. A potential mechanism of meaning is an emotional system with built-in symbol grounding.

When I talk about mental symbols in this article, I assume that ideas, concepts, knowledge, representations, etc. are all composed of mental symbols.

Before getting into emotions, let’s take a higher-level look at the developmental levels of children.
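As a toy illustration only (this is not the architecture the post proposes), here is one way to picture symbols with built-in grounding: each symbol is minted from a percept together with an innate affective appraisal, so its “meaning” is anchored to how the creature values it rather than to a programmer-supplied definition. All names and numbers below are invented.

from dataclasses import dataclass, field

@dataclass
class Symbol:
    # A toy mental symbol: a label, an innate affective value in [-1, 1],
    # and the raw percept features it was minted from.
    label: str
    valence: float
    percept: dict = field(default_factory=dict)

def mint_symbol(label, percept, innate_appraisal):
    # The symbol's "meaning" is anchored to an emotional appraisal of the
    # percept, not to a definition typed in by a programmer.
    return Symbol(label=label, valence=innate_appraisal(percept), percept=percept)

# A built-in appraisal function stands in for evolved affect.
sweetness = lambda p: min(1.0, p.get("sugar", 0.0))

apple = mint_symbol("apple", {"sugar": 0.7, "color": "red"}, sweetness)
print(apple.label, apple.valence)   # -> apple 0.7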



Mechanisms of Meaning

Posted in artificial intelligence on October 15th, 2012 by Samuel Kenyon

How do organisms generate meaning during their development? What designs of information structures and processes best explain how animals understand concepts?

In “A Deepness in the Mind: The Symbol Grounding Problem”, I showed a three-layer diagram with semantic connections on the top. I’d like to spend some time discussing the bottom of the top layer.

Basic Meaning Mechanisms

At the moment, there seem to be a few worthwhile avenues of investigation:

  • Emotions
  • Affordances
  • Metaphors and Blends

Each of these points of view involves certain theories and architectural concepts. Soon I will have more blog posts describing these concepts and how we might implement and synthesize them. Marvin Minsky’s books Society of Mind and The Emotion Machine have many ideas about mental agencies and structures that may be useful in this context.
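As a placeholder for those future posts, here is a deliberately crude sketch of just one of the three mechanisms, affordances: a function mapping perceived object features to the actions they afford. The feature names and actions are invented for illustration and not drawn from any particular theory above.

def affordances(features):
    # Map crude perceptual features of an object to the actions it affords.
    actions = []
    if features.get("graspable"):
        actions.append("grasp")
    if features.get("hollow"):
        actions.append("pour-into")
    if features.get("flat_surface"):
        actions.append("place-things-on")
    return actions

print(affordances({"graspable": True, "hollow": True}))
# -> ['grasp', 'pour-into']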


A Deepness in the Mind: The Symbol Grounding Problem

Posted in artificial intelligence, philosophy on September 29th, 2012 by Samuel Kenyon

The Symbol Grounding Problem reared its ugly head in my previous post. Some commenters suggested that certain systems are free of the symbol grounding problem because those systems learn concepts that were not chosen beforehand by the programmers.

However, the fact that a software program learns concepts doesn’t mean it is grounded. It might be, but it might not be.

Here’s a thought experiment example of why that doesn’t float my boat: let’s say we have a computer program with some kind of semantic network or database (or whatever) which was generated by learning during runtime. Now let’s say we have the exact same system, except that a human hard-coded the semantic network. As far as grounding goes, does it really matter that one of them auto-generated the network and the other did not? In other words, runtime generation doesn’t guarantee grounding.
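The thought experiment is easy to make literal. In the hypothetical sketch below, a tiny “semantic network” is built once by a learning loop over observations and once by hand; the resulting structures are identical, so nothing that merely queries them could tell which one “learned.” That sameness is the point: runtime learning, by itself, adds nothing for grounding to hang on.

# Build a tiny semantic network (subject -> relation -> objects) two ways.

def learn_network(observations):
    net = {}
    for subj, rel, obj in observations:
        net.setdefault(subj, {}).setdefault(rel, set()).add(obj)
    return net

learned = learn_network([("bird", "can", "fly"), ("bird", "has", "wings")])

hard_coded = {"bird": {"can": {"fly"}, "has": {"wings"}}}

assert learned == hard_coded    # indistinguishable to anything that queries them
print(learned["bird"]["can"])   # -> {'fly'}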

Experience and Biology

Now let’s say that symbol-grounded systems require learning from experience. A first-order logic representation of that would be:

\forall x(SymbolGrounded(x) \to LearnsFromExperience(x))

Note that this is not a biconditional relationship. But is it even true? Why might learning from experience matter?

[Image: herring gull and chicks]

Well, our example systems are biological, and certainly animals learn while they are alive. But that is merely the ontogeny. What about the stuff that animals already know when they are born? And how do they know how to learn the right things? That’s why evolutionary knowledge, via phylogeny, is also important. It’s not stored in the same way, though. It unfolds in complex ways as the tiny zygote becomes a multicellular animal.

