Why My Computer Didn’t Do (All of) My Homework

In my innocent early teenage years of computer programming I started dabbling in Artificial Intelligence (AI).

I wanted my computer to write stuff in English. I’m not sure why, but I think the main triggers were:

  1. I found a BASIC version of ELIZA, “THE PSYCHOANALYTIC CONVERSATIONALIST”. ELIZA is a chatbot originally created in the 1960s (possibly the first chatbot ever).
  2. I happened to flip through a book titled Language Development [1], which described various grammars of natural language. In the new light of programming, the sight of symbolic grammar constituents in tree diagrams immediately made me think: hey, I could program that!

Your Mom

The first thing I did was make a new version of ELIZA called Ernie.

Ernie begins…

Ernie would swear, insult your mother, and generally piss you off.

Ernie in conversation…

More Ernie…

In the interest of education, I took the liberty of installing it on some of my high school’s computers. Later I augmented it with a program I wrote called Expletive Generator.
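
For anyone who never met the original: ELIZA-style bots are little more than keyword matching plus canned comebacks. Here is a minimal sketch of the general technique in Python (Ernie itself was in BASIC, and these rules are invented stand-ins for his much ruder ones):

    import random
    import re

    # ELIZA-style rules: a regex pattern to look for, plus canned comebacks.
    # These rules are invented stand-ins; Ernie's real ones are long gone.
    RULES = [
        (r"\b(?:mother|mom)\b", ["Oh, so now we're talking about YOUR mother?"]),
        (r"\bi am (.+)", ["Why do you think you are {0}?",
                          "Big deal, you're {0}."]),
        (r"\bi feel (.+)", ["You always feel {0}. Get over it."]),
    ]
    DEFAULTS = ["Whatever.", "That's the dumbest thing I've heard all day."]

    def reply(text):
        # Try each rule in order; fall back to a generic insult.
        for pattern, responses in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                return random.choice(responses).format(*match.groups())
        return random.choice(DEFAULTS)

    while True:
        line = input("> ")
        if line.lower() in ("bye", "quit"):
            break
        print(reply(line))

That is essentially all there is to it: no understanding, just pattern, substitution, and attitude.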

I also tried to make programs to auto-generate prose, perhaps in the hope that this would somehow do my homework for me. Homework in middle school and high school was generally annoying to me, as it was usually a rude distraction from my main interests.

I also experimented with computer generated poetry. You occasionally got mildly provocative drivel as a result of the uncompromising creativity of random combinations. Here is an example:

Flow procreate
choose Devil late
sun red peaches mate
waiting would late
race who hate
right song have cheat
saw trust form great
behind damn high.

I combined a few of them, printed the resulting poem out, and submitted it in a high school creative writing class. The teacher’s response:

I can’t say I “get” it but it sounds pretty cool.
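
Under the hood there was nothing fancy. Here is a minimal sketch in Python of the general approach: throw a few random words together and end each line on a rhyme. The word lists are invented for illustration, and the original program was in BASIC.

    import random

    # Tiny invented lexicon; the real word lists came from wherever
    # teenage inspiration struck.
    WORDS = ["flow", "sun", "red", "race", "trust", "song", "behind",
             "waiting", "choose", "saw", "right", "form", "damn"]
    RHYMES = ["late", "mate", "hate", "great", "fate", "procreate"]

    def poem_line(min_words=1, max_words=3):
        # A few random words, then a word from the rhyme list to end on.
        n = random.randint(min_words, max_words)
        return " ".join(random.sample(WORDS, n) + [random.choice(RHYMES)])

    def poem(lines=8):
        return "\n".join(poem_line() for _ in range(lines)) + "."

    print(poem())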

This is AI?

At first, I didn’t realize I was coding AI. My notion of “AI” was from science fiction.

image from Short Circuit (1986)

But I happened upon two AI textbooks at the town dump and brought them home. One was Winston [2], and the other was Charniak and McDermott [3]. And no, they did not convert me to LISP. In fact, to my teenage mind they were a bit dry; the most exciting part was the term “artificial intelligence.” The main influence those books had on me was to recognize that AI was a real field of endeavor and that natural language parsing and generation was part of it.

Where is the Meaning?

Now, why would it surprise me that natural language generation was part of the field of AI?

I think the reason is that, at first, language understanding and generation seem like they can be done easily in one’s mind, but when you start writing programs you realize it’s not that easy.

The other reason I might have been surprised is that the generation of text seems like a peripheral activity. It’s not even the real “intelligence”; it’s merely an interface to the world. Of course, nowadays I realize how important interfaces and the environment are to mental phenomena. And of course, there never is a “center” to the mind; eventually the homunculus must be slain.

But why would computer code that deals with natural language seem like an unwelcome cousin at the AI party? Well, the main reason is the lack of meaning.

One might say a dictionary has meanings, but it doesn’t really. It’s just a tool that allows an organism such as a human to understand one word in terms of others. Imagine if you kept following definitions forever, never reaching the final definition that made sense to you. That’s kind of how most computer programs are—they never reach the final destination of meaning.

Symbols seduce you into thinking anything is possible, and just around the corner. But symbols are a double-edged sword. Or perhaps a better metaphor is that of an inbred family. Whatever metaphor you like, the bottom line is that amateur symbol juggling won’t magically create meaning.

Words, symbols, definitions, concepts…they lead to more symbols, but eventually they have to hit “ground”. We humans are “grounded” through our biological evolution as animals with bodies interacting in an environment. But most, if not all, computer programs have no such human-like grounding.

The Bastard Children

My programs were strange bastard children of context-free grammars and Mad Libs. After about two years of writing these programs during high school, I stopped working on them. Something was missing, and that something was meaning.
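
To give a flavor of what that cross looks like in code, here is a minimal sketch in Python: nonterminals expand recursively, as in a context-free grammar, while the terminal slots get filled Mad Libs-style from word lists. The grammar and lexicon here are invented for illustration; my originals were in BASIC.

    import random

    # A toy context-free grammar: keys are nonterminals, values are lists
    # of possible expansions. Anything not in the table is a terminal word
    # that gets dropped in Mad Libs-style.
    GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
        "VP":  [["V", "NP"], ["V"]],
        "Det": [["the"], ["a"]],
        "Adj": [["red"], ["provocative"], ["uncompromising"]],
        "N":   [["peach"], ["devil"], ["homework"], ["teacher"]],
        "V":   [["eats"], ["insults"], ["generates"]],
    }

    def expand(symbol):
        if symbol not in GRAMMAR:  # terminal: emit the word itself
            return symbol
        production = random.choice(GRAMMAR[symbol])
        return " ".join(expand(s) for s in production)

    for _ in range(3):
        print(expand("S").capitalize() + ".")

Run it a few times and you get perfectly grammatical nonsense, which is exactly the point: syntax without a shred of meaning.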

Note that creativity was not missing. Pseudo-random combinations proved to be quite creative.

But there was no architecture for understanding anything. Also missing were large stores of knowledge linking concepts together. There was no memory of experience like a human might have. To make a computer have and use human-like knowledge requires an architecture that connects code to the world that humans occupy.

Segue

And so, as a segue to two different future blog posts, the main problems were:

  1. A lack of symbol grounding.
  2. A lack of common sense.

References

[1] P.S. Dale, Language Development: Structure and Function, Dryden Press, 1972.
[2] P.H. Winston, Artificial Intelligence, 2nd ed., Addison-Wesley, 1984.
[3] E. Charniak and D. McDermott, Introduction to Artificial Intelligence, Addison-Wesley, 1985.