A World of Affect

Posted in artificial intelligence, interaction design on August 2nd, 2013 by Samuel Kenyon

Back in the fall of 2005 I took a class at the MIT Media Lab called Commonsense Reasoning for Interaction Applications taught by Henry Lieberman and TA’d by Hugo Liu.

Screenshot from AffectWorld

For the first programming assignment I made a project called AffectWorld, which allows the user to explore in 3D space the affective (emotional) appraisal of any document.

The program uses an affective normative ratings word list expanded with the Open Mind Common Sense (OMCS) knowledgebase. This norms list is used both for appraising input text and for generating an affect-rated image database. The affective norms data came from a private dataset created by Margaret M. Bradley and Peter J. Lang at the NIMH Center for the Study of Emotion and Attention, consisting of English words rated in terms of pleasure, arousal and dominance (PAD).
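
The appraisal step itself is simple to sketch. The following is a rough, after-the-fact illustration in Python, not the original AffectWorld code: the tiny word list and the function names are hypothetical stand-ins for the full Bradley and Lang norms and their OMCS expansion. It just averages the pleasure, arousal, and dominance ratings of whatever rated words appear in a text.

```python
from dataclasses import dataclass


@dataclass
class PAD:
    """A point in pleasure-arousal-dominance space (each dimension rated 1-9 in the norms)."""
    pleasure: float
    arousal: float
    dominance: float


# Hypothetical excerpt of an affective norms list: word -> PAD ratings.
NORMS = {
    "storm":   PAD(3.9, 6.0, 4.0),
    "kitten":  PAD(7.6, 4.6, 5.0),
    "funeral": PAD(1.9, 4.9, 3.0),
    "victory": PAD(8.3, 6.6, 7.0),
}


def appraise(text: str) -> PAD:
    """Average the PAD ratings of every rated word found in the text."""
    hits = [NORMS[word] for word in text.lower().split() if word in NORMS]
    if not hits:
        return PAD(5.0, 5.0, 5.0)  # neutral midpoint when no rated words match
    n = len(hits)
    return PAD(
        sum(h.pleasure for h in hits) / n,
        sum(h.arousal for h in hits) / n,
        sum(h.dominance for h in hits) / n,
    )


print(appraise("the kitten slept through the storm"))
```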

To generate the interactive visualization, AffectWorld analyzes a text, finds images that are linked affectively, and applies them to virtual 3D objects, creating a scene filled with emotional metaphors.
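
The image-matching step can be sketched in the same spirit: each image in the database carries a PAD vector derived from its descriptive words, and the scene generator textures its objects with the images whose ratings lie closest to the text’s appraisal. Again, this is a rough illustration reusing the hypothetical PAD and appraise helpers from the sketch above, not the original implementation.

```python
import math

# Hypothetical affect-rated image database: filename -> PAD vector
# derived from the descriptive words assigned to each image.
IMAGE_DB = {
    "thunderclouds.jpg": PAD(3.5, 6.2, 4.1),
    "smiling_face.jpg":  PAD(8.0, 5.5, 6.5),
    "empty_room.jpg":    PAD(4.2, 3.0, 4.5),
}


def pad_distance(a: PAD, b: PAD) -> float:
    """Euclidean distance in pleasure-arousal-dominance space."""
    return math.sqrt(
        (a.pleasure - b.pleasure) ** 2
        + (a.arousal - b.arousal) ** 2
        + (a.dominance - b.dominance) ** 2
    )


def pick_textures(text: str, count: int = 2) -> list[str]:
    """Return the images whose affect ratings sit closest to the text's appraisal."""
    target = appraise(text)
    ranked = sorted(IMAGE_DB, key=lambda name: pad_distance(IMAGE_DB[name], target))
    return ranked[:count]


# Each selected image would then be applied as a texture to an object in the 3D scene.
print(pick_textures("the kitten slept through the storm"))
```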

The image files were scraped from a few places, including Eric Conveys an Emotion, in which some guy photographed himself making every emotional expression he could think of and then started doing requests. I used OGRE for the 3D graphics engine.

Screenshot from AffectWorld

So what was the point? If I remember correctly, somebody asked that in class, and Hugo interjected that it was art. Basically, to an outsider the emotional programming looks like a pseudo-random image selector applied to cubes in a 3D world…well, that’s not completely true. With a lot more pictures to choose from (and accurate descriptive words assigned to each picture), I think a program like this could give an emotional feel that is appropriate to a given text.

Certainly stories are a kind of text that explicitly describes affect: the emotions of characters and the environments enveloping those characters. AffectWorld programs would never be perfect, though, because stories themselves are just triggers, and what they trigger in any given person’s mind is somewhat unique. This is perhaps the realm that film directors adapting published stories live in—creating a single visual representation of something that already has thousands or millions of mental representations. But in an AffectWorld, I simplify the problem by assuming from the beginning that the visual pictures are arbitrary. It is only the emotional aspects that matter.

At the time of the demo, some people seemed momentarily impressed, but that was partially because I made them look at a bunch of boring code and then suddenly whipped out the interactive 3D demo. Otherwise, my first version of AffectWorld was just a glimmer of something potentially entertaining. I started another project for that class, which I will talk about in a future blog post.

Screenshot from AffectWorld

Part of the reason I took the class was that I was skeptical of using commonsense databases, especially those based on sentences of human text. During my early natural language explorations I became suspicious of what I later learned was called the “hermeneutic hall of mirrors” by Stevan Harnad—in other words, computer “knowledge” dependent on English (or any other human language) is basically convoluted Mad Libs. However, I did witness other students making interfaces that were able to use shallow knowledge for unique user experiences. Just as Mad Libs lends itself to a kind of surprising, weird humor, so do some of these “commonsense” programs.

An example of Mad Libs in action.

This is somewhat useful for interaction designers—in some cases a “cute” or funny mistake is better than a depressing mistake that triggers the user to throw the computer out the window. Shallow knowledge is another tool that is perfectly fine to use in certain practical applications. But it’s not a major win for human-level or “strong” AI.

The Semantic Web is a similar beast as far as I can tell. Despite recent well-intentioned articles full of buzzwords, the Semantic Web has been around for a long time, at least conceptually. Seven years ago I went to an AI conference where Tim Berners-Lee (the inventor of the World Wide Web) told us how the Semantic Web was the new hotness and about its relationship to AI (AAAI-06 Keynote Address). OWL, the Web Ontology Language standard, had already been started. And now the Semantic Web is apparently finally here, sort of. Companies that rose to power after OWL was conceived, such as Facebook and Google, have built massive semantic networks out of user data. These are great enablers, and we probably have not even seen the killer apps that will come out of these new semantic nets. And in some narrow contexts, semantic-net-powered apps could be smarter than humans. But they do not understand as human organisms do. Sure, there could be a lot of overlap with some level of abstraction in the human mind, and it is not necessarily true that all knowledge is grounded in the same way or at the same level.

Someone will probably post a comment along the lines of “well, that is ultimately how the brain works, just a big semantic net in terms of itself,” which skips the issue that the nodes in computer semantic networks depend on human input and/or interpretation for their meaning. Or somebody might argue that the patterns have inherent meaning, but I don’t buy that for the entirety of human-like meaning, because of our evolutionary history and the philosophical possibility that our primitive mental concepts are merely reality interfaces selected for reproductive ability in certain contexts.

Epilogue

Screenshot from AffectWorld

At the time of the commonsense reasoning class—and also Marvin Minsky’s Society of Mind / Emotion Machine class I took before that—a graduate student named Push Singh was the mastermind behind Open Mind Common Sense. Although I was skeptical of that kind of knowledgebase, I was very interested in his approaches and his courage to tackle some of the Society of Mind and Emotion Machine architectural concepts. His thesis project was in fact called EM-ONE, as in Emotion Machine 1, dealing with levels of cognition and mental critics. I didn’t know him very well, but I talked to him several times and he had encouraged me to keep the dialogue going. I recall one day when I was reading a book about evo-devo in the second-floor cafe at the Harvard Co-op bookstore, ignoring all humans around me; Push happened to be there and made sure to say hello and ask what I was reading.

One day I went to his website to see if there was anything new, and found a message from somebody else posted there: Push was dead. He had committed suicide. Below that, stuck to my computer monitor, lurked an old post-it note with a now unrealizable to-do: “Go chat with Push.”


Image credits:


Mad libs example by Becca Dulgarian via Emily Hill


Sherlock Holmes, Master of Code

Posted in artificial intelligence, programming on March 28th, 2013 by Samuel Kenyon

What if I told you that fictional mysteries contain practical real-world methodologies? I have pointed out the similarities between detectives solving mysteries and software debugging before. My day job of writing code often involves fixing bugs or solving bizarre cases of bad behavior in complex systems.

In a new book called Mastermind: How to Think Like Sherlock Holmes, Maria Konnikova also compares the mental approaches of a detective to non-detective thinking.

But Konnikova has leaped far beyond my own detective model by creating a metaphorical framework for mindfulness, motivation, and deduction, all tied to the fictional world of Sherlock Holmes. This framework is a convenient place to investigate cognitive biases as well. And of course her book discusses problem solving in general, using the crime mysteries of Holmes for examples.

Mastermind book cover

The core components of the metaphor are:

  • The Holmes system.
  • The Watson system.
  • The brain attic.

The systems are modes of human thinking, and you can probably recall circumstances where you operated using a Watson system and others where you used a Holmes system to some degree. Most people are probably more like Watson, who is intelligent but mentally lazy.

Watson

The Holmes system is the aspirational, hyper-aware, self-checking system that’s not afraid to take the road less traveled in order to solve the problem.

Holmes

The brain attic metaphor comes in as a way to organize knowledge purposely instead of haphazardly. The Holmes system actively chooses what to store in its attic, whereas the Watson system just lets memory happen without much management.

Bias

Here’s an excerpt about one of the many bias-related issues discussed, where the “stick” is the character James Mortimer’s walking stick, which has been left behind:

Hardly has Watson started to describe the stick and already his personal biases are flooding his perception, his own experience and history and views framing his thoughts without his realizing it. The stick is no longer just a stick. It is the stick of the old-fashioned family practitioner, with all the characteristics that follow from that connection.

When I programmed military robots and human-robot interfaces for iRobot, I often received feedback and problem reports as directly as possible from the field and/or from testers. I encouraged this because it was great from a user experience point of view, but I had to develop filters and Sherlockian methods in order to maintain sanity and actually solve the issues.

Just trying to comprehend what was wrong at all was sometimes a big hurdle. A tester or field service engineer might report a bug in the manner of his or her personal theory, which—like Watson’s—was heavily biased, and then I had to extract bits of evidence in order to come up with my own theories, which might or might not be the same. Or in some cases the people closest to the field reported the issue and data objectively, but by the time it went through various Watsons, irrational assumptions about the cause had been added. Before you can figure out the problem, you have to figure out the real problem description and what data you actually have.

As Konnikova writes:

Holmes, on the other hand, realizes that there is always a step that comes before you begin to work your mind to its full potential. Unlike Watson, he doesn’t begin to observe without quite being aware of it, but rather takes hold of the process from the very beginning—and starting well before the stick itself.

And the walking stick example isn’t just about the removal of bias. It’s also about increased mindfulness.

Emotions

Emotional bias comes in because it can determine what observations you are even able to access consciously, let alone remember in an organized way. For instance:

To observe the process in action, let’s revisit that initial encounter in The Sign of Four, when Mary Morstan, the mysterious lady caller, first makes her appearance. Do the two men see Mary in the same light? Not at all. The first thing Watson notices is the lady’s appearance. She is, he remarks, a rather attractive woman. Irrelevant, counters Holmes. “It is of the first importance not to allow your judgment to be biased by personal qualities,” he explains. “A client is to me a mere unit, a factor in a problem. The emotional qualities are antagonistic to clear reasoning…”

Emotions are a very important part of human minds; they evolved because of their benefits. I often talk about emotions and artificial intelligence. However, in some very specific contexts, the dichotomy of emotion vs. reason becomes true. Konnikova says:

It’s not that you won’t experience emotion. Nor are you likely to be able to suspend the impressions that form almost automatically in your mind. But you don’t have to let those impressions get in the way of objective reasoning.

Of course, even in the context of reasoning about the solution to a problem, one’s mind is still an emotional system, and that system is providing some benefits such as, perhaps, motivation to solve the problem and keep plugging away at it.

Feedback

Maria Konnikova at Harvard Book Store

Today at the Harvard Book Store, Maria Konnikova gave a presentation about the book Mastermind. I attended, and I asked a question about whether certain professions lent themselves to the Sherlockian methods better given the parallels I had drawn to software debugging in my own experience.

Konnikova’s reply was that any profession with good feedback would be good for the Holmes system approach. She specifically mentioned doctors and bartenders.

Feedback does seem to be important for many systematic things—so she’s probably right. I suppose what makes feedback particularly important to the Sherlockian mindfulness approach is the observation of one’s own mind. And there is also a feedback aspect when one is solving mysteries—the verification or disproving of hypotheses.

Conclusion

Anyway, I won’t try to summarize the whole book. I highly enjoyed it and found many parallels to my personal approach to mental life and especially to the mystery-solving of software systems, including psychological flow and creativity.


The Need for Emotional Experience Does Not Prevent Conscious Artificial Intelligence

Posted in artificial intelligence on January 18th, 2013 by Samuel Kenyon

I have mentioned the book The First Idea by Greenspan and Shanker many times recently. Lest anybody assume I am a fanboy of that tome, I wanted to argue with a ridiculous statement the authors make regarding consciousness and artificial intelligence.

Greenspan and Shanker make it quite clear that they don’t think artificial intelligence can have consciousness:

What is the necessary foundation for consciousness? Can computers be programmed to have it or any types of truly reflective intelligence? The answer is NO! Consciousness depends on affective experience (i.e. the experience of one’s own emotional patterns). True affects and their near infinite variations can only arise from living biological systems and the developmental processes that we have been discussing.

Let’s look at that logically. The first part of their argument is that Consciousness (C) depends on Affective experience (A):
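
In rough implication form (my own notation, not the authors’):

$$ C \Rightarrow A $$

That is, if a system is conscious, then it has affective experience. The rest of their argument hinges on the claim that true affects can arise only from living biological systems.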

Read more »


Infants, Monkeys, Love, and AI

Posted in artificial intelligence on December 27th, 2012 by Samuel Kenyon

Perhaps you have seen pictures or videos from the 1960s of rhesus monkey babies clinging to inanimate surrogate mothers.

In one of Harlow’s experiments, a baby monkey clings to the softer surrogate mother.

These experiments were by Harry Harlow, who eventually went against the psychology mainstream to demonstrate that love—namely caregiver-baby affection—was required for healthy development.

Dr. Harlow created inanimate surrogate mothers for the rhesus infants from wire and wood. Each infant became attached to its particular mother, recognizing its unique face and preferring it above all others. Harlow next chose to investigate if the infants had a preference for bare wire mothers or cloth covered mothers. For this experiment he presented the infants with a cloth mother and a wire mother under two conditions. In one situation, the wire mother held a bottle with food and the cloth mother held no food, and in the other, the cloth mother held the bottle and the wire mother had nothing.

Overwhelmingly, the infant macaques preferred spending their time clinging to the cloth mother. Even when only the wire mother could provide nourishment, the monkeys visited her only to feed. Harlow concluded that there was much more to the mother/infant relationship than milk and that this “contact comfort” was essential to the psychological development and health of infant monkeys and children. [1]

According to Stuart G. Shanker [2], various primates reach levels of functional-emotional development similar to the first 2-3 levels (out of 9) that humans accomplish. Perhaps part of the difference is that the infancy period is much longer for humans.

Although a baby rhesus doesn’t express its positive affects with the same sorts of wide joyful smiles that we see in human infants between the ages of two and five months, in other respects it behaves in a manner similar to that of a human infant. The rhesus baby spends lots of time snuggling into its mother’s body or looking keenly at her face. It visibly relaxes while being rocked, and vocalizes happily when the mother plays with it. We can even see the baby rhythmically moving its arms and legs and vocalizing in time to its caregiver’s movements and vocalizations.

Shanker said this about Harlow’s experiments:

Although it was clear that the infants were deriving great comfort from the cloth-covered surrogates, they still suffered from striking social and emotional disorders.

One might interject here: Well, so what? Who cares about social and emotional disorders? (Well, aside from gunshot victims.) What about intelligence? What about self-awareness? The thing is, though, that intelligence and possibly even the capacity for basic symbolic thought—ideas—are developed via emotions and social interactions.

Read more »
