A World of Affect

Posted in artificial intelligence, interaction design on August 2nd, 2013 by Samuel Kenyon

Back in the fall of 2005 I took a class at the MIT Media Lab called Commonsense Reasoning for Interaction Applications taught by Henry Lieberman and TA’d by Hugo Liu.

Screenshot from AffectWorld

For the first programming assignment I made a project called AffectWorld, which allows the user to explore in 3D space the affective (emotional) appraisal of any document.

The program uses an affective normative ratings word list expanded with the Open Mind Common Sense (OMCS) knowledgebase. This norms list is used both for appraising input text and for generating an affect-rated image database. The affective norms data came from a private dataset created by Margaret M. Bradley and Peter J. Lang at the NIMH Center for the Study of Emotion and Attention, consisting of English words rated in terms of pleasure, arousal and dominance (PAD).
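The appraisal step can be sketched roughly as follows. The norms dictionary and its entries here are hypothetical stand-ins, since the actual Bradley and Lang ratings are private:

```python
# A minimal sketch of text appraisal against a PAD norms list.
# PAD_NORMS is a hypothetical stand-in for the (private) affective
# norms dataset: word -> (pleasure, arousal, dominance) on a 1-9 scale.
PAD_NORMS = {
    "storm": (3.0, 6.5, 4.0),
    "sunshine": (8.1, 5.3, 5.4),
    "funeral": (1.4, 4.9, 3.0),
    "party": (7.9, 6.7, 6.3),
}

def appraise(text):
    """Average the PAD ratings of all rated words found in the text."""
    hits = [PAD_NORMS[w] for w in text.lower().split() if w in PAD_NORMS]
    if not hits:
        return None  # no rated words: no appraisal
    return tuple(sum(dim) / len(hits) for dim in zip(*hits))

pad = appraise("a storm rolled in during the party")
```

In the real program the norms list was also expanded with OMCS knowledge, so many more words than the literal ratings list would land a hit.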

To generate the interactive visualization, AffectWorld analyzes a text, finds images that are linked affectively, and applies them to virtual 3D objects, creating a scene filled with emotional metaphors.
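The image-matching step can then be sketched as a nearest-neighbor lookup in PAD space. The filenames and ratings below are hypothetical, standing in for the affect-rated image database:

```python
import math

# Hypothetical affect-rated image database:
# filename -> (pleasure, arousal, dominance) on a 1-9 scale.
IMAGE_PAD = {
    "eric_grin.jpg": (8.0, 6.0, 6.0),
    "eric_scowl.jpg": (2.5, 6.2, 5.5),
    "eric_yawn.jpg": (4.5, 2.0, 4.0),
}

def closest_image(pad):
    """Return the image whose PAD rating is nearest (Euclidean) to the appraisal."""
    return min(IMAGE_PAD, key=lambda name: math.dist(pad, IMAGE_PAD[name]))
```

Each virtual 3D object would then be textured with an image chosen this way (or sampled from the nearest few, to avoid a wall of identical cubes).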

The image files were scraped from a few places, including Eric Conveys an Emotion, in which some guy photographed himself making every emotional expression he could think of and then started doing requests. I used OGRE for the 3D graphic engine.

Screenshot from AffectWorld

So what was the point? If I remember correctly, somebody asked that in class and Hugo interjected that it was art. Basically, the emotional programming looks to an outsider like a pseudo-random image selector applied to cubes in a 3D world…well, that’s not completely true. With a lot more pictures to choose from (each with accurate descriptive words assigned), I think a program like this could give some kind of emotional feel appropriate to a text.

Certainly stories are a kind of text that explicitly describes affect: the emotions of characters and the environments enveloping them. AffectWorld programs would never be perfect, though, because stories themselves are just triggers, and what they trigger in any given person’s mind is somewhat unique. This is perhaps the realm that film directors adapting published stories live in—creating a single visual representation of something that already has thousands or millions of mental representations. But in an AffectWorld, I simplify the problem by assuming from the beginning that the pictures themselves are arbitrary. Only the emotional aspects matter.

At the time of the demo, some people seemed momentarily impressed, but that was partially because I made them look at a bunch of boring code and then suddenly whipped out the interactive 3D demo. Otherwise, my first version of AffectWorld was just a glimmer of something potentially entertaining. I started another project for that class which I will talk about in a future blog post.

Screenshot from AffectWorld

Part of the reason why I took the class was because I was skeptical of using commonsense databases, especially those based on sentences of human text. During my early natural language explorations I became suspicious of what I later learned was called the “hermeneutic hall of mirrors” by Stevan Harnad—in other words, computer “knowledge” dependent on English (or any other human language) is basically convoluted Mad Libs. However, I did witness other students making interfaces which were able to make use of shallow knowledge for unique user experiences. Just as Mad Libs lends itself to a kind of surprising weird humor, so do some of these “commonsense” programs.

An example of Mad Libs in action.

This is somewhat useful for interaction designers—in some cases a “cute” or funny mistake is better than a depressing mistake that triggers the user to throw the computer out the window. Shallow knowledge is another tool that is perfectly fine to use in certain practical applications. But it’s not a major win for human-level or “strong” AI.

The Semantic Web is a similar beast as far as I can tell. Despite recent well-intentioned articles full of buzzwords, the Semantic Web has been around for a long time, at least conceptually. Seven years ago I went to an AI conference where Tim Berners-Lee (the inventor of the World Wide Web) told us how the Semantic Web was the new hotness and described its relationship to AI (AAAI-06 Keynote Address). OWL, the web ontology language standard, had already been started. And now the Semantic Web is apparently finally here, sort of. Companies like Facebook and Google, which rose to power after OWL was conceived, have built massive semantic networks out of user data. These are great enablers, and we probably have not even seen the killer apps that will come out of these new semantic nets. In some narrow contexts, semantic-net-powered apps could be smarter than humans. But they do not understand as human organisms do. Sure, there could be a lot of overlap with some level of abstraction in the human mind, and it is not necessarily true that all knowledge is grounded in the same way or at the same level.

Someone will probably post a comment along the lines of “well, that is ultimately how the brain works, just a big semantic net in terms of itself,” which skips over the issue that the nodes in computer semantic networks depend on human input and/or interpretation for their meaning. Or somebody might argue that the patterns have inherent meaning, but I don’t buy that for the entirety of human-like meaning, because of our evolutionary history and the philosophical possibility that our primitive mental concepts are merely reality interfaces selected for reproductive ability in certain contexts.


Screenshot from AffectWorld

At the time of the commonsense reasoning class—and also Marvin Minsky’s Society of Mind / Emotion Machine class I took before that—a graduate student named Push Singh was the mastermind behind Open Mind Common Sense. Although I was skeptical of that kind of knowledgebase, I was very interested in his approaches and his courage to tackle some of the Society of Mind and Emotion Machine architectural concepts. His thesis project was in fact called EM-ONE, as in Emotion Machine 1, dealing with levels of cognition and mental critics. I didn’t know him very well, but I talked to him several times and he had encouraged me to keep the dialogue going. I recall one day when I was reading a book about evo-devo in the second-floor cafe at the Harvard Co-op bookstore, ignoring all humans around me; Push happened to be there and made sure to say hello and ask what I was reading.

One day I went to his website to see if there was anything new, and found a message from somebody else posted there: Push was dead. He had committed suicide. Below that, stuck to my computer monitor, lurked an old post-it note with a now unrealizable to-do: “Go chat with Push.”

Image credits:

Mad Libs example by Becca Dulgarian via Emily Hill

The Timeless Way of Building Software, Part 1: User Experience and Flow

Posted in interaction design on May 31st, 2012 by Samuel Kenyon

The Timeless Way of Building by Christopher Alexander [1] was exciting. As I read it, I kept making parallels between building/town design and software design.


We’re not talking any kind of architecture here. The whole point of the book is to explain a theory of “living” buildings. They are designed and developed in a way that is more like nature: iterative, flexible, and embracing change and repair.

Design is recognized not as the act of some person making a blueprint—it’s a process that’s tied into construction itself. Alexander’s method is to use a language of patterns to generate an architecture that is appropriate for its context. It will be unique, yet share many patterns with other human-used architectures.

This architecture theory includes a concept of the Quality Without a Name. This quality is achieved in buildings and towns in ways more organic than the popular modern methods (of course there are exceptions, and partially “living” modern architectures).

User Experience

Humans are involved in every step. Although patterns are shared, each building has its own appropriate language which uses only certain patterns and puts them in a particular order. The entire design and building process serves human nature in general, and specifically how humans will use this particular building and site. Is that starting to stir up notions of usability or user-centered design in your mind?

UX Conference 2012: Design Studio

Posted in interaction design on May 11th, 2012 by Samuel Kenyon

One of the particularly good presentations at the UPA Boston 11th Annual User Experience Conference (#UPABOS12) was called “Design Studio” by designer Adam Connor.

The main points are:

  • Why brainstorming is usually implemented wrong.
  • How to properly generate ideas and consensus (the “design studio”).
    • Charrettes (used in the design studio process).


Going from many concepts to one good one

As a super condensed version of the presentation, the main aspect of the design process that concerns us here is how to go from lots of concepts to the best single concept.

At the beginning of a project, or perhaps when some major failure has happened, some companies might try to throw people into a room for a “brainstorm” session. But…

User Experience Conference 2012: Link Blast

Posted in interaction design on May 7th, 2012 by Samuel Kenyon

This post lists tools and websites I learned about today at the UPA Boston 11th Annual User Experience Conference.

Tools for Mobile Prototyping and Usability Testing

Although I’m familiar with wearable computers, especially for the military, and developed for PDAs back in the day, I am fairly new to the current popular commercial mobile platforms like Android and iOS. So here is a blast of links taken primarily from Vijay Hanumolu’s UPA presentation “Whirlwind Tour of Mobile Usability Testing Apps & Services.”

Responsive Design

Vijay (who works at Mobiquity) mentioned Responsive Design several times, which means crafting a website/app as a single source of content that can automatically display on many types of devices/screens. The term assumes HTML with CSS3, although I suppose other technologies could be used/tested for the same goals. Here’s a website that lets you test responsive design.

Detailed Design

  • Adobe Shadow (Chrome plugin)
    Inspect and preview web workflows on iOS and Android devices.
  • Blueprint for iPad
    iOS UI Design app.
  • AppCooker
    iOS mockups/prototypes app that uses the actual Apple UI. It can’t export to Xcode yet (i.e., convert the mockup into the beginning of a working program), but they are supposedly working on a Mac application to do that.
  • Nokia Flowella
    Prototypes/mockups (apparently just for Symbian).
