Languages, Assemble!

Posted in making and hacking, programming, robotics on October 9th, 2012 by Samuel Kenyon

Look what I found in my closet:

Motorola 68HC11 on an eval board made by Axiom Manufacturing

This is my old Motorola 68HC11 microcontroller board. Here’s a close-up photo of the microcontroller itself:

Motorola 68HC11 Microcontroller IC

For those who aren’t familiar with the term, a microcontroller is basically a computer on a chip. They are often very tiny and low-powered, and they are ubiquitous whether you realize it or not.
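
Since the post title promises languages, here is a minimal sketch in C of the kind of program such a chip runs: no operating system, just an endless loop poking a memory-mapped I/O register. The port address and bit below are placeholders I made up for illustration, not the actual 68HC11 memory map.

```c
#include <stdint.h>

/* A made-up memory-mapped output port. On a real microcontroller the
 * address comes from the chip's memory map; treat this one as a
 * placeholder, not the actual 68HC11 register block. */
#define PORT_OUT (*(volatile uint8_t *)0x1004u)
#define LED_BIT  0x01u

static void crude_delay(void)
{
    /* Burn cycles; a real program would use a hardware timer. */
    for (volatile uint16_t i = 0; i < 50000u; i++) {
        /* spin */
    }
}

int main(void)
{
    for (;;) {
        PORT_OUT ^= LED_BIT;   /* toggle an LED wired to one port pin */
        crude_delay();
    }
}
```

On a desktop you would never write to a raw address like that, but on a microcontroller there is nothing between your code and the hardware.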


Social Games for Health Behavior Modification

Posted in interaction design on January 16th, 2012 by Samuel Kenyon

Gamification is a topic I mentioned not too long ago (see this post). Recently I attended a Boston CHI presentation by Chris Cartter called “The Socialization and Gamification of Health Behavior Change Apps.”

Gamification

One thing Cartter said that sounds right, and may resonate with some of my readers, is that games are fuzzy rather than perfect sequential processes, and health behavior change is much the same way.

So gamification in this area might actually result in better methods than old-fashioned x-step procedures.

Cartter works for a Boston company called MeYouHealth, which is cranking out well-being apps for the iPhone. The games largely rely on a concept of small actions: little by little, a specific goal is approached.
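
As a rough illustration of that small-action idea (my own toy sketch in C, not anything from MeYouHealth), each tiny action adds one fixed step of progress toward a single well-defined goal:

```c
#include <stdio.h>

/* Illustrative sketch of the "small action" idea: each tiny action
 * nudges the user one fixed step closer to a well-defined goal.
 * The names and numbers here are made up for illustration. */
typedef struct {
    const char *name;
    int steps_done;
    int steps_needed;
} Goal;

static void small_action(Goal *g)
{
    if (g->steps_done < g->steps_needed) {
        g->steps_done++;
        printf("%s: %d/%d small actions done\n",
               g->name, g->steps_done, g->steps_needed);
    }
}

int main(void)
{
    Goal walk = { "Walk 10 minutes a day for a week", 0, 7 };
    for (int day = 0; day < 7; day++) {
        small_action(&walk);  /* one small action per day */
    }
    return 0;
}
```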

Mobile

The whole tie-in with mobile phones is a big trend for everything, of course, but it’s possible that health behavior change is particularly well suited to mobile phone apps. According to behavior design guru BJ Fogg, “Mobile phones will become the #1 platform for persuasion,” i.e., behavior change.

Socialization

As indicated by the title, the other major aspect of Cartter’s presentation was the social-network angle. Hopefully Boston CHI will post the video eventually, but the gist is that there are lots of social graphs that can reveal interesting things, such as self-emergent groups, individuals who act as catalysts, and how new members integrate into groups.
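
To make that a bit more concrete, here is a toy sketch (entirely my own invention, not anything Cartter showed) of the simplest possible social-graph analysis: count each member’s connections and treat the best-connected member as a crude stand-in for a “catalyst.” Real analyses of emergent groups are of course far more sophisticated.

```c
#include <stdio.h>

/* Toy social graph: members as nodes, "interacts with" as edges.
 * A crude proxy for a "catalyst" is simply the member with the most
 * connections. Entirely illustrative data. */
#define N 5

int main(void)
{
    const char *names[N] = { "Ana", "Ben", "Cho", "Dee", "Eli" };
    /* adjacency matrix: adj[i][j] = 1 means i and j are connected */
    int adj[N][N] = {
        {0,1,1,1,0},
        {1,0,0,1,0},
        {1,0,0,0,1},
        {1,1,0,0,1},
        {0,0,1,1,0},
    };

    int best = 0, best_degree = -1;
    for (int i = 0; i < N; i++) {
        int degree = 0;
        for (int j = 0; j < N; j++) degree += adj[i][j];
        printf("%s has %d connections\n", names[i], degree);
        if (degree > best_degree) { best_degree = degree; best = i; }
    }
    printf("Most connected (a crude 'catalyst' proxy): %s\n", names[best]);
    return 0;
}
```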


Gamification and Self-Determination Theory

Posted in interaction design on November 9th, 2011 by Samuel Kenyon

Games are not just for fun anymore—and indeed “fun” is not a good enough description for the psychology of gameplay anyway. Designers are trying to “gamify” applications which traditionally were not game-like at all. And this isn’t limited to just the Serious Games movement that’s been around for several years. This is a type of design thinking that has spread from the gaming world and is now merging with the User Experience Design / Interaction Design world.

Beyond the hype and the mistakes of gamification that may be going on right now, there does seem to be a style of design thinking emerging whose intention is to increase user engagement and motivation in products. I assume the business angle is that this can result in attracting more users and keeping them longer.

Dustin DiTommaso, experience design director at Mad*Pow, presented “Beyond Gamification: Architecting Engagement Through Game Design” yesterday. As I already mentioned, he argues that “fun” is not a good definition. His main psychological theory (at least for this presentation) is Self-Determination Theory (SDT). What follows are my notes based on DiTommaso’s presentation (hopefully I haven’t butchered it too much).

Games keep people intrinsically motivated. There are three intrinsic-motivation needs (these terms are directly from SDT):

  1. Competence
  2. Autonomy
  3. Relatedness

Competence

This is about meaningful growth. Good games achieve a path to mastery. The user experiences increased skill over time. There are nested short-term achievable goals that lead to success of the overarching long-term goal.

The experience should be that of a challenge. If you’re familiar with Csíkszentmihályi’s Flow, this is similar to (or perhaps exactly the same as) that.

As with most good interaction design, there has to be feedback. Specifically, there has to be:

  1. Meaningful information
  2. Recognition
  3. Next steps

Action-Rules-Feedback loop

On the meaningful-info item: progress should be made visible, but rewards have to be meaningful. Rewards for meaningless actions are not good in the long term; users will hack (or “game”) the system if they get bored and/or detached.
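
Here is how I picture that action-rules-feedback loop as code (my own toy model, not DiTommaso’s): every action passes through a rule that decides whether it was meaningful, only meaningful actions earn progress, and the feedback always reports meaningful information, recognition, and a next step.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy action-rules-feedback loop. The rule is deliberately strict:
 * only meaningful actions earn progress, so there is nothing to "game"
 * by repeating empty actions. All names and numbers are invented. */
typedef struct {
    int progress;   /* visible progress = meaningful information */
    int target;
} PlayerState;

static bool rule_is_meaningful(int action_effort)
{
    return action_effort > 0;   /* empty actions earn nothing */
}

static void feedback(const PlayerState *s)
{
    printf("Progress: %d/%d", s->progress, s->target);           /* info */
    if (s->progress >= s->target) {
        printf(" -- goal reached, well done!\n");                 /* recognition */
    } else {
        printf(" -- next step: one more meaningful action\n");    /* next steps */
    }
}

int main(void)
{
    PlayerState s = { 0, 3 };
    int actions[] = { 1, 0, 1, 0, 1 };   /* mix of meaningful and empty actions */
    for (int i = 0; i < 5; i++) {
        if (rule_is_meaningful(actions[i])) {
            s.progress++;
        }
        feedback(&s);   /* action -> rules -> feedback, every cycle */
    }
    return 0;
}
```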

Screenshot from Rockband 3 (developed by Harmonix)

DiTommaso says that you should strive for “juicy” feedback. For example, the interface for the popular video game series Rock Band is entirely “juicy” feedback. Visual Thesaurus is a good example of juicy feedback that is less flashy than Rock Band.

Failure should be allowed in a graceful manner if it provides an opportunity to learn and grow. This might sound weird for interaction design where usually you don’t want users to fail at all. Mad*Pow supposedly has done research to back this up.

Autonomy

The game belongs to the user. Choice, control, and personal preference lead to deep engagement and loyalty. The feedback has to be right for the type of autonomy a given user needs. Experience pathways can be designed “on rails” to limit freedom or to give only the illusion of it.

To motivate sustained interest, the game should provide opportunities for action. A ski mountain, for example, literally has multiple pathways and multiple levels of difficulty.

Relatedness

This is about mutual dependence. We’re intrinsically motivated to seek meaningful connections with others.

A game should provide meaningful communities of interest. The users should somehow be able to value something in the game beyond the mechanics that run the system. The users should get recognition for actions that matter to them. And they should be able to inject their own goals. An example of a system that allows user-customizable goals is Mint.com.

It’s also worthwhile to think of non-human relatedness. Dialogues between user interface avatars and humans actually matter and affect motivation. They are a type of relationship. So scripts, text, tones, etc. are very important.

Conclusion

This is my rough interpretation of DiTommaso’s “Framework for Success” intended for designers and related professions.

  1. Why gamify? Consider the users and the business cases.
  2. Research the player profile(s) (perhaps game-oriented personas?). This research can and should inspire the design. What are the motivational drivers? Is it more about achievement or enjoyment? Is it more about structure or freedom? Is it more about control of others or connecting with others? Is it more about self interest or social interest?
  3. Goals and objectives: What’s the Long Term Goal? What steps? Etc.
  4. Skills and actions: consider what physical, mental, and social abilities are necessary. Can the skills be tracked and measured?
  5. Look through the lenses of interest. The concept of “lenses of interest” comes from Jesse Schell. The lenses DiTommaso provided are:
    • Competition types
    • Time pressure
    • Scarcity
    • Puzzles
    • Novelty
    • Levels
    • Social pressure/proof (the herd must be right)
    • Teamwork
    • Currency
    • Renewals and power-ups
  6. Desired outcomes: What are the tangible and intangible rewards? What outcomes are triggered by user actions vs. schedules? How do users see and feel incremental success and failure on the way to the Ultimate Objective?
  7. Play-test and polish: Platforms are never done. This isn’t really specific to gamification. I would say this is about the general shift from waterfall to iterative development methodologies (which I have used successfully in my own work). This can even extend out to the actual end users—they can be involved in the loop and even expect updates for improvement.


Image Credits:
1. Nightrob
2. Dustin DiTommaso / Mad*Pow
3. IGN
4. Mount Sunapee


Embedded Systems Expo 2011: A Few Notes

Posted in artificial intelligence, interfaces, robotics on September 28th, 2011 by Samuel Kenyon

Today I was at the 2011 Embedded Systems Conference / DesignCon exposition. I typically attend technology expos in Boston, keeping an eye out for devices and software that I might be able to use in my job. But of course, I’m also interested in what embedded systems technology will enable in the near future.

There wasn’t anything mind-blowingly cool, but I will mention a few things that may be of interest to my readers.

First, IBM had an instantiation of Watson there, which was housed in a large black monolith that would be menacing if not for the colorful touch screen. Yes, Watson can run on a computer that IBM actually sells, which is the IBM Power 750 server.

IBM Watson

I started playing Jeopardy against this Watson, but lost interest when I found that there wasn’t any voice recognition (to score a question after winning the buzz, the software would show you the correct answer, at which point you would honorably press a button to indicate whether you had actually gotten it right).

I also experienced NLT’s new 3D display (samples became available in June 2011). This is an LCD module that does not require glasses to see the 3D, and although I stared at it for less than a minute, it did work and I did not have to be in a very specific location relative to the screen. I’d like to try an actual application that makes use of mixed 3D/2D. That’s part of what’s supposedly unique about this 3D LCD: it can mix 2D and 3D content at the same resolution, thanks to their HDDP (horizontally double-density pixel) tech. NLT also claims their LCD reduces crosstalk (when your brain’s visual system mixes the right-eye and left-eye images).

NLT 3D LCD Tech

Speaking of display tech, I also played with Uneo’s Force Imaging Array System and 3D-Touch Module. The force array was not combined with a screen, and I’m not sure exactly what the killer app(s) would be; they claim it could be used for various unspecified medical, automotive, and industrial applications. But I tried it and it works, and they told me they would soon have one with even higher resolution (the current one has 2500 elements).

The 3D-Touch module was embedded in a tablet, and that also worked pretty well. The example app was of course a paint program, where you can see how your finger’s pressure affects the brush width as you paint. This doesn’t use the array; instead it uses sensors at the corners of the screen. That means you should be able to add it to any existing screen, since it doesn’t have to be layered into the display stack. I certainly could imagine this being useful, at least occasionally, in various apps on my phone. Uneo has demoed it with Android devices so far but plans to support the other mobile OSes.
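
Here is a guess at how the pressure-to-brush-width mapping might work (my own sketch; I don’t know how Uneo’s demo actually computes it): sum the four corner force sensors, normalize, and map the result linearly onto a brush-width range.

```c
#include <stdio.h>

/* Guess at a pressure-sensitive paint brush: sum the four corner force
 * sensors, normalize, and map linearly onto a brush-width range.
 * Sensor count, full-scale value, and widths are made up. */
#define NUM_SENSORS   4
#define MAX_FORCE     1000.0   /* arbitrary full-scale reading per sensor */
#define MIN_WIDTH_PX  2.0
#define MAX_WIDTH_PX  40.0

static double brush_width(const double force[NUM_SENSORS])
{
    double total = 0.0;
    for (int i = 0; i < NUM_SENSORS; i++) total += force[i];

    double pressure = total / (NUM_SENSORS * MAX_FORCE);  /* 0.0 .. 1.0 */
    if (pressure > 1.0) pressure = 1.0;
    return MIN_WIDTH_PX + pressure * (MAX_WIDTH_PX - MIN_WIDTH_PX);
}

int main(void)
{
    double light_touch[NUM_SENSORS] = { 50, 40, 60, 30 };
    double firm_press[NUM_SENSORS]  = { 700, 650, 800, 600 };

    printf("light touch -> %.1f px brush\n", brush_width(light_touch));
    printf("firm press  -> %.1f px brush\n", brush_width(firm_press));
    return 0;
}
```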

Uneo 3D Touch example (photo from Uneo)

Microsoft was there. Nothing amazingly new…they had the Xbox 360 Wireless Speed Wheel, which ships in October as far as I know. It seems like such an obvious controller that I was surprised that it didn’t come out until 2011.

Xbox 360 Wireless Speed Wheel (stock photo)

They had a Kinect there, of course, and that’s always fun to play with; I spent about 10 minutes chopping flying fruit with my sword-hands. For those who are excited by this prospect, Fruit Ninja has been available since last month. For those living under rocks, Kinect is a massively best-selling controller for the Xbox 360 that tracks the movement of your body as input for games. When it came out, people immediately started hacking it and using the sensor for robot applications. Microsoft didn’t like that at first, but now they’ve given in and offer a legit SDK (Software Development Kit) for it.

Fruit Ninja Kinect (stock screenshot)

I was pleased to see that one attendee teleconned in with a VGo telepresence robot. Note that this photo is of the back of the robot.

VGo robot in use at ESC 2011
