Comparison: ChainLocker vs. Hierarchical Mutexes

Posted in programming on March 8th, 2013 by Samuel Kenyon

In “Concurrent Programming with Chain Locking,” Gigi Sayfan presents a C# class demonstrating chain-locked operations on a hierarchical data structure. This reminded me of the lock hierarchies described by Anthony Williams in the book C++ Concurrency in Action.

To take a step back for a moment, the overall goal is to create multithreaded code that doesn’t cause deadlocks, race conditions, and so on. Although it may seem like a confusion of metaphors, lock hierarchies are a form of hand-over-hand locking, which is essentially a defined lock ordering. I think it would be fair to call a “chain” a particular path of locking through a hierarchy. Defining a lock ordering is what you do when you can’t use the better approach, which is to acquire two or more locks in a single operation, for instance with the C++11 std::lock() function.
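For illustration, here is a minimal sketch of the single-operation approach. The Account struct and transfer() function are my own example, not code from Sayfan’s article or Williams’ book:

```cpp
#include <mutex>

struct Account {
    std::mutex m;
    double balance = 0.0;
};

// Acquire both mutexes in one operation: std::lock() locks them using a
// deadlock-avoidance algorithm, and the std::adopt_lock guards simply take
// ownership so the mutexes are released when the function exits.
void transfer(Account& from, Account& to, double amount)
{
    std::lock(from.m, to.m);
    std::lock_guard<std::mutex> lock_from(from.m, std::adopt_lock);
    std::lock_guard<std::mutex> lock_to(to.m, std::adopt_lock);
    from.balance -= amount;
    to.balance   += amount;
}
```

No matter which order two threads name the accounts in, std::lock() avoids the deadlock that manual, inconsistent lock ordering would invite.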

As Herb Sutter pointed out in his article “Use Lock Hierarchies to Avoid Deadlock,” you may already have layers in your application (or at least in how its data is organized in a given context). This can be taken advantage of when making the software concurrency-friendly. The general idea is that a layer does not call into the layers above it. For mutexes, this means that a thread cannot lock a mutex from the same or a higher layer while it already holds a mutex from a lower layer.
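Williams’ book develops this idea as a hierarchical_mutex class; what follows is my own stripped-down sketch of the same idea (simplified, not his exact code). Each mutex carries a layer number, and a thread may only acquire a mutex from a strictly lower layer than any it already holds:

```cpp
#include <climits>
#include <mutex>
#include <stdexcept>

// Each mutex belongs to a numbered layer. A thread may only lock a mutex whose
// layer is strictly lower than the lowest layer it currently holds -- the
// "never lock upward" rule described above. Violations throw immediately
// instead of deadlocking mysteriously later.
class layered_mutex {
    std::mutex internal;
    const unsigned long layer;
    unsigned long previous_layer = 0;
    static thread_local unsigned long this_thread_layer;

    void check_hierarchy() const {
        if (this_thread_layer <= layer)
            throw std::logic_error("mutex hierarchy violated");
    }

public:
    explicit layered_mutex(unsigned long layer_value) : layer(layer_value) {}

    void lock() {
        check_hierarchy();
        internal.lock();
        previous_layer = this_thread_layer;
        this_thread_layer = layer;
    }

    // Locks must be released in reverse order of acquisition so that the
    // saved previous_layer value is restored correctly.
    void unlock() {
        this_thread_layer = previous_layer;
        internal.unlock();
    }
};

// A fresh thread holds no locks, so it may lock a mutex from any layer.
thread_local unsigned long layered_mutex::this_thread_layer = ULONG_MAX;
```

Because it provides lock() and unlock(), a layered_mutex works with std::lock_guard like any other mutex; a fuller version would also supply try_lock().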



The Timeless Way of Building Software, Part 1: User Experience and Flow

Posted in interaction design on May 31st, 2012 by Samuel Kenyon

The Timeless Way of Building by Christopher Alexander [1] was exciting. As I read it, I kept making parallels between building/town design and software design.

Architecture

We’re not talking about just any kind of architecture here. The whole point of the book is to explain a theory of “living” buildings. They are designed and developed in a way that is more like nature—iterative, embracing change, flexibility, and repair.

Design is recognized not as the act of some person making a blueprint—it’s a process that’s tied into construction itself. Alexander’s method is to use a language of patterns to generate an architecture that is appropriate for its context. It will be unique, yet share many patterns with other human-used architectures.

This architecture theory includes a concept of the Quality Without a Name. That quality is achieved in buildings and towns in a way that is more organic than the prevailing approaches (of course there are exceptions, and some modern architectures are partially “living”).

User Experience

Humans are involved in every step. Although patterns are shared, each building has its own appropriate language which uses only certain patterns and puts them in a particular order. The entire design and building process serves human nature in general, and specifically how humans will use this particular building and site. Is that starting to stir up notions of usability or user-centered design in your mind?



Architecting Emotional Robots

Posted in artificial intelligence, robotics on April 7th, 2011 by Samuel Kenyon

Creating a robot with emotions is a software development problem.

How does the zebra feel right now?

Emotion is a matter of cognitive architecture. It is part of the information system of the mind. Recreating “emotions” really means recreating a type of mind that uses internal mechanisms similar to our own. Making an emotional machine requires the proper design, implementation, and deployment.

The reason I added “deployment” is that the environment is quite important. The mind is a system that interacts with other entities—there is an information flow. The level of externalism required affects how situated and/or embodied an artificial agent has to be. That is where robots come in. However, a robot and its world can be simulated.

What Do I Mean by Architecture?

The metaphor of architecture lets us think of the mind as a building.  But really I mean a large building that was built from millions of interlocking parts and took months or years to design.  And as with buildings, a mind is not just designed and built—it also has to survive the real world.

The metaphor breaks down a bit when you consider how natural minds emerge from ever-changing, growing (or shrinking), adaptable, flexible bags of meat.  However, in their own slow way buildings do change via maintenance and new additions, and they are in fact flexible and movable so as to survive wind and earthquakes.

Of equal or greater importance, I also think of architecture here in its software engineering usage [1]:

Architecture is the fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution.

Why Do We Need an Entire Architecture?

One might think that a programmer could just add in some emotion—make a few calls to the emotion function, or tack on a loosely coupled Emotion module. But based on the best theories so far from researchers, emotions are actually many things going on in the mind. And they are all inherent in the system and intertwined with evolutionarily old parts. In other words, if you ripped out all the stuff related to emotions, you wouldn’t have much left. In future posts I will go into the details of potential architectures and why there are certain dependencies.
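To make the contrast concrete, here is a caricature of the tacked-on approach. It is entirely my own sketch with made-up names (EmotionModule, NaiveRobot), showing an “emotion” that is computed after the fact and influences nothing else in the system—precisely the decoupling argued against above:

```cpp
#include <iostream>
#include <string>

// A loosely coupled "Emotion module" bolted onto a robot control loop.
// All names here are hypothetical, purely for illustration.
struct EmotionModule {
    std::string currentEmotion(double batteryLevel, bool obstacleNearby) const {
        if (obstacleNearby) return "afraid";
        if (batteryLevel < 0.2) return "distressed";
        return "content";
    }
};

struct NaiveRobot {
    EmotionModule emotion;
    double batteryLevel = 0.8;
    bool obstacleNearby = false;

    void step() {
        // Perception, planning, and action would happen here, entirely
        // unaffected by emotion...
        // ...and "emotion" is merely computed and reported at the end.
        std::cout << "feeling: "
                  << emotion.currentEmotion(batteryLevel, obstacleNearby) << "\n";
    }
};

int main() {
    NaiveRobot robot;
    robot.step();  // prints a label, but the label changes no behavior
}
```

The label comes out, but nothing upstream depends on it; that is why a bolt-on module cannot reproduce the role emotions play in a real mind.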

References:
[1] IEEE Computer Society, IEEE Recommended Practice for Architectural Description of Software-Intensive Systems, IEEE Std 1471-2000, 2000. Via P. Eeles, “What is a software architecture?”

Image credits:

1. safari-partners
2. Marko Ljubez


Five Ways Machines Could Fix Themselves

Posted in interaction design, robotics on September 30th, 2010 by Samuel Kenyon

Now published on h+ magazine: my article “Five Ways Machines Could Fix Themselves.” Check it out!

As I see cooling fans die and chips fry, as I see half the machines in a laundry room decay into despondent malfunctioning relics, as my car invents new threats every day along the theme of catastrophic failure, and as I hear the horrific clunk of a “smart” phone diving into the sidewalk with a wonderful chance of breakage, I wonder why we put up with it. And why can’t this junk fix itself?

Design guru and psychologist Donald A. Norman has pointed out how most modern machines hide their internal workings from users. Any natural indicators, such as mechanical sounds, and certainly the view of mechanical parts, are muffled and covered. As much machinery as possible has been replaced by electronics which are silent except for the sound of fans whirring. And electronics are even more mysterious to most users than mechanical systems are.

Our interfaces to machines are primarily composed of various kinds of transducers (like buttons), LEDs (those little glowing lights), and display screens. We are, at the very least, one—if not a dozen—degrees removed from the implementation model. As someone who listens to user feedback, I can assure you that a user’s imagining of how a system works is often radically different than how it really works.

Yet with all this hiding away of the dirty reality of machinery, we have not had a proportional increase in machine self-support.

Argument: Software, in some cases, does fix itself. Specifically, I am thinking about automatic or pushed software updates. And because that software runs on a box, it is by default also fixing a machine. For instance, console game platforms like the Xbox 360 and PlayStation 3 receive numerous updates for bug fixes, enhancements, and game-specific features. Likewise, with some manual effort from the user, smart phones and even cars can have their firmware updated to get bug fixes and new features (or third-party hacks).

Counterargument: Most machines don’t update their software anywhere close to “automatically.” And none of those software updates actually fix physical problems. Software updates also require a minimal subset of the system to be operational, which is not always the case. The famous Red Ring of Death on early Xbox 360 units could not be fixed except by replacing hardware. You might be able to flash your car’s engine control unit with new software, but that won’t fix mechanical parts that are already broken. And so on.

Another argument: Many programs and machines can “fail gracefully.” This phrase comforts a user about as much as the phrase “controlled descent into the terrain” comforts an airplane passenger. Still, it’s the minimum bar our contraptions should aim for. For example, if the software fails in your car, it should not default to maximum throttle, and preferably the car would still be able to limp to the nearest garage just in case your cell phone is dead. Another example: I expect my laptop to warn me, and then shut down, if the internal temperature is too high, as opposed to igniting the battery into a fireball.
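As a toy illustration of that last example (my own sketch, with made-up thresholds and stubbed-in stand-ins for the platform hooks read_temperature_celsius() and request_shutdown()), a thermal watchdog that escalates from a warning to an orderly shutdown looks roughly like this:

```cpp
#include <iostream>

// Hypothetical platform hooks, stubbed here so the sketch compiles; in a real
// system they would come from the firmware or the OS.
static double read_temperature_celsius() { return 88.0; }          // pretend sensor
static void   request_shutdown() { std::cout << "shutting down\n"; }

// "Fail gracefully": warn first, then perform an orderly shutdown well before
// the hardware reaches the point of igniting the battery.
static void thermal_watchdog_tick()
{
    const double warn_threshold_c     = 85.0;   // made-up numbers, for illustration
    const double shutdown_threshold_c = 95.0;

    const double temp = read_temperature_celsius();
    if (temp >= shutdown_threshold_c)
        request_shutdown();
    else if (temp >= warn_threshold_c)
        std::cerr << "warning: internal temperature " << temp << " C\n";
}

int main() { thermal_watchdog_tick(); }
```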

The extreme solution to our modern mechatronic woes is to turn everything into software. If we made our machines out of programmable matter or nanobots that might be possible. Or we could all move into virtual realities, in which we have hooks for the meta—so a software update would actually update the code and data used to generate the representation of a machine (or any object) in our virtual world.

However, even if those technologies mature, there won’t necessarily be a single one that dominates or becomes ubiquitous. A nearer-term solution, one that could be integrated into current culture, would be a drop-in replacement that uses existing infrastructure.

Some ideas that come close:

1. The device fixes itself without any external help. This has the shortcoming that it might be too broken to fix itself, or might not realize it’s broken. In some cases, we already have this in the form of redundant systems as used in aircraft, the Segway, etc.

2. Software updating (via the Internet) combined with 3D printing machines: the 3D printers would produce replacement parts. However, the printer of course needs raw material, but supplying it could be as easy as putting paper in a printer. Perhaps in the future, that raw printer material will become a kind of basic utility, like water and Internet access.

3. Telepresence combined with built-in repair arms (aka “waldoes”). Many companies are currently trying to productize office-compatible telepresence robots. Doctors already use teleoperated robots such as the da Vinci system to perform remote, minimally invasive surgery. Why not operate on machines? How to embed this into a room and/or within a machine is another—quite major—problem. Fortunately, with the miniaturization of electronics, there might be room for new repair devices embedded in some products. And certainly not all products need general-purpose manipulator arms; they could be machine-specific devices, designed to repair the highest-probability failures.

4. Autonomous telepresence combined with built-in repair arms: A remote server connects to the local machine via the Internet, using the built-in repair arms or device-specific repair mechanism. However, we might also need an automatic meta-repair mechanism. In other words, the fixer itself might break, or the remote server might crash. Now we seem to enter an endless recursion. However, it need not go on infinitely: it’s just a matter of having enough layers of self-repair to achieve some threshold of reliability (a rough sketch of that arithmetic follows the list).

5. Nothing is ever repaired, just installed. A FedEx robot appears within fifteen minutes with a replacement device and for an extra fee will set it up for you.
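For idea #4, here is my own back-of-the-envelope calculation, assuming each repair layer fails independently with the same probability (a big simplification). It shows why the recursion need not be infinite:

```cpp
#include <cmath>
#include <iostream>

// If each repair layer fails with probability p, then k independent layers all
// fail together with probability p^k. Find the smallest integer k with
// p^k <= target.
int layers_needed(double per_layer_failure_prob, double target_failure_prob)
{
    return static_cast<int>(
        std::ceil(std::log(target_failure_prob) / std::log(per_layer_failure_prob)));
}

int main()
{
    // If each layer fails 5% of the time and we want less than a one-in-a-million
    // chance that a fault goes unrepaired, five layers are enough (0.05^5 is about 3e-7).
    std::cout << layers_needed(0.05, 1e-6) << " layers\n";
}
```

A handful of layers already buys a very low chance of total failure, so the meta-repair chain can stop after a few levels.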
