On Rehacktoring

Posted in programming on May 6th, 2014 by Samuel Kenyon

For at least a year now, I’ve been using the term “rehacktor,” a portmanteau of “refactor” and “hack.” Although I came up with it on my own, I was not surprised that googling it resulted in many hits. Perhaps we are witnessing the convergent evolution of an urgently needed word.

My definition was at first a tongue-in-cheek descriptor in VCS commits for refactoring tasks that I was not pleased about. However, upon reflection, I think the term, and the situation that makes it necessary, are important.

But first, how are others using this term?

Other Definitions

One “intertubeloper” (a term I just coined) tweeting as WizOne Solutions has tried to establish this unsatisfactory definition:

Rehacktoring (n.) – The act of reaching a level of hackedness so great that the resultant code is elegant and sophisticated.

Something a bit more sensible is the concept of rehacktoring as refactoring without tests (for instance a Twitter search of “rehacktoring” will get some results along those lines).

My one-minute research bout also found a recent blog post by Katrina Owen which contains the string “Refactoring is Not Rehacktoring.” Her article paradoxically offers a meaning by not defining “rehacktoring.” She explains, through several strategies, one way to refactor: change the code in baby steps. Each step results in code that still passes the tests. Each intermediate step may be ugly and redundant, but the tests will still pass—if they don’t, you revert and retry.

I’ve been using this baby-step refactoring strategy for a long time, having come up with it on my own. But as I read Owen’s post I realized that if one does not follow through with the full refactoring, one might end up with a ridiculous state of the code which is not necessarily any better than its pre-refactoring state. In that sense, “rehacktoring” can be a retroactive term to describe an abandoned refactoring.
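
To make the baby-step idea concrete, here is a minimal C++ sketch of one intermediate step (the names are illustrative, not from Owen’s post): a new Pricing class is introduced, but it merely delegates to the old free function, so behavior is unchanged and the existing tests still pass. It is redundant and ugly, but safe—and stopping here for good is exactly the abandoned-refactoring sense of rehacktoring.

    #include <cassert>

    // Old code: the pricing logic still lives in a free function,
    // which remains the single source of truth at this step.
    double legacyPrice(double base, int quantity) {
        double total = base * quantity;
        if (quantity > 10) total *= 0.9;  // bulk discount
        return total;
    }

    // Baby step: introduce the new seam, but delegate to the old code
    // so behavior is identical. The logic migrates here in a later step.
    class Pricing {
    public:
        double total(double base, int quantity) const {
            return legacyPrice(base, quantity);
        }
    };

    int main() {
        Pricing pricing;
        // The same tests that passed before this step still pass after it.
        assert(pricing.total(5.0, 2) == legacyPrice(5.0, 2));
        assert(pricing.total(5.0, 20) == legacyPrice(5.0, 20));
        return 0;
    }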

Owen meant for “rehacktoring” to mean unsafe code changes (I think). Both that meaning and abandoned refactoring should be valid variations.

Choices

Aside from those notions, I propose another valid meaning: any refactoring effort which is a mini death march. In other words, the refactoring has a high chance of failure due to schedule, resources, etc. I would guess a common trigger of rehacktoring is a manager demanding refactoring of crappy code that is critical to a deployed application.

But refactoring is not the only answer. There are other options, regardless of what John Manager or Joe Principal claims. Some very legitimate alternatives include, but are not limited to:

  • Don’t refactor. Spend time on other/new functionality.
  • Rewrite the class/module/application from scratch.
  • Stretch the refactoring out to a longer time frame, so that everyone involved is still spending most of their effort on new functionality that users care about.

The proposal to rewrite a module or application from scratch will scare those of weaker constitution. But it does work in many circumstances. Keep in mind that the first crack at doing something new might be the necessary learning phase to do it right the second time. A manager, or your own mind, might convince you that refactoring is quicker and/or lower risk than redoing it. But that is not always the case, especially if you are more motivated to make something new than to tinker with something old. And I have seen the rewrite approach work in the wild with two different programmers: an intern wrote working version 1, and then someone else (even another intern) wrote version 2.

Rehacktoring may be the wrong thing to do. It is especially the wrong thing to do if it causes psychological depression in the programmer. After all, who wants to waste their life changing already-working code from bad to semi-bad? Based on observations (albeit not very scientific), I suspect that programmers are particularly ill-suited to refactoring code that they did not originally write. They have no emotional connection to that code. And that makes the alternate solutions listed above all the more appealing for the “soft” aspect of a software team.


Minor Sim Tribulations and Second-Order Cybernetics

Posted in artificial intelligence, programming on October 2nd, 2013 by Samuel Kenyon

As part of my effort towards developmental systems for cognitive architectures, I’ve been trying to beat some sense into an Alife simulator. I’ve used the Breve simulator in the past, and although it’s no longer supported, it still works fine. My existing Anibots code uses Breve [1]—which in turn uses Open Dynamics Engine—as the physical simulation, and I wanted to expand that for some developmental experiments.

The old Anibots were spheres. Single-shape bodies are trivial in the Breve world, so rolling ball robots were easy. One might say that rolling spheres are too fantastic—but there are in fact multiple ways of implementing real-world spherical robots; I studied that literature a bit back in 2004 for my Northeastern University final “capstone” project, which involved spherical robots.

What I wanted to do presently was start out with a single bot and consider that a module (the module is the blue sphere in the screenshot).

concept with one module

The next step in development would grow that single module by adding another similar module, and so on, one module at a time. In the simulation I wanted to connect the rolling spheres together somehow, such as with some rigid body struts forcing a fixed distance between the bots.
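
Here is a rough sketch of how that strut idea might look in raw ODE (the engine underneath Breve). This is my illustrative guess, not Anibots or Breve code: each sphere is attached by a ball-and-socket joint, anchored at its own center, to a shared strut body, so each sphere can still spin freely (and thus roll) while the centers stay a fixed distance apart.

    #include <ode/ode.h>

    // Helper (my naming, not ODE's): a unit-ish sphere body at x.
    static dBodyID makeModule(dWorldID world, dReal x) {
        dBodyID body = dBodyCreate(world);
        dMass m;
        dMassSetSphere(&m, 1.0 /*density*/, 0.5 /*radius*/);
        dBodySetMass(body, &m);
        dBodySetPosition(body, x, 0, 0.5);
        return body;
    }

    int main() {
        dInitODE();
        dWorldID world = dWorldCreate();
        dWorldSetGravity(world, 0, 0, -9.81);

        dBodyID a = makeModule(world, 0.0);
        dBodyID b = makeModule(world, 2.0);  // centers 2 units apart

        // The strut: a light box-shaped body between the two spheres.
        dBodyID strut = dBodyCreate(world);
        dMass sm;
        dMassSetBox(&sm, 0.1, 2.0, 0.1, 0.1);
        dBodySetMass(strut, &sm);
        dBodySetPosition(strut, 1.0, 0, 0.5);

        // Ball joints anchored at each sphere's center: the sphere can
        // rotate freely about its center (so it still rolls), but the
        // center itself is pinned to the strut end.
        dJointID j1 = dJointCreateBall(world, 0);
        dJointAttach(j1, a, strut);
        dJointSetBallAnchor(j1, 0.0, 0, 0.5);

        dJointID j2 = dJointCreateBall(world, 0);
        dJointAttach(j2, b, strut);
        dJointSetBallAnchor(j2, 2.0, 0, 0.5);

        // ... step with dWorldStep(world, dt); collision handling omitted ...
        dWorldDestroy(world);
        dCloseODE();
        return 0;
    }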

This would be part of the context to coerce new behaviors to be learned for the goal, which at first would be pushing the box as far as possible. As more modules are connected together, different strategies will emerge for how to move the entire multi-bot ensemble for the best outcome.

concept (with 3 modules)

Easy Things are Hard (or at Least Annoying)

This simulation environment doesn’t make it easy to just tie moving bodies together. Obviously the collision detection would have to be turned off if it were done as the above screenshot indicates, but even then there isn’t a simple or obvious solution in Breve. And again, if it were implemented in the real world it would be possible, in this case as inverse mouse-ball drive robots connected together with rigid links.

There are many possible solutions, for instance manually calculating the bot positions—but then why am I using this simulator? I might as well make my own. So, perhaps sooner than I had anticipated, I may have to develop a Breve-like alternative sim world, perhaps using Panda3D. Or maybe I will use Bullet and Ogre3D, both of which I’ve used in the past but not at the same time.

Another cool idea I thought of to bind the bots together is to use an external object, kind of like the triangular rack used to set up billiard balls—except this fence would expand to fit any number of balls from 1 to n. I started to implement it as a rectangle, but ran into the problem that arbitrary rigid body shapes aren’t free in Breve. So I started making the fence out of links and revolute joints instead; a loop of those, however, was not stable.

Concept using a closed fence

There are other options, like trying to make a custom rigid body shape which would have to be scaled to fit around the number of modules.

Whatever the case, it’s time that could be viewed as “wasted.”

The Curse and Second-Order Cybernetics

The curse of AI work is to waste time on implementing tools. And if one is doing AI with embodied robots, the curse also involves making robotic hardware. All this “other” work subverts focusing on the juicy stuff.

However, my view on this has changed a bit over the years. As second-order cybernetics seems to say, the researcher is inherently part of the system [2]:

Occurring between observer and observed, interaction is the primitive from which arises what-can-be-known.

And…

Not merely passive, observers are participants—in a particular sense, that they themselves have direct impact on what they see, on how they observe.

There will always be the design of the design, and the control of the control. Recognizing that is probably better than ignoring it.

References

  1. J. Klein, “BREVE: a 3D Environment for the Simulation of Decentralized Systems and Artificial Life,” Proceedings of Artificial Life VIII, the 8th International Conference on the Simulation and Synthesis of Living Systems, 2002.
  2. P. A. Pangaro, “New Order from Old: The Rise of Second-Order Cybernetics and Implications for Machine Intelligence,” 1988/2002.

Sherlock Holmes, Master of Code

Posted in artificial intelligence, programming on March 28th, 2013 by Samuel Kenyon

What if I told you that fictional mysteries contain practical real-world methodologies? I have pointed out the similarities between detectives solving mysteries and software debugging before. My day job of writing code often involves fixing bugs or solving bizarre cases of bad behavior in complex systems.

In a new book called Mastermind: How to Think Like Sherlock Holmes, Maria Konnikova also compares the mental approaches of a detective to non-detective thinking.

But Konnikova has leaped far beyond my own detective model by creating a metaphorical framework for mindfulness, motivation, and deduction, all tied to the fictional world of Sherlock Holmes. This framework is a convenient place to investigate cognitive biases as well. And of course her book discusses problem solving in general, using the crime mysteries of Holmes for examples.

Mastermind book cover

The core components of the metaphor are:

  • The Holmes system.
  • The Watson system.
  • The brain attic.

Both systems are modes of human thinking, and you can probably imagine circumstances where you operated using a Watson system, and others where you used a Holmes system to some degree. Most people are probably more like Watson, who is intelligent but mentally lazy.

Watson

The Holmes system is the aspirational, hyper-aware, self-checking system that’s not afraid to take the road less traveled in order to solve the problem.

Holmes

The brain attic metaphor comes in as a way to organize knowledge purposely instead of haphazardly. The Holmes system actively chooses what to store in its attic, whereas the Watson system just lets memory happen without much management.

Bias

Here’s an excerpt about one of the many bias-related issues discussed, where the “stick” is the character James Mortimer’s walking stick, which has been left behind:

Hardly has Watson started to describe the stick and already his personal biases are flooding his perception, his own experience and history and views framing his thoughts without his realizing it. The stick is no longer just a stick. It is the stick of the old-fashioned family practitioner, with all the characteristics that follow from that connection.

When I programmed military robots and human-robot interfaces for iRobot, I often received feedback and problem reports as directly as possible from the field and/or from testers. I encouraged this because it was great from a user experience point of view, but I had to develop filters and Sherlockian methods in order to maintain sanity and actually solve the issues.

Just trying to comprehend what was wrong at all was sometimes a big hurdle. A tester or field service engineer might report a bug in the manner of his or her personal theory, which—like Watson’s—was heavily biased, and then I had to extract bits of evidence in order to come up with my own theories, which might or might not be the same. Or in some cases the people closest to the field reported the issue and data objectively, but by the time it went through various Watsons, irrational assumptions about the cause had been added. Before you can figure out the problem, you have to figure out the real problem description and what data you actually have.

As Konnikova writes:

Holmes, on the other hand, realizes that there is always a step that comes before you begin to work your mind to its full potential. Unlike Watson, he doesn’t begin to observe without quite being aware of it, but rather takes hold of the process from the very beginning—and starting well before the stick itself.

And the walking stick example isn’t just about the removal of bias. It’s also about increased mindfulness.

Emotions

Emotional bias comes in because it can determine what observations you are even able to access consciously, let alone remember in an organized way. For instance:

To observe the process in action, let’s revisit that initial encounter in The Sign of Four, when Mary Morstan, the mysterious lady caller, first makes her appearance. Do the two men see Mary in the same light? Not at all. The first thing Watson notices is the lady’s appearance. She is, he remarks, a rather attractive woman. Irrelevant, counters Holmes. “It is of the first importance not to allow your judgment to be biased by personal qualities,” he explains. “A client is to me a mere unit, a factor in a problem. The emotional qualities are antagonistic to clear reasoning…”

Emotions are a very important part of human minds; they evolved because of their benefits. I often talk about emotions and artificial intelligence. However, in some very specific contexts, the dichotomy of emotion vs. reason becomes real. Konnikova says:

It’s not that you won’t experience emotion. Nor are you likely to be able to suspend the impressions that form almost automatically in your mind. But you don’t have to let those impressions get in the way of objective reasoning.

Of course, even in the context of reasoning about the solution to a problem, one’s mind is still an emotional system, and that system is providing some benefits such as, perhaps, motivation to solve the problem and keep plugging away at it.

Feedback

Maria Konnikova at Harvard Book Store

Today at the Harvard Book Store, Maria Konnikova gave a presentation about the book Mastermind. I attended, and I asked a question about whether certain professions lent themselves to the Sherlockian methods better given the parallels I had drawn to software debugging in my own experience.

Konnikova’s reply was that any profession with good feedback would be good for the Holmes system approach. She specifically mentioned doctors and bartenders.

Feedback does seem to be important for many systematic things—so she’s probably right. I suppose what makes feedback particularly important to the Sherlockian mindfulness approach is the observation of one’s own mind. And there is also a feedback aspect when one is solving mysteries—the verification or disproving of hypotheses.

Conclusion

Anyway, I won’t try to summarize the whole book. I highly enjoyed it and found many parallels to my personal approach to mental life, and especially to the mystery solving of software systems, including psychological flow and creativity.


Comparison: ChainLocker vs. Hierarchical Mutexes

Posted in programming on March 8th, 2013 by Samuel Kenyon

In “Concurrent Programming with Chain Locking,” Gigi Sayfan presents a C# class demonstrating chain locked operations on a hierarchical data structure. This reminded me of lock hierarchies described by Anthony Williams in the book C++ Concurrency in Action.

To take a step back for a moment, the overall goal is to create multithreaded code which doesn’t cause deadlocks, race conditions, etc. Although it may seem like a confusion of metaphors, lock hierarchies are a type of hand-over-hand locking, which is basically defined lock ordering. I think it would be fair to call a “chain” a particular path of locking through a hierarchy. Defining lock ordering is what you do if you can’t do the better idea, which is to acquire two or more locks in a single operation, for instance with the C++11 std::lock() function.
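
As a concrete illustration of that single-operation approach, here is a minimal C++11 sketch (my own example, not code from Sayfan’s article or Williams’s book):

    #include <mutex>

    struct Node {
        std::mutex m;
        int value = 0;
    };

    // Swap the values of two nodes without risking deadlock:
    // std::lock acquires both mutexes as one deadlock-free operation,
    // so no global lock ordering has to be defined by the programmer.
    void swapValues(Node& a, Node& b) {
        std::lock(a.m, b.m);
        std::lock_guard<std::mutex> lk1(a.m, std::adopt_lock);
        std::lock_guard<std::mutex> lk2(b.m, std::adopt_lock);
        int tmp = a.value;
        a.value = b.value;
        b.value = tmp;
    }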

As Herb Sutter pointed out in his article “Use Lock Hierarchies to Avoid Deadlock,” you may already have layers in your application (or at least the data in a certain context). This can be taken advantage of when making the software concurrency-friendly. The general idea is that a layer does not access code in layers above it. For mutexes, this means that a layer cannot lock a mutex if it already holds a mutex from a lower layer.
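
A condensed sketch in the spirit of the hierarchical_mutex Williams presents (this is my paraphrase, not the book’s exact code): each mutex carries a layer value, and a thread-local variable tracks the lowest layer the current thread currently holds; attempting to lock a mutex at the same or a higher layer throws.

    #include <climits>
    #include <mutex>
    #include <stdexcept>

    // A mutex living at a numbered layer; higher numbers are higher layers.
    // A thread may only lock a mutex at a strictly lower layer than the
    // last mutex it acquired, which rules out cross-layer deadlock cycles.
    class hierarchical_mutex {
        std::mutex internal;
        const unsigned long level;       // this mutex's layer
        unsigned long previous = 0;      // thread's layer value before locking
        static thread_local unsigned long current;  // lowest layer held
    public:
        explicit hierarchical_mutex(unsigned long lvl) : level(lvl) {}
        void lock() {
            if (level >= current)
                throw std::logic_error("mutex hierarchy violated");
            internal.lock();
            previous = current;
            current = level;
        }
        void unlock() {
            current = previous;  // assumes LIFO unlock order
            internal.unlock();
        }
    };
    thread_local unsigned long hierarchical_mutex::current = ULONG_MAX;

    // Usage: high-layer then low-layer is fine; the reverse throws.
    hierarchical_mutex high(10000), low(5000);
    void fine()  { std::lock_guard<hierarchical_mutex> a(high), b(low); }
    void fatal() { std::lock_guard<hierarchical_mutex> a(low), b(high); }

Because it provides lock() and unlock(), such a mutex works with the standard std::lock_guard; the ordering rule is enforced at runtime rather than by convention alone.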
