2014: Postmortem

Posted in film, meta on October 20th, 2015 by Samuel Kenyon

Oh no! I forgot to post a personal postmortem for the year 2014 like I did for the previous year! Oh well, here it is ten months late.

What Went Right

  • Started a new day job at a biotech company called GnuBIO, now the skunkworks division of Bio-Rad. Essentially I write robotics code for microfluidics inventions that will ultimately contribute to human health improvement via diagnostics.
  • Made a short film based on my screenplay Enough to be Dangerous.
  • Got married to Emily. On Halloween. Proposal happened…on April Fool’s Day.
  • Visited Istanbul, Turkey.
  • Visited Philadelphia, including the Mütter Museum.
  • Moved to a smaller but better located apartment in Cambridge.

wedding dance to dubstep

We had a few drinks with AI researcher Eray Özkural in Istanbul:

Eray Özkural, Samuel H. Kenyon, and Emily Durrant in Istanbul

The Movie

I was able to kick my film Enough to be Dangerous into gear fairly quickly once I decided it should be made. Aside from writing and producing it, I was also the lead actor.

Enough to be Dangerous

Making a film has all the same problems that a startup company does, although the boundaries between the “company” and the “product” can be drawn differently. In this case my new company, called at the time “Subterfugue Films” (a horrible portmanteau), was vaporous; all the resources went into the “product,” which is the film itself. Recently (in 2015) I renamed my film company Imaginary Danger Productions.

Enough to be Dangerous was finished under budget, although I went over budget on film festival costs. It was accepted to various film festivals (see the official page for more info), so it was successful in that regard. I’d love to remake it as a full-length feature, though, as it really is uncomfortably compressed to fit into 35 minutes.

What Went Wrong


I co-founded a tech company called Glug, but within a few months decided to leave it (the company is now dissolved).

This startup turned out not to be ideal for me for various reasons, so I quit. Also at that time I realized I’d rather put all that startup energy into making my film.

The most important lesson learned is not to spend any money on incorporation or legal fees before you know whether the product or company will even launch for real. Another lesson: a proposed new tech product should be able to garner a following of hundreds, if not a thousand, people (e.g. on mailing lists or social networks) even if those first thousand people aren’t the actual early adopters; it’s a crude measure of general interest.

Image Credits
– wedding photo by Tanya Rose



The Ideal Film AI

Posted in film on February 18th, 2015 by Samuel Kenyon

This is prompted by Ben Bogart‘s question “What do you consider the most seminal representations of AI in cinema of all time?”

I think the best is yet to come. Ideally an AI (Artificial Intelligence) in a film would have two elements:

  1. The alien aspect: It’s not a human or some other animal (although it can be very similar).
  2. Some connection with humans (or a human), e.g. humanity created them for a job, or this particular human created this particular AI for some reason, etc. This connection is what separates the screen character from just another sci-fi alien (extraterrestrial, previously unknown terrestrial monster, et cetera).

Autómata (dir. Gabe Ibáñez)


Automata and Tron: Legacy both make meager attempts to show AIs emerging, evolving, and trying to figure out their own way, one that’s not quite the same as the human way.

The robot R2D2 (Star Wars) superficially meets these ideals: we know it’s intelligent, yet it doesn’t speak English or get subtitles. Its connection with humans, however, is not really used in any interesting way in the film (we don’t analyze the slavery of robots in Star Wars …at least I don’t). And if the behind-the-scenes stories are true, R2D2 and C3PO are just metal copies of the peasants from Akira Kurosawa’s The Hidden Fortress.


The Hidden Fortress (dir. Akira Kurosawa)



Cognitive Abstraction Manifolds

Posted in artificial intelligence, philosophy on July 19th, 2014 by Samuel Kenyon

A few days ago I started thinking about abstractions whilst reading Surfaces and Essences, a recent book by Douglas Hofstadter and Emmanuel Sander. I suspect efforts like Surfaces and Essences, which traverse vast and twisted terrains across cognitive science, are probably underrated as scientific contributions.

But first, let me briefly introduce the idea-triggering tome. Surfaces and Essences focuses on the role of analogies in thought. It’s kind of an über Metaphors We Live By. Hofstadter and his French counterpart Sander are concerned with categorization, which they hold to be essentially the mind’s primary way of creating concepts and of remembering. And each category is made entirely from a sequence of analogies. These sequences generally get bigger and more complicated as a child develops into an adult.

The evidence is considerable, but it is based primarily on language as a window into the mind’s machinery, an approach that comes as no surprise to those who know Steven Pinker’s book The Stuff of Thought: Language as a Window into Human Nature. There are also some subjective experiences used as evidence, namely examinations of what categories and mechanisms allowed a bizarre memory to surface in a given situation. (You can get an intro in this video recording of a Hofstadter presentation.)

Books like this (I would include in this spontaneous category Marvin Minsky’s books Society of Mind and The Emotion Machine) offer insight into psychology and what I would call cognitive architecture. They appeal to some artificial intelligence researchers and aficionados, but they don’t readily lend themselves to any easy (or fundable) computer implementations. And they usually don’t have any single or easy mapping to other cognitive science domains such as neuroscience. Partly, the practical difficulty is that of needing full systems. But more to the point of this little essay, they don’t even map easily to other sub-domains nearby in psychology or artificial intelligence.

Layers and Spaces

One might imagine that a stack of enough layers at different levels will provide a full model and/or implementation of the human mind. Even if the layers overlap, one just needs full coverage—and small gaps presumably will lend themselves to obvious filler layers.

For instance, you might say one layer is the Surfaces and Essences analogy engine, and another layer deals with consciousness, another with vision processing, another with body motion control, and so on.



But it’s not that easy (I know, I know…that’s pretty much the mantra of skeptical cognitive science).

I think a slice of abstraction space is probably more like a manifold or some other arbitrary n-dimensional space. And yes, this is an analogy.

These manifolds could be thought of (yay, another analogy!) as 3D blobs, which on this page will be represented as a 2D pixmap (see the lava lamp image). “Ceci n’est pas une pipe” (“This is not a pipe”).

blobs in a lava lamp

Now, what about actual implementations or working models, as opposed to theoretical models? Won’t there be additional problems of interfaces between the disparate manifolds?

Perhaps we need a class of theories whose abstraction space is in another dimension which represents how other abstraction spaces connect. Or, one’s model of abstraction spaces could require gaps between spaces.

Imagine blobs in a lava lamp, but they always repel to maintain a minimal distance from each other. Interface space is the area in which theories and models can connect those blobs.

I’m not saying that nobody has come up with interfaces at all in these contexts. We may already have several interface ideas, recognized as such or not. For instance, some of Minsky’s theories that fall under the umbrella of Society of Mind are about connections. And maybe there are more abstract connection theories out there that can bridge gaps between entirely different theoretical psychology spaces.


Recently Gary Marcus bemoaned the lack of good meta-theories for brain science:

… biological complexity is only part of the challenge in figuring out what kind of theory of the brain we’re seeking. What we are really looking for is a bridge, some way of connecting two separate scientific languages — those of neuroscience and psychology.

At theme level 2 of this essay, most likely these bridges will be dependent on analogies as prescribed by Surfaces and Essences. At theme level 1, perhaps these bridges will be the connective tissues between cognitive abstraction manifolds.

Image Credits:

  1. Jorge Konigsberger
  2. anthony gavin


On Rehacktoring

Posted in programming on May 6th, 2014 by Samuel Kenyon

For at least a year now, I’ve been using the term “rehacktor,” a portmanteau of “refactor” and “hack.” Although I came up with it on my own, I was not surprised that googling it resulted in many hits. Perhaps we are witnessing the convergent evolution of an urgently needed word.

My definition was at first a tongue-in-cheek descriptor in VCS commits for refactoring tasks that I was not pleased about. However, upon reflection, I think the term, and the situation that makes it necessary, are important.

But first, how are others using this term?

Other Definitions

One “intertubeloper” (a term I just coined) tweeting as WizOne Solutions has tried to establish this unsatisfactory definition:

Rehacktoring (n.) – The act of reaching a level of hackedness so great that the resultant code is elegant and sophisticated.

Something a bit more sensible is the concept of rehacktoring as refactoring without tests (for instance a Twitter search of “rehacktoring” will get some results along those lines).

My one-minute research bout also found a recent blog post by Katrina Owen which contains the string “Refactoring is Not Rehacktoring.” Her article paradoxically offers a meaning by not defining “rehacktoring.” She explains, via some strategies, how one way to refactor is to change the code in baby steps. Each step results in code that still passes the tests. Each intermediate step may be ugly and redundant, but the tests will still pass; if they don’t, you revert and retry.
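To make that concrete, here is a minimal sketch in Python of one such baby step. This is my own hypothetical example, not code from Owen’s article; the function names (total_price, subtotal) and the toy test are invented for illustration. The old inline logic and a newly extracted helper coexist for one commit, redundantly and uglily, while the same test keeps passing.

    # Hypothetical baby-step refactoring: extracting a subtotal helper.
    # At this intermediate step the old and new code paths coexist;
    # it is ugly and redundant, but the test below still passes.

    def subtotal(items):
        """Newly extracted helper (the end goal of the refactoring)."""
        return sum(price * qty for price, qty in items)

    def total_price(items, tax_rate):
        # Old inline logic, deliberately kept alive for this step.
        old_total = sum(price * qty for price, qty in items) * (1 + tax_rate)
        # New path that uses the extracted helper.
        new_total = subtotal(items) * (1 + tax_rate)
        # Both paths must agree before the old one is deleted in the
        # next baby step; if they diverge, revert and retry.
        assert abs(old_total - new_total) < 1e-9
        return new_total

    # The existing "test" that must stay green at every intermediate step.
    if __name__ == "__main__":
        cart = [(10.0, 2), (5.0, 1)]
        assert abs(total_price(cart, 0.1) - 27.5) < 1e-9
        print("still passing at this intermediate step")

If the next step deletes the old inline path, this was ordinary refactoring; if the effort stops here and the redundant state ships, you get the abandoned-refactoring flavor of “rehacktoring” discussed below.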

I’ve been using this baby-step refactoring strategy for a long time, having come up with it on my own. But as I read Owen’s post I realized that if one does not follow through with the full refactoring, one might end up with a ridiculous state of the code which is not necessarily any better than its pre-refactoring state. In that sense, “rehacktoring” can be a retroactive term to describe an abandoned refactoring.

Owen meant for “rehacktoring” to mean unsafe code changes (I think). Both that meaning and abandoned refactoring should be valid variations.


Aside from those notions, I propose another valid meaning: any refactoring effort which is a mini death march. In other words, the refactoring has a high chance of failure due to schedule, resources, etc. I would guess a common trigger of rehacktoring is a manager demanding refactoring of crappy code that is critical to a deployed application.

But refactoring is not the only answer. There are other options, regardless of what John Manager or Joe Principal claims. Some very legitimate alternatives include, but are not limited to:

  • Don’t refactor. Spend time on other/new functionality.
  • Rewrite the class/module/application from scratch.
  • Stretch the refactoring out to a longer time frame, so that everyone involved is still spending most of their effort on new functionality that users care about.

The proposal to rewrite a module or application from scratch will scare those of weaker constitution. But it does work in many circumstances. Keep in mind that the first crack at doing something new might be the necessary learning phase to do it right the second time. A manager, or your own mind, might convince you that refactoring is quicker and/or lower risk than redoing it. But that is not always the case, especially if you are more motivated to make something new than to tinker with something old. And I have seen the rewrite approach work in the wild with two different programmers: an intern writes working version 1, and then someone else (even another intern) writes version 2.

Rehacktoring may be the wrong thing to do. It is especially the wrong thing to do if it causes psychological depression in the programmer. After all, who wants to waste their life changing already-working code from bad to semi-bad? Based on observations (albeit not very scientific), I suspect that programmers are particularly ill-suited to refactoring code that they did not originally write. They have no emotional connection to that code. And that makes the alternate solutions listed above all the more appealing for the “soft” aspect of a software team.
