Heterarchies and Society of Mind’s Origins

Posted in artificial intelligence on February 4th, 2014 by Samuel Kenyon

Ever wonder how Society of Mind came about? Of course you do.

One of the key ideas of Society of Mind [1] is that at some range of abstraction levels, the brain’s software is a bunch of asynchronous agents. Agents are simple—but a properly organized society of them results in what we call “mind.”

agents

The book Society of Mind includes many sub-theories of how agents might work, structures for connecting them together, memory, and so on. Although Minsky mentions some of the development points that led to the book, he makes no explicit references to old papers. The book is copyrighted “1985, 1986.” Rewind to 1979, “long” before I was born. In the book Artificial Intelligence: An MIT Perspective [2], there is a chapter by Minsky called “The Society Theory of Thinking.” In a note, Minsky summarizes it as:

Papert and I try to combine methods from developmental, dynamic, and cognitive psychological theories with ideas from Artificial Intelligence and computational theories. Freud and Piaget play important roles.

OK, that shouldn’t be a surprise if you read the later book. But what about heterarchies? In 1971 Patrick Winston described heterarchical organization as [3]:

An interacting community of processes, some narrow experts, others broad generalists, and still others in the role of critics.

tangled

“Heterarchy” is a term that many attribute to Warren McCulloch in 1945, based on his neural research. Although it may have been abandoned in AI, the concept found success in anthropology (according to the intertubes). It is important to note that a heterarchy can be viewed as a parent class of hierarchies, and a heterarchy can contain hierarchies.
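That parent-class relationship can be sketched with a toy check (my own illustration; the graphs and names are invented for the example). A hierarchy is a tree in which every node has at most one parent, while a heterarchy permits multiple parents and even cycles, so every hierarchy is a valid heterarchy but not vice versa:

```python
# A hierarchy is a tree: every node has at most one parent.
# A heterarchy relaxes this: nodes may have several parents, and links may
# form tangles, so every hierarchy is trivially a heterarchy but not vice versa.

def is_hierarchy(edges):
    # edges: iterable of (parent, child) pairs.
    # Checks only the single-parent property (cycle detection omitted for
    # brevity), which is enough to distinguish these examples.
    parents = {}
    for parent, child in edges:
        if child in parents:   # a second parent breaks the tree property
            return False
        parents[child] = parent
    return True

tree = [("root", "a"), ("root", "b"), ("a", "c")]
tangled = tree + [("b", "c")]  # "c" now answers to two bosses

print(is_hierarchy(tree))     # -> True
print(is_hierarchy(tangled))  # -> False
```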

In 1973 the student Eugene Freuder, who later became well known for constraint-based reasoning, reported on his “active knowledge” vision thesis system, called SEER [4]. In one of the funniest papers I’ve read, Freuder warns us that:

this paper will probably vacillate between cryptic and incoherent.

Nevertheless, it is healthy to write things down periodically. Good luck.

And later on that:

SEER never demands that much be done, it just makes a lot of helpful suggestion. A good boss.

This basic structure is not too hairy, I hope.

If you like hair, however, there are enough hooks here to open up a wig salon.

He refers to earlier heterarchy uses in the AI Lab, but says that they are isolated hacks, whereas his project is more properly a system designed to be a heterarchy, one which allows any number of hacks to be added during development. And this supposedly allows the system to make the “best interactive use of the disparate knowledge it has.”

This supposed heterarchical system:

  • “provides concrete mechanisms for heterarchical interactions and ‘institutionalizes’ and encourages forms of heterarchy like advice”
  • allows integration of modules during development (a user (the programmer) feature)
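Freuder’s actual mechanisms are in the memo; as a loose illustration of the general heterarchy flavor in Winston’s sense (narrow experts, a broad generalist, and a critic trading suggestions rather than commands), here is a minimal sketch. All the module names and numbers are invented for the example, not taken from SEER:

```python
# Minimal heterarchy sketch: experts, a generalist, and a critic exchange
# advice through a shared blackboard; no module commands another.

class Blackboard:
    def __init__(self):
        self.suggestions = []  # (source, hypothesis, confidence)

    def suggest(self, source, hypothesis, confidence):
        self.suggestions.append((source, hypothesis, confidence))

def edge_expert(scene, board):
    # Narrow expert: only knows about straight edges.
    if "straight edges" in scene:
        board.suggest("edge_expert", "block", 0.7)

def texture_expert(scene, board):
    # Narrow expert: only knows about surface texture.
    if "smooth" in scene:
        board.suggest("texture_expert", "block", 0.5)
    if "fuzzy" in scene:
        board.suggest("texture_expert", "cat", 0.6)

def critic(board):
    # Critic: prunes suggestions it finds implausible.
    board.suggestions = [s for s in board.suggestions if s[2] >= 0.5]

def generalist(board):
    # Broad generalist: weighs the surviving advice and picks an interpretation.
    totals = {}
    for _, hypothesis, confidence in board.suggestions:
        totals[hypothesis] = totals.get(hypothesis, 0.0) + confidence
    return max(totals, key=totals.get) if totals else None

board = Blackboard()
scene = {"straight edges", "smooth"}
for expert in (edge_expert, texture_expert):
    expert(scene, board)
critic(board)
print(generalist(board))  # -> block
```

Like SEER’s “helpful suggestions,” nothing here demands; the experts advise, the critic filters, and the generalist merely weighs what survives.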

One open question was parallelism, and whether it was actually better than serial methods. The MIT heterarchy thread eventually turned into Society of Mind, or at least that’s what Patrick Winston indicates in his section introduction [2]:

Minsky’s section introduces his theory of mind in which the basic constituents are very simple agents whose simplicity strongly affects the nature of communication between different parts of a single mind. Working with Papert, he has greatly refined a set of notions that seem to have roots in the ideas that formerly went by the name of heterarchy.

Society of Mind is highly cited but rarely implemented or tested. Reactive (aka behavioral) robotics architectures can be heterarchies, but they are either ignored by AI or relegated to the bottom of three-layer robot architectures. The concepts of modularity and parallel processing have been folded into general software engineering paradigms.
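As a reminder of what that reactive relative looks like, here is a minimal subsumption-style arbitration sketch (my own illustration, not any particular robot architecture’s code): behaviors are polled in priority order, and a higher layer’s output suppresses the layers below it.

```python
# Minimal subsumption-style arbitration: behaviors run "in parallel"
# (polled here), and a higher-priority behavior suppresses lower ones.

def wander(sensors):
    return "move_forward"  # lowest layer: always has an opinion

def avoid_obstacle(sensors):
    # Middle layer: only fires when an obstacle is near.
    return "turn_left" if sensors.get("obstacle_near") else None

def seek_charger(sensors):
    # Highest layer: only fires when the battery is low.
    return "go_to_charger" if sensors.get("battery_low") else None

# Ordered highest priority first; the first non-None output wins.
LAYERS = [seek_charger, avoid_obstacle, wander]

def arbitrate(sensors):
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"obstacle_near": True}))  # -> turn_left
print(arbitrate({}))                       # -> move_forward
```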

But I wonder if the heterarchy concepts for cognitive architectures were abandoned too quickly. The accidents of history may already have incorporated the best ideas from heterarchies into computer science, but I am not yet sure of that.

References

[1] M. Minsky. The Society of Mind. New York: Simon and Schuster, 1986, pp. 249-250.
[2] P.H. Winston & R.H. Brown, Eds., Artificial Intelligence: An MIT Perspective, vol. 1, MIT Press, 1979.
[3] P.H. Winston, “Heterarchy in the M.I.T. Robot.” MIT AI Memo Vision Flash 8, March 1971.
[4] E.C. Freuder, “Active Knowledge.” MIT AI Memo Vision Flash 53, Oct. 1973.


Image Credits

  1. Samuel H. Kenyon’s mashup of Magritte and Agent Smith of the Matrix trilogy
  2. John Zeweniuk

 


Evaluating Webots

Posted in robotics on February 4th, 2014 by Samuel Kenyon

I’m trying to find a better simulator than Breve for robots and 3D physics world creation.

I have been examining the Webots simulation environment. It seems pretty useful: I can write controllers in C++, and it comes with several robot models out of the box. I also like the scene graph (or “scene tree,” as they call it) approach for environments; they use VRML97, which is obsolete, but at least it’s a well-known standard. I only played with the interface enough to add a sphere and a box, but it seems good enough so far, and it is a lot easier than building environments completely programmatically and/or from scratch with raw data files. I have made 3D models from scratch with data files in the past, and it was not that efficient (compared to some ideal GUI) except for tweaking exact numbers.

They have some Nao (aka “NAO”) robot models, and it would be awesome to use one for mental development research. I’m thinking about affordances, and certainly the Nao, with its 25 degrees of freedom and lots of sensors, is more than sufficient for interesting affordances in real-world (or simulated 3D) environments. It may actually be overkill…

simulated Nao looks at a brick box

Not that I have access to a real Nao, although I programmed two little test scripts last year on an actual Nao using Choregraphe, a primarily visual programming tool. Webots can run a Nao server so you can hook Choregraphe up to the Webots simulation (Choregraphe has a simulator, but just of the robot model, not of environmental interactions). Unfortunately I couldn’t try this out, as it’s blocked for free users (the screenshot below shows an attempt to use the robot soccer world).

naoqisim denied!

And I just realized that if I write a Webots Nao controller, there’s no documentation or obvious way that I can see to learn the exact actuator names to pass to wb_robot_get_device(), and the demo doesn’t show all the motors (or maybe they haven’t implemented all the motors?). [Update: I have been informed that you can get the device tags from the Robot Window, which can be made visible from the menu (Robot -> Show Robot Window) or by double-clicking on the robot in the sim view. Not as good as a text list, but at least the info is there.] Making motion files manually would be a pain as well. Maybe I will end up making a simpler robot model from scratch.

The lack of access to the Supervisor is also making me question using this for my non-funded research. I might try running some experiments in it and just see how far I can go without a Supervisor.


Hyping Nonsense: What Happens When Artificial Intelligence Turns On Us?

Posted in culture, transhumanism on January 23rd, 2014 by Samuel Kenyon

The user(s) behind the G+ account Singularity 2045 made an appropriately skeptical post today about the latest Machines-versus-Humans “prediction,” specifically an article “What Happens When Artificial Intelligence Turns On Us” about a new book by James Barrat.

As S2045 says:

Don’t believe the hype. It is utter nonsense to think AI or robots would ever turn on humans. It is a good idea to explore in novels, films, or knee-jerk doomsday philosophizing because disaster themes sell well. Thankfully the fiction or speculation will never translate to reality because it is based upon a failure to recognize how technology erodes scarcity. Scarcity is the root of all conflict.

Smithsonian even includes a quote by the equally clueless Eliezer Yudkowsky:

In the longer term, as experts in my book argue, A.I. approaching human-level intelligence won’t be easily controlled; unfortunately, super-intelligence doesn’t imply benevolence. As A.I. theorist Eliezer Yudkowsky of MIRI [the Machine Intelligence Research Institute] puts it, “The A.I. does not love you, nor does it hate you, but you are made of atoms it can use for something else.” If ethics can’t be built into a machine, then we’ll be creating super-intelligent psychopaths, creatures without moral compasses, and we won’t be their masters for long.

In the G+ comments you can see some arguments about the evidence for or against the prediction. I would like to add a couple of arguments in support of Singularity 2045’s conclusion (though not necessarily endorsing his specific arguments):

  1. Despite “future shock” (before Kurzweil and Vinge there was Toffler) from accelerating change in certain avenues, most of these worries about machines-vs-humans battles are fictional because they assume a discrete transition point: before the machines appeared, and after. The only way that could happen is if there were a massive planetary invasion of intelligent robots from another planet. In real life, things happen over a period of time, with transitions and various arbitrary diversions and fads (e.g., because of politics)…despite any accelerating change.
  2. We have examples of humans living in partial cooperation, and simultaneously partial conflict, with other species. Insects outnumber us. Millions of cats and dogs live in human homes and get better treatment than the world’s poor and homeless. Meanwhile, crows and parrots are highly intelligent animals often living in symbiosis with humans…except when they become menaces.

If we’re going to map fiction to reality, Michael Crichton techno-thrillers are a bit closer to real technological disasters, which are local, specific incidents resulting from the right mixture of human error and coincidence (as happens in real life, for instance in nuclear reactor disasters). And sometimes those errors are far apart in time: somebody designs a control panel badly, which assists in a bad decision by an operator ten years later during an emergency.

And of course I’ve already talked about the Us-versus-Them dichotomy and the role of interfaces in human-robot technology in my paper “Would You Still Love Me If I Was A Robot?”

Addendum

I doubt we will have anything as clear-cut as an us-vs-them new species. And if we maintain civilization (e.g., not the anti-gay, anti-atheist, witch-hunting segments), then new variations would not be segregated or given fewer rights, and vice-versa: they would not segregate us or remove our human rights.

As far as I know, there is no such thing as a natural species on Earth that “peacefully coexists.” This may be the nature of the system, and it’s certainly easy to see in the evolutionary arms races constantly happening. Anyway, my point is that any appeal to nature or to the mythical peaceful caveman is not the right direction. The fact that humans can even imagine never-ending peace and utopia seems to indicate that we have started to surpass nature’s “cold equations.”


2013: Postmortem

Posted in meta on January 19th, 2014 by Samuel Kenyon

This is a personal postmortem (aka retrospective), not a report on the world at-large.

crossing a stream in the Amazon

What Went Right

I accomplished a wider diversity of things than I did in 2012, particularly new-to-me activities.

Riding horses in Iceland

Highlights:

  • Sky-dived for the first time
  • Went outdoor top-rope rock climbing for the first time
  • Submitted two artificial intelligence papers to conferences/symposia, one of which was accepted
  • Went to aforementioned symposium and presented a poster for it
  • Wrote 25 blog posts
  • Started working on developmental systems
  • Started a new job (technically started mid Dec 2012)
  • Wrote a new, improved version of my short film screenplay, Enough to be Dangerous
  • Started working on a horror film screenplay
  • Acted in a music video for Wake No More
  • Acted in a book video for Kissing Oscar Wilde
  • Acted in a BU short comedic film called South x Southeast (screened in Dec 2012, but we’ll count it since the video appeared online in 2013)
  • Explored Iceland
  • Explored Ecuador
    • Including rising to the highest elevation I’ve ever been at
    • Explored (via a guide) the Amazon rainforest
      • Monkeys!
      • I ate a grub (it wasn’t raw–it was cooked (smoked) by the natives)
  • Learned to ride a horse (in Iceland, also went riding in Ecuador in the Andes mountains)
  • Explored the entire Freedom Tunnel in New York
  • Went sailing for the first time (part of a company forced-fun day, but it was fairly interesting)
  • Participated in No Pants Subway Ride
Laundromat cafe, Reykjavik, Iceland

Iceland

A volcanic crater we climbed up to

Kayaking in the volcanic crater of Quilotoa, Ecuador with Emily

Me holding a freshly opened jungle coconut (Amazon rainforest) (photo by Emily Durrant)

What Went Wrong

AI Paper Number 2

The second paper I wrote was obviously not ready (both the paper and the research), but I submitted it anyway. However, I got excellent feedback in the rejection.

Not Enough Personal Coding

Although I worked a lot on non-work projects, I didn’t write as much code as I wanted. Still, I think there was an increase from 2012.

Not Enough Hardware

Aside from testing some little programs I wrote on a NAO robot, I didn’t do any work with real robots in 2013. I need to improve the balance of theory to implementation with my robotic and computational intelligence ideas.

Not Enough Art

I didn’t make any new drawings except for a few doodles. Also, although I’ve written some notes for new music compositions, I didn’t actually generate any new music last year…although I did finally post an old composition on SoundCloud.

Skydiving

I didn’t like being squashed in a painful, awkward position on the floor of an overfull plane, without even any handles to pull myself up out of that position. It kind of ruined the whole experience until later, after the parachute was out and we were gliding—after tumbling ridiculously on exiting the plane and almost going unconscious from lack of breathing. Apparently the instructor was supposed to tell me about making sure to breathe (the experience is quite new if you are not used to having your face blasted with air as you plummet from the highest location you’ve ever jumped from), but he failed to do so. I probably won’t go back to that airfield, and if I do, I will make sure I’m not getting on the plane unless I can crouch or sit in a safe, unstrained, non-distracting position.
