Hyping Nonsense: What Happens When Artificial Intelligence Turns On Us?

Posted in culture, transhumanism on January 23rd, 2014 by Samuel Kenyon

The user(s) behind the G+ account Singularity 2045 made an appropriately skeptical post today about the latest Machines-versus-Humans “prediction,” specifically an article “What Happens When Artificial Intelligence Turns On Us” about a new book by James Barrat.

As S2045 says:

Don’t believe the hype. It is utter nonsense to think AI or robots would ever turn on humans. It is a good idea to explore in novels, films, or knee-jerk doomsday philosophizing because disaster themes sell well. Thankfully the fiction or speculation will never translate to reality because it is based upon a failure to recognize how technology erodes scarcity. Scarcity is the root of all conflict.

Smithsonian even includes a quote by the equally clueless Eliezer Yudkowsky:

In the longer term, as experts in my book argue, A.I. approaching human-level intelligence won’t be easily controlled; unfortunately, super-intelligence doesn’t imply benevolence. As A.I. theorist Eliezer Yudkowsky of MIRI [the Machine Intelligence Research Institute] puts it, “The A.I. does not love you, nor does it hate you, but you are made of atoms it can use for something else.” If ethics can’t be built into a machine, then we’ll be creating super-intelligent psychopaths, creatures without moral compasses, and we won’t be their masters for long.

In the G+ comments you can see some arguments about the evidence for or against the prediction. I would like to add a couple of arguments in support of Singularity 2045's conclusion (though not necessarily endorsing its specific arguments):

  1. Despite "future shock" (before Kurzweil and Vinge there was Toffler) from accelerating change in certain avenues, most of these worries about machines-vs-humans battles are fictional precisely because they assume a discrete transition point: a before-the-machines and an after. The only way that could happen is if there were a massive planetary invasion of intelligent robots from somewhere else. In real life, things happen over a period of time, with transitions and various arbitrary diversions and fads (arbitrary because of politics, for example), despite any accelerating change.
  2. We have examples of humans living in partial cooperation and simultaneously partial conflict with other species. Insects outnumber us. Millions of cats and dogs live in human homes and get better treatment than the poor and homeless in the world. Meanwhile, crows and parrots are highly intelligent animals often living in symbiosis with humans…except when they become menaces.

If we're going to map fiction to reality, Michael Crichton techno-thrillers are a bit closer to real technological disasters, which are local, specific incidents resulting from the right mixture of human error and coincidence (as happens in real life, for instance with nuclear reactor disasters). And sometimes those errors are far apart at first, like somebody designing a control panel badly, which assists in a bad decision by an operator ten years later during an emergency.

And of course I’ve already talked about the Us-versus-Them dichotomy and the role of interfaces in human-robot technology in my paper “Would You Still Love Me If I Was A Robot?”

Addendum

I doubt we will have anything as clear-cut as an us-vs-them new species. And if we maintain civilization (e.g., not the anti-gay, anti-atheist, witch-hunting segments), then new variations would not be segregated or given fewer rights, and vice versa: they would not segregate us or remove our human rights.

As far as I know, there is no natural species on Earth that "peacefully coexists." This may be the nature of the system, and that's certainly easy to see when looking at the evolutionary arms races constantly happening. Anyway, my point is that any appeal to nature, or to the mythical peaceful caveman, is not the right direction. The fact that humans can even imagine never-ending peace and utopia seems to indicate that we have started to surpass nature's "cold equations."


The Future of Emotions

Posted in artificial intelligence, transhumanism on September 11th, 2011 by Samuel Kenyon

I recently happened upon an article [1] about the work of Jennifer Lerner:

Lerner is best known for her research on two topics: the effects of emotion on judgment and decision making, and the effects of accountability on judgment and decision making. Recently, along with a number of National Science Foundation-supported scientists, she appeared on the PBS program “Mind Over Money,” a look at the 2008 stock market crash and the irrational financial decisions people make.

How the human emotional architecture fails us in modern life has been an interest of mine for a long time. Emotions seem to be an integral part of human decision making, but can we improve human emotional systems for the more dangerous and complicated situations existing in the modern world? I am reminded of an essay called “Neo-Emotions” that I wrote in 2005 [2], which I will re-post right here, and then I will mention some of the criticism of that article.

Neo-Emotions

dramatic mask

One of my hypotheses right now is that emotions often seem irrational to us simply because many of them are ill-suited to modern situations, culture, and technologically-enabled existence. The solution is to develop neo-emotions. A neo-emotional system would take whatever beneficial roles existing emotional systems provide, and extend and modify those roles to better suit the environment.

A trivial example of primitive survival emotions negatively influencing a modern situation is the neuroeconomics experiment in which most people choose $10 now rather than $11 tomorrow [3]. I have seen this in action many times in bargaining situations (at yard sales, flea markets, etc.): "Do you want $20 for this widget today, or $50 possibly never?" Afterwards you can mount purely logical defenses, saying you chose the money today because you judged the greater sum later to be unlikely, but your quickly-made choices are more likely to have been driven by emotional underpinnings.
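
To make the $10-versus-$11 effect concrete, here is a minimal sketch using a standard hyperbolic discounting model; the discount parameter k is my own illustrative assumption, not a figure from the experiment in [3]:

```python
# Toy illustration of the $10-now-vs-$11-tomorrow result using a standard
# hyperbolic discount function V = A / (1 + k*D). The value of k is made up
# for illustration; it is not a parameter from the cited experiment.

def subjective_value(amount, delay_days, k=0.2):
    """Present subjective value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

# Immediate choice: the smaller-but-sooner option wins.
print(subjective_value(10, 0))    # 10.0
print(subjective_value(11, 1))    # ~9.17

# The same one-day gap pushed a year out: the larger-later option now wins,
# the classic "preference reversal" that exposes the emotional discounting.
print(subjective_value(10, 365))  # ~0.135
print(subjective_value(11, 366))  # ~0.148
```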

That instance may seem too minor for any concern, but imagine the possibility that a good deal of your decisions are being made with biological control systems developed for animals in the wild.  Inappropriate emotions can also come from low-level fear responses and conditioning.  “The brain seems to be wired to prevent the deliberate overriding of fear responses…Our brains seem to have been designed to allow the fear system to take control in threatening situations and prevent our conscious awareness from reigning” [4].  Decisions made from what some call “negative emotions” can sometimes be devastating depending on the situation [5]:

For instance, in 1962, when John F. Kennedy learned that the Soviets had brought nuclear missiles to Cuba, he became enraged, taking it as a personal affront—after all, the Soviet ambassador had assured him just months before that such a thing would never happen.  But Kennedy's closest advisors deliberately made the effort to help him calm down before deciding what steps to take—and so may have averted a world war.

The need for neo-emotions may seem like a general statement that is hard to test unequivocally.  In some soft form, people may already be attempting neo-emotions through various inefficient and temporary means.  Part of the reason I thought of it in the first place, however, is that we can perform autonomous software agent experiments to confirm or falsify the hypothesis that emotional systems (which develop both phylogenetically and ontogenetically) become outdated, useless, or dangerous in more complex environments.  A "more complex environment" includes the notion of rapid change in new ways, especially for high-level structures; by "high-level" I mean constructed from many stages of lower levels, down to the original environment.  The complex environment also includes cultural structures and resources that may have become integral to successive generations.  Any environment similar to ours should have always been dynamic, of course; indeed, you could include unexpected devastating natural events as a type of environment in which an organism is suddenly ill-equipped to operate or survive.  The quickest analogy to the human situation would be future shock in a techno-industrial world, but that is a crude fit.  For humans the problem is particularly hairy: high-level environmental structures store knowledge, and this "extelligence" [6] interacts through culture with individuals.

We can still start with simple experiments.  Emotional systems are most likely substrate-independent—and therefore applicable to artificial intelligence, both software-only agents and embodied robots.  But a robot's needs may not result in the same kind of neo-emotions.  Some people will instantly retort that emotions are unexplainable, not replicable in machines, etc.  This is not the place for a detailed history of emotion and the arguments over what emotion is; suffice it to say, many researchers in the realms of biology, neuroscience, artificial intelligence, psychology, philosophy, etc., have attempted to define and work with emotions at least with respect to their own fields.  In summary, emotion includes various brain/nervous-system processes, external behaviors, mental states, body states (e.g., facial expressions), social feelings, cultural notions, and more.  The word "emotion" itself has only been in regular use since the early 1800s, as a catch-all term overlapping passions, sentiments, feelings, and affections [7].  These terms were associated with soul and will; even now some researchers think emotion may be foundational to consciousness [8].  Emotion is intertwined with the entire evolutionary biological and socio-cultural landscape, and is linked to many hot buttons.

One question that people might ask is: Will I end up as an über-rational machine?  Well, how do you define rationality?  Perhaps a perceived slice of a neo-emotional person could seem cold, harsh, or too rational; I'm not sure.  Internally, things will certainly not be just the deliberative, deductive, planning capabilities of current humans.  It will be better than that, possibly involving different types of priorities and overrides among deliberative, emotional, and reflexive processing (this is a simplification), and more self-reprogrammable parts of emotional associations, conditioned learning, etc.
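
As a loose illustration of that priorities/overrides idea, here is a toy arbitration scheme; the layer rules and action names below are my own inventions, not a model from the literature:

```python
# Toy sketch of priority/override arbitration among reflexive, emotional,
# and deliberative layers. All names and rules here are illustrative.

def reflexive(percept):
    return "jump_back" if percept.get("sudden_threat") else None

def emotional(percept):
    return "avoid" if percept.get("fear_conditioned_cue") else None

def deliberative(percept):
    return percept.get("planned_action")  # e.g., the output of a planner

# A neo-emotional system might expose this ordering as self-reprogrammable,
# e.g., letting deliberation override a conditioned fear in a known-safe context.
PRIORITY = [reflexive, emotional, deliberative]

def arbitrate(percept):
    for layer in PRIORITY:        # the highest-priority layer that
        action = layer(percept)   # proposes an action wins
        if action is not None:
            return action
    return "idle"

print(arbitrate({"fear_conditioned_cue": True, "planned_action": "proceed"}))
# -> "avoid"; reordering PRIORITY would let the plan win instead
```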

That does not mean I want us to mutate into the kind of sentience of an Outer Limits alien asking an Earthling scientist, "What is love?", or a Terminator trying to figure out why we cry.  What a species needs, if this turns out to be a real issue in some form, is to modify and extend the emotions.  "Evolution made our brains so smart that we ended up building environments that made some of our mental resources obsolete…We are not slaves to our emotions, but they are hardly at our beck and call either" [4].  It would be interesting to see if giving the neocortex more control over the amygdala would result in better-functioning humans.  But that seems too narrow a solution, especially since there is still much to learn about the neurophysiology and neuroanatomy of emotions; indeed, the human emotional architecture is not limited to the amygdala.

Unfortunately, the word "emotion" is a tangled, many-faceted collection of often inconsistent concepts, but at the very least your emotional system is an intertwined part of your mind, and as such is involved with matters of your body and with interacting and communicating with other bodies.  So it may or may not be very difficult to improve one part of, for instance, the brain without having to tweak many other factors.  Neo-emotional architectures may only come from a complete overhaul of our standard-equipment wetware, or perhaps just from explicit training (with the help of extelligence) combined with tightly-coupled human-computer interaction (not necessarily invasive).  I definitely do not think any pseudo-psychological self-help book series will do the trick.  Drugs could play a part, but we don't want to aim for the stoic zombie society of the movie Equilibrium.  Again, the concept of neo-emotions is not about utter suppression of existing emotional/feeling faculties.

Please note that the need for neo-emotions has nothing to do with the psychological construct and commercial self-help meme known as "emotional intelligence" (see [9] for a comprehensive summary and critique of EI).  Certainly, though, recognizing emotions as a major factor in normal human operations, and trying to account for them in an individual, is a step in the right direction.  Identifying some of the so-called mental afflictions and destructive emotions could provide examples of where neo-emotions would have served better.  Here is a relevant sample of the Dalai Lama discussing the subject [5]:

What impact do our destructive emotions—hatred, prejudice, and so forth—have on the society as a whole?  What role are they playing in the tremendous problems and sufferings that society is experiencing right now?…Are these destructive emotions to which we are subject at all malleable?  If we were to diminish them, what kind of impact would that have on society as a whole and on the myriad problems that society is experiencing?

However, it is not simply a matter of emotions being destructive, or destructive on a large scale.  We especially don’t want to cut the sources of emotions that could still be essential for our survival even in a modern world.  Also, neo-emotions would not be a mere diminishing of certain emotions through whatever training or mental exercises a human can muster—they would involve changes and extensions that could result in something very hard to imagine with our current understanding.

We have to be careful not to blindly promote the dichotomy between "rationality" and "emotion" (or worse, "cognition" and "emotion"), although at certain levels of detail it may be useful.  Mythical symbolic notions of the heart versus the brain should stay in fiction.  Some people seem to think that emotion can be secluded in a little black box, a module that is simply thrown into the mind mix.  This view rarely helps us figure out how emotions work and how they evolved.  Again, I must stress that our notions of emotions actually describe several interlocked processes in our brains (and other parts of our bodies), from which it is difficult to separate out deliberation, planning, and rationality; these idealized constructions make it sometimes easier and sometimes harder to figure out how the brain works and how to simulate it.  The point is: do not assume that potential neo-emotional brain-body systems will be simple extensions of a neatly separable emotional system.

Criticism

Well, first, I can criticize the article myself: I didn't propose in any detail ways to achieve Neo-Emotions. Hopefully I can elaborate more in the future.

One blogger mentioned my article in 2005 [10], saying:

Ah, but our intellects are not so dark that they can’t pull themselves up by their own boot-straps! Or at least that’s what the transhumanists would have us believe.

Yes, perhaps we can change our emotions to better suit our competitive environment. We’ll get rid of pity and love, and replace them with ruthlessness and hatred. Sounds like “Genesis of the Daleks.” Or maybe just your NOW feminist.

And we haven’t even touched on the weirdest “improvement” on nature : the incredible self-mutilation of Michael Jackson. This would be funny if it weren’t true.

Ah, but it is funny.

Of course, I don’t want mental modifications to go the way of Michael Jackson’s face. And Mr. Gage raises a good point—what if people modified their emotions, not just to make better decisions, but for competitive gain? Would people really increase their ruthlessness?

I don't think it's that simple—as I keep saying, emotion cannot be easily decoupled from the rest of the brain's activities. But if we look to existing outliers, like psychopaths, perhaps there is a danger of self-modifications resulting in similar mindsets. I don't think that kills the concept of modifying the emotional aspect of the human mind; it just highlights the difficulty of changing evolutionarily old structures to better handle our relatively new artificial environments and experiences.

References
[1] B. Mixon Jr. & NSF, "Personal Question Leads Scientist to Academic Excellence," LiveScience, Sept. 1, 2011.
[2] S. H. Kenyon (as Flanneltron), "Neo-Emotions," Transhumanity, Feb. 14, 2005.
[3] L. Brown, "Why Instant Gratification Wins: Brain battle provides insight into consumer behavior," Betterhumans, Oct. 2004. Available: http://www.betterhumans.com/News/news.aspx?articleID=2004-10-14-2
[4] S. Johnson, "The Brain + Emotions: Fear," Discover, pp. 33-39, March 2003.
[5] D. Goleman, et al., Destructive Emotions: How Can We Overcome Them? A Scientific Dialogue with the Dalai Lama. Bantam, 2003, pp. 87, 223-224.
[6] I. Stewart and J. Cohen, Figments of Reality: The Origins of the Curious Mind. Cambridge University Press, 1997.
[7] K. Oatley, Emotions: A Brief History. Blackwell, 2004, p. 135.
[8] A. Damasio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Harcourt, 1999.
[9] G. Matthews, M. Zeidner, and R. D. Roberts, Emotional Intelligence: Science and Myth. MIT Press, 2002.
[10] L. Gage, "Infant Formula, 'Neo-Emotions,' and the Incredible Melting Celebrity," Real Physics (blog), March 3, 2005.


Image Credit: Zofie


The Future of Crowd Madness

Posted in culture, transhumanism on May 2nd, 2011 by Samuel Kenyon

“Nothing New Under the Sun” is the title of Robert Silverberg’s column for the June 2011 issue of Asimov’s Science Fiction Magazine. Although the sobering thought that humanity keeps repeating certain types of mistakes is not one I particularly relish, it should be discussed.

Sam holding a paper edition of Asimov's SF magazine

Crowd Stupidity

“Anyone taken as an individual is tolerably sensible and reasonable — as a member of a crowd he at once becomes a blockhead.”

—Schiller as quoted by Bernard Baruch as quoted by Robert Silverberg

Silverberg shares some excerpts from the 1841 book Extraordinary Popular Delusions and the Madness of Crowds, written by Scottish lawyer/journalist Charles Mackay.

Silverberg compares the South-Sea Bubble disaster to the Internet dot com bubble, specifically failed companies such as Webvan (bankrupt in 2001), Boo.com (broke in 2000) and Flooz.com (died in 2001).

England's South-Sea Bubble of long ago was not blamed, according to Mackay, on the general public's avarice and lust of gain. Instead, as with the recent economic crash of 2008, the "evil" bankers and executives were blamed. The collective insanity of a nation seems to be a major factor, though. Perhaps it does take a cunning plan to fool the masses, but it also requires the masses to act idiotically as they urgently try to consume or become rich or whatever.

My intention in this post is not to get into the issues of blame for economic bubbles, but to note the special ability for a normally intelligent person to become less so in the context of a group.

Another popular delusion of humankind’s past is fortune telling, e.g. via astrology. Unfortunately, it’s still quite popular. As Mackay wrote:

Leaving out of view the oracles of pagan antiquity and religious predictions in general, and confining ourselves solely to the persons who, in modern times, have made themselves most conspicuous in foretelling the future, we shall find that the sixteenth and seventeenth centuries were the golden age of these impostors. Many of them have been already mentioned in their character of alchymists. The union of the two pretensions is not at all surprising. It was to be expected that those who assumed a power so preposterous as that of prolonging the life of man for several centuries, should pretend, at the same time, to foretell the events which were to mark that preternatural span of existence.

Of course, most transhumanists would like to do exactly what those alchemists promised, which is to live for centuries (at least). And transhumanists are quite often found foretelling events, or repeating the foretellings of popular writers. I recommend that transhumanists, including me, be all the more alert to traditional delusions dressed in sexy technolust clothes.

Crowd Intelligence

“We find that whole communities suddenly fix their minds upon one object, and go mad in its pursuit; that millions of people become simultaneously impressed with one delusion, and run after it, till their attention is caught by some new folly more captivating than the first.”

—Charles Mackay

Humans are social animals, so surely there must be overall benefits to group intelligence? Well, even our relatives the chimps have group stupidity—they will keep copying the behavior of a leader even if a better strategy appears, and they will copy the behavior of the dominant group member even if it results in fewer rewards than another method they have also learned.

But what about the so-called “wisdom of crowds?” Well, we don’t always have wisdom or higher intelligence emerging from groups because the system has to be set up correctly. The main criteria that James Surowiecki listed in his book The Wisdom of Crowds are:

  • Diversity of opinion
  • Independence
  • Decentralization
  • Aggregation

So, first, each person should ideally have some private information or ideas that aren't shared with the others. Second, people shouldn't allow others in the group to determine their opinions or decisions. Third, people shouldn't be stuck in a closed central structure of wisdom; they can draw from local knowledge and wisdom. Fourth, there has to be a part of the system that compiles judgments into a decision.

And it is not necessarily easy to achieve all four of those criteria for a given problem and a given group.
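
A toy simulation (my own illustration, not an example from Surowiecki's book) suggests why the independence criterion carries so much weight:

```python
# Toy simulation of the independence criterion: averaging many independent
# guesses cancels private noise, while herded guesses inherit the leader's
# error. Numbers are purely illustrative.
import random

random.seed(42)
TRUE_VALUE = 100.0
N = 1000

# Independent crowd: each guess = truth + private noise.
independent = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]

# Herded crowd: everyone partly copies one influential (and wrong) opinion.
leader_error = random.gauss(0, 20)
herded = [TRUE_VALUE + 0.8 * leader_error + 0.2 * random.gauss(0, 20)
          for _ in range(N)]

def aggregate_error(guesses):
    return abs(sum(guesses) / len(guesses) - TRUE_VALUE)

print("independent crowd error:", aggregate_error(independent))  # small
print("herded crowd error:", aggregate_error(herded))  # ~0.8 * |leader_error|
```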

The Future of Madness

“Pretend to be mad? Who would notice a madman around here?”
—Capt. Blackadder (from Blackadder Goes Forth, which takes place in the trenches of World War 1)

Robert Silverberg's essay presents the solemn attitude that humanity keeps on repeating its old mistakes. It sounds like a cliché (although perhaps not a popular enough one); in fact, Silverberg says one falls back on clichés in this context. Is the adage of Alphonse Karr that Silverberg quotes, "The more things change, the more they remain the same," really a fixed behavior of the human system?

It’s been said that a true science fiction story is one in which the world is irreversibly changed by the end of the story. But can that happen in real life? Transhumanists very much want to instantiate science fiction’s promises. And it seems like certain technologies have transformed everything, such as the Internet.

Perhaps national delusions, group idiocy, Ponzi schemes, witch hunts, economic bubbles, and so on, are not actually a concern for us in the long run. But, let’s explore the premise that they are, just in case.

So, what would be some therapies for humankind’s collective insanity? One obvious answer for a transhumanist is of course to change the very nature of our intelligence, and that can be at an individual level and/or in networked assistance via computer systems, etc. But, what if we were to imagine reducing madness today, using today’s infrastructure?

Philosopher Daniel Dennett had a concept he called “Super Snopes.” Snopes.com is the popular debunking database which explains the truths and lies behind urban myths, chain emails, etc. Super Snopes would be some sort of more massive collection of truth utilizing the power of the Internet, and which would counter the misinformation on the Web.

But Snopes, and maybe even a Super Snopes (if that's even possible), doesn't really address the serious delusions which can last for years. And how does one even know one has subscribed to a delusion if everybody one talks to (or follows) is deluded in the same way? What would motivate a person to even participate in a (Super) Snopes?

Perhaps one glimmer of hope is that the Internet and social media allow different groups to communicate. Never are all people in the world under the exact same delusion. So if the people of sufficient difference were able to trade criticism in a way that overcame our primitive tribal nature, major delusions could be stopped before total disaster.

I suspect we have just begun to tap the power of constant global connections and social media. As Clay Shirky points out, we have a “cognitive surplus” of billions of hours per year that’s largely been used to watch television. In this century some of that time has shifted from passive consumption of media to video games (especially massively multiplayer games) and to actual content creation such as with Wikipedia and of course blogs.

Yet the memes that run rampant are often not about debunking or skepticism; they are simply viral, without regard to truth. Our system presently allows simple amplification of normal human social emotions and behaviors. It doesn't take much cognitive power to go along with bandwagons: repeating—and buying into—whatever everybody else says on Twitter or Facebook about a major news event. And of course, as it always has been with news, social news gives the same attention to celebrity gossip and cat photos as it does to natural disasters and national revolutions. And the lifetime of a meme or popular event can be extremely short.

So, we have this powerful worldwide networked community, but could it be used for the good of groups? Could it be used at longer time scales, e.g. to stop a decade-long delusion?

Could we evolve our social web so that humankind (or at least those online) constantly keeps ahead of the game when it comes to tricks and delusions?

Also published in H+ Magazine.


Music and Machines: Highlights from the MIT Media Lab

Posted in culture, robotics on February 15th, 2011 by Samuel Kenyon

I recently attended “Music | Machines: 50 Years of Music and Technology @ MIT,” part of MIT’s ongoing Festival of Art + Science + Technology (FAST).

One of the most interesting demonstrations was the iPhone Guitar by Rob Morris of the Media Lab. Basically, he makes use of the iPhone’s accelerometer as an input for special effects.

iPhone Guitar by Rob Morris

The iPhone is attached to the guitar, so that certain gestural movements of the guitar in space–especially those that happen during an emotional performance–are detected and used to modulate the sound. The touch screen of the iPhone also comes in handy as an accessible on-guitar interface for selecting effects and inputting other variables.
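
I don't know the details of Morris's implementation, but the core gesture-to-sound mapping might look something like this hypothetical sketch, where acceleration magnitude drives an effect's depth:

```python
# Hypothetical sketch of the gesture-to-effect idea (not Morris's actual
# code): a sharp movement of the guitar produces a large acceleration
# magnitude, which is mapped to a deeper audio effect.
import math

def motion_energy(ax, ay, az):
    """Acceleration magnitude (in g) with gravity (~1 g at rest) subtracted."""
    return abs(math.sqrt(ax**2 + ay**2 + az**2) - 1.0)

def effect_depth(ax, ay, az, sensitivity=2.0):
    """Map motion to a 0..1 wet/dry depth for, say, a filter sweep."""
    return min(1.0, sensitivity * motion_energy(ax, ay, az))

print(effect_depth(0.0, 0.0, 1.0))  # guitar held still -> 0.0
print(effect_depth(0.6, 0.2, 1.1))  # expressive lunge -> ~0.54
```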

The Muse

Digital music is no longer a new phenomenon; in fact, it's downright ancient when you consider that one of the first digital music contraptions was made in 1972. The Triadex Muse is an algorithmic music generator using digital logic, and was designed by Edward Fredkin and Marvin Minsky at MIT Lincoln Laboratory.
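
The Muse reportedly combined counters and a long shift register whose outputs were selected with sliders; the following is only a loose sketch of that general style of logic-driven melody generation, not the actual circuit:

```python
# Loose sketch of Muse-style melody generation from digital logic: a small
# linear feedback shift register picks notes from a scale. The real Muse's
# counter/shift-register network was more elaborate; this simplification is mine.

SCALE = ["C", "D", "E", "F", "G", "A", "B", "C'"]

def lfsr_melody(seed=0b1011011, taps=(6, 5), length=16):
    state, notes = seed, []
    for _ in range(length):
        bit = 0
        for t in taps:                       # XOR the tapped bits for feedback
            bit ^= (state >> t) & 1
        state = ((state << 1) | bit) & 0x7F  # keep it a 7-bit register
        notes.append(SCALE[state & 0b111])   # low 3 bits choose a scale degree
    return notes

print(" ".join(lfsr_melody()))  # a deterministic yet non-obvious note pattern
```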

Music, Mind and Meaning

Marvin Minsky at MIT Media Lab, Feb 5, 2011

Speaking of Minsky, he discussed "Music, Mind and Meaning" with Teresa Marrin, Mary Farbood, and Mike Hawley. Amongst the anecdotes, Minsky mentioned an old concept of goals.

One of the ways a human mind might achieve goals is to reduce the difference between what it has and what it wants. Music may utilize some of the same mental components—most music chops time into equal intervals with equal substructures. These chopped windows of experience can be compared, possibly in the same way that you can compare what you have with what you want.
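
Here is a loose sketch of that difference-reduction notion, my own illustration in the spirit of means-ends analysis rather than Minsky's actual formulation:

```python
# Loose sketch of difference reduction: repeatedly pick a remaining difference
# between the current state and the goal, and apply an operator that removes it.
# Illustrative only; not Minsky's actual formulation.

def differences(state, goal):
    return [k for k, v in goal.items() if state.get(k) != v]

def reduce_differences(state, goal, operators):
    state = dict(state)
    while differences(state, goal):
        feature = differences(state, goal)[0]  # pick one remaining difference
        state[feature] = operators[feature]()  # operator that fixes that feature
    return state

operators = {"pitch": lambda: "C4", "duration": lambda: 1.0}
print(reduce_differences({"pitch": "A3"}, {"pitch": "C4", "duration": 1.0}, operators))
# -> {'pitch': 'C4', 'duration': 1.0}: the "haves" now match the "wants"
```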

Excerpts from the Concert

Computer-based production is normal nowadays. So how could a computer-and-electronics-oriented concert be special? Well, Todd Machover of the Media Lab managed it by assembling musicians who make some very unusual sounds and abnormal compositions. They all involve computers and/or electronics, but in innovative ways…and through live performances.

The concert began with a 1976 composition by Barry Vercoe called Synapse for Viola and Computer, an early work from MIT’s Experimental Music Studio. As a restaging of the 1970s performance, the digital accompaniment is inflexible, so it was up to the human soloist, Marcus Thompson, to maintain sync and “express himself within the confines.”

Vercoe – Synapse

Synapse was followed by Synaptogenesis, in which Richard Boulanger performed by triggering sound clips and transformations with a Nintendo WiiMote and a Novation Launchpad.

Boulanger – Synaptogenesis

Programmable drum machines have been around since 1972, but it is rare to see the machine actuate physical percussion hardware. One such robotic instrument is the Heliphon, originally made by Leila Hasan and Giles Hall, and later redesigned by Bill Tremblay and Andy Cavatorta.

Todd Reynolds, Heliphon the robot, and Evan Ziporyn performing at the MIT Media Lab

The sound from this double helix metallophone is produced via solenoids hammering the metal keys. It also has lights hooked in to give a visual indication of which keys are active.

Heliphon and humans Todd Reynolds (violin) and Evan Ziporyn (clarinet) performed Ziporyn’s Belle Labs – Parts 1 & 3.

Ziporyn, Reynolds, Heliphon – Belle Labs Parts 1 and 3

Heliphon is one of various robotic instruments commissioned by Ensemble Robot, a nonprofit corporation based in Boston, MA. Ensemble Robot also made WhirlyBot, which looks like a turnstile but sounds like a chorus of human-like voices, and Bot(i)Cello, which appears to be a cross between a construction tool and a stringed instrument.

The Future of the Underground

If you’re interested in hearing more electronic music, there is always new stuff (or remixes of old stuff) being made, far below the radar of the mainstream.  You can hear some of it on the web, but being at a live performance or DJ set is a different experience, especially when the DJ modifies the music on the fly.  There are some new tools to enable this, for example, here is DJ/producer Encati demonstrating a Kinect wobble controller for dubstep mutations:

What I would like to see more of are environmental actuations triggered by music, beyond just flashing lights. We have autogenerated visualizers, and we can use MIDI to control lights (and fire cannons), but what about having a room really transform automatically based on the music? I'm talking about dynamic 2D and 3D displays everywhere, autonomous mobile furniture, materials changing shape and color, and so on.


Image credits:

4. MIT
Others by the author.


Cross-posted with H+ Magazine.


Dennett’s Future of Religion Part 2: Transformation

Posted in culture on November 14th, 2010 by Samuel Kenyon

Just posted on my Science 2.0 blog:

Dennett’s Future of Religion Part 2: Transformation


Daniel Dennett’s Super-Snopes and the Future of Religion

Posted in philosophy on October 12th, 2010 by Samuel Kenyon

“We’re all alone, no chaperone”
—Cole Porter

Despite his resemblance to Santa Claus, Daniel Dennett wants to disillusion the believers.  If we’re all adults, why can’t we reveal the truth that God(s), like Santa, are childish fantasies?

Earlier tonight I attended Dennett's talk "What should replace religion?" at Tufts University, kindly hosted by the Tufts Freethought Society as part of their Freethought Week.

Atheist groups will have to compete with religion in the realm of social activities such as church services.  People won’t leave churches if they don’t have something else to give them the excitement, the music, the ecstasy, the group affiliation, the team building, the moral community, etc. that churches provide.  Many churches already contain atheists who go for all the other stuff besides the doctrine.  In fact, some of the preachers themselves do not believe the doctrine.

Daniel Dennett @ Tufts, 11 Oct 2010

I won't go over the entire talk, but I'd like to discuss the truth segment.  Dennett pointed out the various citizen science projects going on (although he didn't use the term "citizen science"), in which random people voluntarily collect or analyze data, such as for bird watching and galaxy classification, and report it to central repositories.  But certain other data-collection activities have declined—the mundane types of things such as goings-on in a town.  Town newspapers are dying, and nobody is there to take notes on local affairs (such as education, politics, etc.).  And this lost data might be important, because it provides oversight.

The Internet has democratized evidence gathering while also promoting the abuse of misinformation.  So, Dennett proposes, some organizations could start projects as preservers of truth—or perhaps a church replacement could convert lovers of God into lovers of truth.  But it wouldn’t be unconditional love of truth.  The privacy of your own thoughts, for instance, may contain truthful information, but it doesn’t necessarily have to become public.  A scientific (in a broad sense of the word) organization that loves truth would compete with religion’s typically “imperfect” handling of truth.

A serious project of truth preservation could become a sort of Super Snopes.  Snopes is the famous website which debunks and/or proves true various urban legends and the like.  When you get one of those emails claiming, say, that certain bananas will eat your flesh, check it out on Snopes first before continuing the hoax chain.  Dennett doesn't define Super Snopes in detail, just that it would be a project like Snopes or Wikipedia on an even more massive scale.  And there could be similar or overlapping projects that operate at local scales—perhaps reinstating the town/neighborhood oversight that is now missing.

Of course, something this vague has a chance of happening in the future.  But how it happens could be, as usual, an imperfect evolution from what we have now.  Hopefully secular groups, as Dennett makes the call for, will try to architect and create these projects as soon as possible.

I speculate that the projects that end up working in the future as far as truth preservation will make use of software agents (autonomous programs).  For instance, if people are not interested in taking notes on every little issue in your town/city, especially the mundane ones, then a computer can do that.

Of course, one person's boring task is another's hobby.  Some people enjoy collecting the data that they contribute to a central database.  But some will be able to use software agents as their minions—the citizen truth-gatherer becomes a node, a small local repository, which then sends data to the next bigger node, and so on.
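
A minimal sketch of that node idea (every name and the record format here are hypothetical illustrations, not an existing system):

```python
# Minimal sketch of the node hierarchy: a citizen's software agent gathers
# local records and forwards them up to a bigger node. All names and the
# record format are hypothetical illustrations, not an existing system.

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.records = name, parent, []

    def submit(self, record):
        self.records.append(record)    # keep a local copy (local repository)
        if self.parent is not None:
            self.parent.submit(record) # ...and push it up the hierarchy

national = Node("national-archive")
town = Node("town-node", parent=national)
agent = Node("citizen-agent", parent=town)

agent.submit({"topic": "school-board", "note": "budget vote 5-2", "date": "2010-10-12"})
print(len(town.records), len(national.records))  # 1 1: both repositories have it
```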

The truth needs to be available to people whenever they want.  So the other major part of the technical aspect will be the interfaces and filters that allow humans to digest information, and to choose what streams to digest.  Of course, various web technologies have been increasing this capability (of filtering and choosing streams) for the entire life of the Internet.

Here is my question: could a (or perhaps several) Super Snopes ever evolve beyond truth preservation into actual civilization preservation, for instance like Asimov’s fictional Foundations?

(Cross-posted with Science 2.0.)


Five Ways Machines Could Fix Themselves

Posted in interaction design, robotics on September 30th, 2010 by Samuel Kenyon

Now published on h+ magazine: my article “Five Ways Machines Could Fix Themselves.” Check it out!

As I see cooling fans die and chips fry, as I see half the machines in a laundry room decay into despondent malfunctioning relics, as my car invents new threats every day along the theme of catastrophic failure, and as I hear the horrific clunk of a “smart” phone diving into the sidewalk with a wonderful chance of breakage, I wonder why we put up with it. And why can’t this junk fix itself?

Design guru and psychologist Donald A. Norman has pointed out how most modern machines hide their internal workings from users. Any natural indicators, such as mechanical sounds, and certainly the view of mechanical parts, are muffled and covered. As much machinery as possible has been replaced by electronics which are silent except for the sound of fans whirring. And electronics are even more mysterious to most users than mechanical systems are.

Our interfaces to machines are primarily composed of various kinds of transducers (like buttons), LEDs (those little glowing lights), and display screens. We are, at the very least, one—if not a dozen—degrees removed from the implementation model. As someone who listens to user feedback, I can assure you that a user’s imagining of how a system works is often radically different than how it really works.

Yet with all this hiding away of the dirty reality of machinery, we have not had a proportional increase in machine self support.

Argument: Software, in some cases, does fix itself. Specifically I am thinking about automatic or pushed software updates. And, because that software runs on a box, it is by default also fixing a machine. For instance, console game platforms like XBox 360 and Playstation 3 receive numerous updates for bug fixes, enhancements, and game specific updates. Likewise, with some manual effort from the user, smart phones and even cars can have their firmware updated to get bug fixes and new features (or third-party hacks).

Counterargument: Most machines don’t update their software anywhere close to “automatically.” And none of those software updates actually fix physical problems. Software updates also require a minimal subset of the system to be operational, which is not always the case. The famous Red Ring of Death on the early XBox 360 units could not be fixed except via replacement of hardware. You might be able to flash your car’s engine control unit with new software, but that won’t fix mechanical parts that are already broken. And so on.

Another argument: Many programs and machines can “fail gracefully.” This phrase comforts a user like the phrase “controlled descent into the terrain” comforts the passenger of an airplane. However, it’s certainly the minimum bar that our contraptions should aim for. For example, if the software fails in your car, it should not default to maximum throttle, and preferably it would be able to limp to the nearest garage just in case your cell phone is dead. Another example: I expect my laptop to warn me, and then shutdown, if the internal temperature is too hot, as opposed to igniting the battery into a fireball.
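
As a sketch of what graceful failure means in code (the thresholds and sensor stub are illustrative assumptions, not any particular product's logic):

```python
# Sketch of "failing gracefully": a watchdog that degrades service before it
# shuts down, instead of letting the hardware cook itself. Thresholds are
# illustrative; read_temperature() stands in for a real sensor driver.
import sys

WARN_C, CRITICAL_C = 85.0, 95.0

def read_temperature():
    return 88.0  # placeholder for a real sensor read

def watchdog_tick(set_performance, shutdown):
    temp = read_temperature()
    if temp >= CRITICAL_C:
        print("critical temperature: saving state and shutting down")
        shutdown()
    elif temp >= WARN_C:
        print("high temperature: throttling to low-power mode")
        set_performance("low")
    else:
        set_performance("normal")

watchdog_tick(set_performance=lambda mode: print("performance:", mode),
              shutdown=lambda: sys.exit(0))
```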

The extreme solution to our modern mechatronic woes is to turn everything into software. If we made our machines out of programmable matter or nanobots, that might be possible. Or we could all move into virtual realities in which we have hooks for the meta—so a software update would actually update the code and data used to generate the representation of a machine (or any object) in our virtual world.

However, even if those technologies become mature, there won’t necessarily be one that is a monopoly or ubiquitous. A solution that is closer and could be integrated into current culture would be a drop-in replacement that utilizes existing infrastructures.

Some ideas that come close:

1. The device fixes itself without any external help. This has the shortcoming that it might be too broken to fix itself, or might not realize it’s broken. In some cases, we already have this in the form of redundant systems as used in aircraft, the Segway, etc.

2. Software updating (via the Internet) combined with 3D printing machines: the 3D printers would produce replacement parts. However, the printer of course needs raw material, but supplying that could be as easy as putting paper in a printer. Perhaps in the future, that raw printer material will become some kind of basic utility, like water and Internet access.

3. Telepresence combined with built-in repair arms (aka “waldoes”). Many companies are currently trying to productize office-compatible telepresence robots. Doctors already use teleoperated robots such as Da Vinci to do remote, minimally-invasive surgery. Why not operate on machines? How to embed this into a room and/or within a machine is another—quite major—problem. Fortunately, with miniaturization of electronics, there might be room for new repair devices embedded in some products. And certainly not all products need general purpose manipulator arms. They could be machine specific devices, designed to repair the highest probability failures.

4. Autonomous telepresence combined with built-in repair arms: A remote server connects to the local machine via the Internet, using the built-in repair arms or device-specific repair mechanism. However, we also might need an automatic meta-repair mechanism. In other words, the fixer itself might break, or the remote server might crash. Now we enter endless recursions. However, this need not go on infinitely; it's just a matter of having enough self-repair capacity to achieve some threshold of reliability (see the sketch after this list).

5. Nothing is ever repaired, just installed. A FedEx robot appears within fifteen minutes with a replacement device and for an extra fee will set it up for you.
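
For idea 4, a quick back-of-the-envelope calculation (my own illustration) shows why the meta-repair recursion can stop early:

```python
# Back-of-the-envelope for idea 4: if each repair layer independently succeeds
# with probability p, then k stacked layers leave failure probability (1-p)^k.
# The meta-repair recursion can stop once that is below a reliability target.
# Numbers are illustrative only.

def layers_needed(p_layer, target_failure):
    k, failure = 0, 1.0
    while failure > target_failure:
        k += 1
        failure *= (1.0 - p_layer)
    return k, failure

k, failure = layers_needed(p_layer=0.95, target_failure=1e-6)
print(k, failure)  # 5 layers suffice: 0.05**5 ~ 3.1e-07, no infinite regress
```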


H+ Summit @ Harvard

Posted in artificial intelligence, interaction design on June 13th, 2010 by Samuel Kenyon

This weekend I attended the H+ (transhumanist) Summit at Harvard University (live streamed videos here).

We were invited to blog about it on Scientific Blogging, which added a special tab for H+ and a category for the theme of the conference, which was Rise of the Citizen Scientist.

So I made a new blog there, In the Eye of the Brainstorm, so far with three posts related to transhumanism and/or the conference itself:

Kurzweil’s Phenomenological Consciousness

Cat Usability Testing (Wolfram’s Predictions)

Pirate Evolution

Note that SynapticNulship (this site) is still my main blog for artificial intelligence and interaction design writings.

Note for those interested in presentation skills: The most lively talk was by Seth Lloyd.  Interestingly, he did not use a computer.
