Hyping Nonsense: What Happens When Artificial Intelligence Turns On Us?

Posted in culture, transhumanism on January 23rd, 2014 by Samuel Kenyon

The user(s) behind the G+ account Singularity 2045 made an appropriately skeptical post today about the latest Machines-versus-Humans “prediction,” specifically an article, “What Happens When Artificial Intelligence Turns On Us,” about a new book by James Barrat.

As S2045 says:

Don’t believe the hype. It is utter nonsense to think AI or robots would ever turn on humans. It is a good idea to explore in novels, films, or knee-jerk doomsday philosophizing because disaster themes sell well. Thankfully the fiction or speculation will never translate to reality because it is based upon a failure to recognize how technology erodes scarcity. Scarcity is the root of all conflict.

Smithsonian even includes a quote by the equally clueless Eliezer Yudkowsky:

In the longer term, as experts in my book argue, A.I. approaching human-level intelligence won’t be easily controlled; unfortunately, super-intelligence doesn’t imply benevolence. As A.I. theorist Eliezer Yudkowsky of MIRI [the Machine Intelligence Research Institute] puts it, “The A.I. does not love you, nor does it hate you, but you are made of atoms it can use for something else.” If ethics can’t be built into a machine, then we’ll be creating super-intelligent psychopaths, creatures without moral compasses, and we won’t be their masters for long.

In the G+ comments you can see some arguments about the evidence for or against the prediction. I would like to add a couple of arguments in support of Singularity 2045’s conclusion (without necessarily endorsing his specific arguments):

  1. Despite “future shock” from accelerating change in certain avenues (before Kurzweil and Vinge there was Toffler), most of these worries about machines-vs-humans battles are fictional because they assume a discrete transition point: before the machines appeared, and after. The only way that could happen is if there were a massive invasion of intelligent robots from another planet. In real life, things happen gradually, with transitions and various arbitrary diversions and fads (e.g., because of politics), despite any accelerating change.
  2. We have examples of humans living in partial cooperation and simultaneous partial conflict with other species. Insects outnumber us. Millions of cats and dogs live in human homes and get better treatment than many of the world’s poor and homeless. Meanwhile, crows and parrots are highly intelligent animals that often live in symbiosis with humans…except when they become menaces.

If we’re going to map fiction to reality, Michael Crichton techno-thrillers are a bit closer to real technological disasters, which are specific local incidents resulting from the right mixture of human errors and coincidence (as happens in real life, for instance with nuclear reactor disasters). And sometimes those errors are far apart at first, such as somebody designing a control panel badly, which assists in a bad decision by an operator ten years later during an emergency.

And of course I’ve already talked about the Us-versus-Them dichotomy and the role of interfaces in human-robot technology in my paper “Would You Still Love Me If I Was A Robot?”

Addendum

I doubt we will have anything as clear-cut as an us-vs-them new species. And if we maintain civilization (e.g., not the anti-gay, anti-atheist, witch-hunting segments), then new variations would not be segregated or given fewer rights, and vice versa: they would not segregate us or remove our human rights.

As far as I know, there is no such thing as a natural species on Earth that “peacefully coexists.” This may be the nature of the system, and it’s certainly easy to see in the evolutionary arms races constantly happening. Anyway, my point is that any attempt to appeal to nature or the mythical peaceful caveman is not the right direction. The fact that humans can even imagine never-ending peace and utopia seems to indicate that we have started to surpass nature’s “cold equations.”


On “Humanoid robots as ‘The Cultural Other’: are we able to love our creations?”

Posted in transhumanism on September 7th, 2013 by Samuel Kenyon

I just noticed a recently published Springer article titled “Humanoid robots as ‘The Cultural Other’: are we able to love our creations?” by Min-Sun Kim and Eun-Joo Kim [1], which cites my own article “Would You Still Love Me If I Was A Robot?” [2].

At the moment I do not have access to the full article, but as you can see, the first two pages are available to anyone.

Initially, what’s unnerving about this publication is not the subject itself, but weirdnesses like:

It is either aliens or robots, which will get us!

You can see this quote under the Introduction heading. It is funny and appropriate, yet bizarre because it’s attributed to “American expression”. Obviously Americans are always saying that. Except they don’t. Ever. In fact, I’m not sure if anybody has ever said that exact phrase. Try googling for it.

The other odd thing about this potentially enthralling article is the abstract itself. There are some striking similarities to the abstract of my aforementioned paper.

From my abstract:

The likely scenario is the latter, which is compatible with an optimistic posthuman world.

From their abstract:

The likely and preferable scenario is the last one, which is compatible with an optimistic posthuman world in our evolutionary future.

Hmmmmmm…

Their abstract indicates a different theme than mine, perhaps inspired by my title (“Would You Still Love Me If I Was A Robot?”):

We imagine whether humans will meet the challenge of loving all living and non-living beings (including mechanical entities) might be the key to the co-evolution of both species and the ultimate happiness.

My theme, on the other hand, was that an interface point of view is critical for understanding and designing the future artificial-biological spectrum of humans, cyborgs, and robots.

I hope that their paper was in fact original in theme, and purposely so, as opposed to a misunderstanding of my article.

Update

I figured out how to easily download the entire article from SpringerLink as PNG files. It appears that the article is in fact different in theme from mine, but it breaks basic academic-integrity rules about how to properly quote sources.

For instance, they use this distinctive sentence verbatim from my own article, and cite me, but don’t present it as a quote:

it appears that the notion of evil AI—which is always accompanied by murderous robots—has been filtered into the collective mindset, regurgitated and re-swallowed several times (perhaps more so in certain countries like the United States than in others).

As another example, my unique sentence:

Somehow the tribal notion of Us-Versus-Them co-exists with the contradictory cultural attraction to robots.

is ripped off verbatim without even a citation, let alone quotation formatting.

References

[1] M.-S. Kim and E.-J. Kim, “Humanoid robots as ‘The Cultural Other’: are we able to love our creations?,” AI & Soc, vol. 28, no. 3, pp. 309–318, Aug. 2013.
[2] S. Kenyon, “Would You Still Love Me If I Was A Robot?,” Journal of Evolution and Technology, vol. 19, no. 1, pp. 17–27, Sept. 2008.


Is Humanism False?

Posted in culture, transhumanism on January 11th, 2013 by Samuel Kenyon

Ryan Norbauer is pretty sure that:

Not only are all religions manifestly false, but so too are all the secular narratives (humanism, positivism, liberalism, libertarianism) that, like religions, attempt to craft a system of positive values out of the epistemologically questionable notion that something can be transcendently and meaningfully true merely because it would be nice if that were the case. Reasoning by appeal to platitude or an implausible alternate-universe utopia is not reasoning at all. These facts may not delight us overmuch; they are still true.

Of course I agree with the religious part of that statement. Yet he also kills off humanism. I’m certainly not a gung-ho replacement-religion humanist like Greg Epstein, but perhaps whatever humanism appeals to is better for society as a whole than the alternatives, even if an individual need not believe in any narratives.

And I’m not sure if humanism is a narrative. Of course, I’m not really a scholar in humanism—my Renaissance Man development is at the early stage of Renaissance Boy. I.e., I don’t go around claiming to be a polymath, but I claim to strive to be a polymath.

Certainly transhumanism is a narrative of the future—really several stories. A lot of transhumanists convert science fiction into prophecy and follow it religiously, thus reducing it to Norbauer’s description. Should we instead look to narratives of the past?


The Seed and the Flower

Posted in artificial intelligence, robotics, transhumanism on December 30th, 2011 by Samuel Kenyon

Right now I’m reading an architecture book from the 1970s called The Timeless Way of Building. So far it has to do with theories of why some towns, buildings, and other things seem more “alive” than others, and how to achieve this quality—the “quality without a name”.

This of course goes far beyond architecture; indeed, the book was brought to my attention not by an architect but by people in the UX (user experience) design community. Anyway, this blog post only covers a couple of pages of the book.

The author, Christopher Alexander, says that we have come to think of buildings, towns, and works of art as “creations.” And that “creation” is thought of as a monumental design task, “brought to birth, suddenly, in a single act, whose inner workings cannot be explained, whose substance relies ultimately on the ego of the creator.”

I would interject that the creator might understand the inner workings, but even then, for a complicated project attempted with this mindset, the end result would probably not be completely understandable even by the creator. More on that in a minute…

As Alexander writes:

The quality without a name cannot be made like this.

Imagine, by contrast, a system of simple rules, not complicated, patiently applied, until they gradually form a thing. The thing may be formed gradually and built all at once, or built gradually over time—but it is formed, essentially, by a process no more complicated than the process by which the Samoans shape their canoe.

And if you’re thinking that this sounds very much like how biology works, then you have predicted the next key statement on the same page:

The same thing, exactly, is true of a living organism.

An organism cannot be made. It cannot be conceived, by a willful act of creation, and then built, according to the blueprint of the creator. It is far too complex, far too subtle, to be born from a bolt of lightning in the creator’s mind. It has a thousand billion cells, each one adapted perfectly to its conditions—and this can only happen because the organism is not “made” but generated by a process which allows the gradual adaptation of these cells to happen hour by hour….

And Alexander claims that there is no other way. Of course, as a transhumanist, a roboticist, and an occasional cognitive architect (oh, maybe there is architecture here after all!), I want to be able to create and modify life forms. I want to make artificial organisms, and interfaces between the organic and the non-organic.
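
Alexander’s “generated, not made” idea maps neatly onto generative algorithms. As a toy illustration (my own sketch, nothing from the book), here is Lindenmayer’s classic algae L-system in Python: two rewrite rules, patiently applied generation after generation, grow a string whose overall structure nobody designed in a single act.

```python
# Toy L-system (Lindenmayer's algae model): a structure "grown" by
# repeatedly applying simple local rewrite rules, never designed all at once.

RULES = {
    "A": "AB",  # an 'A' cell grows and buds off a 'B'
    "B": "A",   # a 'B' cell matures into an 'A'
}

def grow(axiom: str, generations: int) -> str:
    """Rewrite every symbol in parallel, one generation at a time."""
    state = axiom
    for _ in range(generations):
        state = "".join(RULES.get(symbol, symbol) for symbol in state)
    return state

if __name__ == "__main__":
    for n in range(7):
        print(f"gen {n}: {grow('A', n)}")
```

Each step is trivial, yet the string lengths trace the Fibonacci sequence, and the pattern that emerges was specified nowhere up front, which is roughly Alexander’s point about organisms and towns.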
