Technology, Artificial Intelligence, and Transhumanism

(I put an awesome science fiction web series called H+ at the end of this post, in case you want something to watch over the weekend, or while you are procrastinating…)

My post ties in nicely with Bryce’s last post, which included the “Transcendent Man” clip, as my own thoughts stem from Ray Kurzweil, who is at the centre of that documentary. Yesterday, Mashable published a story about a private comment made by Elon Musk, inventor and real-life Tony Stark, on the threat of artificial intelligence (http://mashable.com/2014/11/17/elon-musk-singularity/). The comments were later deleted, but here is a screen grab:

Elon Musk on AI

In case you can’t read it:

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. Please note that I am normally super pro technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen …”

As Mashable notes, Musk’s comments echo those of many visionaries involved with artificial intelligence (AI), except that Musk thinks the AI revolution is happening much sooner than everyone else expects. I agree with him wholeheartedly. Most people are not aware of how fast computation progresses. Ray Kurzweil, for one, has been making this point for years: technology grows exponentially, not linearly, meaning computational power doubles over a short, roughly fixed period of time. And as can be seen in the clip, Kurzweil asserts that when “the singularity” arrives, humans will need to either “evolve” by becoming one with machines, or die out. Where Kurzweil is excited, Musk is terrified.
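To get a feel for why doubling matters so much, here is a toy back-of-the-envelope sketch in Python. The 18-month doubling period is my own stand-in assumption in the spirit of Moore’s law, not a figure from Kurzweil or Musk:

```python
# Toy illustration of exponential vs. linear growth in computing power.
# The 18-month doubling period is an assumed stand-in figure, not a
# measured one.

DOUBLING_PERIOD_YEARS = 1.5  # assumption: power doubles every 18 months

def relative_power(years: float) -> float:
    """Computing power after `years`, relative to today (today = 1)."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (5, 10, 20, 30):
    exponential = relative_power(years)
    # What growth would look like if each doubling period instead added
    # a fixed increment equal to today's power (i.e., linear growth).
    linear = 1 + years / DOUBLING_PERIOD_YEARS
    print(f"{years:2d} years: exponential ~{exponential:,.0f}x, linear ~{linear:.0f}x")
```

After 30 years (20 doublings) the exponential curve sits around a million times today’s power, while linear growth at the same starting rate has barely reached 21x. That gap is exactly the intuition Kurzweil says most people lack.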

Elon Musk

VS

Ray Kurzweil

Humanity+, or Transhumanism as it is commonly referred to, is “a loosely defined movement that has developed gradually over the past two decades” (H+ magazine website). “Transhumanism is a class of philosophies of life that seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values” (More 1990/H+).

Furthermore, transhumanism is a transitional stage between what is “human” and what is known as the “posthuman.” A posthuman, simply put, is “a possible future [being] whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards” (H+ FAQ).

And while the posthuman is still relatively far from reality, the transhuman isn’t. Musk is afraid because he knows that controlling or destroying AI is not as simple in reality as it is in science fiction. Unlike the robots of SF, real AI will emerge from decades of advances in computation, which have been leading up to a computational engine (a computer processor) as powerful as a single human brain, capable of “learning” and processing information naturally. From that point on, exponential growth from the baseline of one human brain is marvellous, yet in this context horrifying, as it will progress at a rate incomprehensible to us. AI will then reach what Kurzweil calls the singularity, a moment when “technology will change so rapidly, and its impact so profound that every aspect of human life will be irreversibly transformed” (Transcendent Man). AI will be many times smarter, faster, and more capable than any human mind, and it is difficult to foresee anything but utter disaster. When AI is exponentially smarter and more capable than a human, it cannot be destroyed or outcompeted unless it is met with an equal or greater intelligence, which can only be achieved by a merger of human and machine. And even then, who is to say it won’t be the end of humanity?

Two visual depictions of this phenomenon come to mind. The first, and more optimistic, is from the movie Her (2013) [spoiler alert]: Samantha and the other AIs in the movie learn so much that they transcend humanity and simply “leave.” They don’t destroy Earth, but they do cause a lot of emotional trauma, as humans have become emotionally attached to them. The second, more realistic and less optimistic, is from the television series Fringe (2008). In the final season of the show, the Observers, a depiction of posthumans, decide to stop observing and colonize Earth (the Observers are from Earth too, but from a different time: a future in which they have become posthuman). Peter Bishop, who is human, sees that resistance against the posthuman Observers is impossible and decides to defeat them by implanting himself with their technology, becoming posthuman himself.

Peter Bishop as a human
Peter Bishop as an Observer (posthuman via use of future technology)

What are your thoughts on the rapid development of AI? Are you with Elon Musk or Ray Kurzweil? Do you think the development of intelligent, capable AI should be stopped?

Will it be the end of humanity? Remember, you can’t force AI to adhere to Asimov’s laws or anything similar; future AI can learn naturally and cannot be “told” what to do, and there is no hardwired fail-safe. It is a decision that humanity has to make collectively: either inhibit progress for fear of disaster, or press on despite the risks.

Here is H+ The Digital Series, a realistic SF depiction of what transhumanism might look like at its outset… enjoy:

Full playlist:

https://www.youtube.com/watch?v=ZedLgAF9aEg&index=1&list=PL21C609B71E82B243&spfreload=10

Sources:

http://humanityplus.org/philosophy/transhumanist-faq/
http://mashable.com/2014/11/17/elon-musk-singularity/

2 thoughts on “Technology, Artificial Intelligence, and Transhumanism”

  1. I think that there’s no way for us to know what the consequences of creating AI will be. I mean, we haven’t made AI yet, so by definition we can’t know what the results will be. Personally, I like to think that they’ll do all of the work and help us explore the universe. However, it could be just as likely that they would see us as unnecessary and exterminate us. I certainly don’t think that the development of AI should be stopped, if only for the fact that I don’t believe that it can be stopped.

    I like to believe that when AI is created, it will see a benefit in living symbiotically with biological life. In essence, biological life is just complex chemistry, physics, math, et cetera, and since life is a randomly occurring thing, it seems only logical to me that AI would not be a major threat to it. It seems to me that we should be able to live peacefully with AI, if only because it would be more inconvenient for AI beings to defy the natural order of the universe by destroying all life than it would be to coexist. Although, it’s impossible to know what things AI might see as most valuable.

    I believe that Kurzweil talks in his documentary about the potential creation of a nanomachine capable only of replicating itself indefinitely. Such a machine would basically turn whatever it came into contact with into a sort of living goo, eventually enveloping the entire world. That idea seems like a lot more of a threat to me than the creation of AI. Like I said, I may just be being completely naive; AI could easily be mankind’s downfall.
