Eliezer S. Yudkowsky (Singularity Institute for Artificial Intelligence)

The Singularity, a concept introduced by Vernor Vinge a few decades ago, refers to the moment when artificial intelligence surpasses human intelligence.
Opinions differ widely on this concept, on whether such a Singularity is possible or will ever occur. But taken as a hypothesis, the idea of the Singularity is powerful and leads to a wide range of thoughts, possibilities, and consequences.
The Singularity Institute for Artificial Intelligence is a non-profit corporation whose aim is not only to explain the possible impact of the Singularity, but also to design an AI that could bring us to the Singularity.
While working to explain intelligence and its different aspects, the Institute is actively designing an AI, and more specifically a "Friendly AI" (guidelines for such an AI were released a year ago), which would be "human-benefiting and non-human-harming". "The charitable purpose of the Singularity Institute for Artificial Intelligence is to create a better world through the agency of self-enhancing, and eventually superintelligent, friendly Artificial Intelligence."
The Institute is chaired by Brian Atkins, while Eliezer S. Yudkowsky is its Director and Research Fellow, and a leading figure in the Singularitarian community.

RobotsLife.com: Let's start with the most classic - yet inevitable - question: in your view, when will the Singularity occur?

    Eliezer S. Yudkowsky: A few years back I would have said 2005 to 2020. I got this estimate by taking my real guess at the Singularity, which was around 2008 to 2015, and moving the dates outward until it didn't seem very likely that the Singularity would occur before then or after then. Later I ran across an interesting piece of research which showed that this is a very common and very bad way to make estimates - typically around thirty percent of the actual events will fall outside of people's 98% confidence intervals. So I read that and laughed and today I just say "The Singularity will happen whenever we go out and make it happen."
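
A minimal sketch of the calibration failure mentioned above, assuming (purely for illustration) that a forecaster's nominal 98% interval is only about half as wide as a well-calibrated one - under that assumption, roughly a quarter of true values fall outside the stated interval, the same order of magnitude as the ~30% figure cited:

```python
# Illustrative sketch only (not from the interview). For a standard normal
# quantity, a well-calibrated 98% interval is roughly +/-2.33; here the
# overconfident forecaster reports an interval only about half that wide.
import random

random.seed(0)
TRIALS = 100_000
misses = 0
for _ in range(TRIALS):
    truth = random.gauss(0.0, 1.0)    # the quantity being estimated
    if not (-1.16 <= truth <= 1.16):  # the overconfident "98%" interval
        misses += 1

print(f"Nominal miss rate: 2%, actual miss rate: {misses / TRIALS:.0%}")
# Prints an actual miss rate of roughly 25% - intervals that are too
# narrow miss far more often than their stated confidence suggests.
```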


RobotsLife.com: There is a strong debate in the scientific community about the Turing Test. Some say that a program able to fool human judges during a chat session would not prove its intelligence at all. Others consider that the Test is not only relevant, but is also the only means we have to decide whether or not a program can be called "intelligent".
What do you think of the Turing Test, and in your view, what would be the definitive proof that a real AI has been achieved?


    Eliezer S. Yudkowsky: I think that for an AI to successfully pass the Turing Test, it would have to be far more intelligent than a human, because humans and AIs would have very different abilities and cognitive architectures. If an AI passed the Turing Test I would say that it was not only human but superhuman. For that matter, I'd say the same thing about a human who could pose as an AI to AIs.

    "Human equivalence" is hard to define because humans are such strange creatures - we're not good examples of minds-in-general. We'll probably start with AI that can carry on a coherent conversation with people, so that the humans definitely recognize a mind on the other end. We'll go from there to AI that is smarter than humans in some ways and dumber than others. Beyond that lies AI that is definitely smarter-than-human in all ways. Probably an AI like that could pose as a human, if it wanted to.


RobotsLife.com: What about nanotechnology? Some people think it is essential that we achieve AI first in order to make molecular nanotechnology a reality, given the huge complexity of building molecular machines. Others believe that nanotechnology is the only way to achieve AI. What do you think?

    Eliezer S. Yudkowsky: I don't see it in terms of enabling technologies, but challenges to confront. One challenge is smarter-than-human intelligence. One challenge is extraordinarily powerful, self-replicating physical technologies such as molecular nanotechnology. If we confront the challenge of transhumanity successfully, it will help us create and deal with extraordinary physical technologies. If we manage to survive the development of nanotechnology, we'd still have all the issues of the Singularity left to deal with. So I think that if we have the choice, we should deal with Artificial Intelligence first - a success there, even a partial success, would help us deal with nanotech. AI is the one technology that can have a conscience.


RobotsLife.com: Personally, I'm ready to buy the Singularity concept. But I have real difficulty grasping "Friendly AI". The goal of building an AI that will be friendly to humans is praiseworthy, but isn't it utopian? As we can see today, people hold different views of the world, depending on their environment, their political or religious beliefs, or the communities they belong to. How can we imagine creating an AI that will be friendly to all humans as a whole? To take a simple - or even simplistic - example, do you really think that if Al-Qaeda achieved a "Friendly AI", it would be friendly to the American people?

    Eliezer S. Yudkowsky: Humans will argue over anything, from the age of the Earth to whether the Sun lies at the center of the solar system. Put ten humans in a room and you will get twenty opinions, two tribes and a war. That's how we evolved. Passing on this characteristic to Artificial Intelligences would be child abuse.

    How Al Qaeda could construct an AI that would transcend Al Qaeda's morals is a very serious question. It was only a couple of generations ago in America that blacks rode at the back of the bus. What moral flaws do we still have that we don't know about? The real problem with Al Qaeda building an AI is not just their specific moral faults, but that they lack the humility to assume they have faults. You have to figure out what strategy Al Qaeda could follow so that an AI created by Al Qaeda would still turn out okay, then follow that strategy yourself.

    This is a very complex issue, and I can't do it justice here. Obviously Asimov's Laws don't even begin to cover it. You have to wrap up all the moral and ethical considerations involved in creating an AI and give that to the AI. You have to pass along the problem as well as the solution, because you can't rely on the programmers getting it right the first time. You have to do it not knowing fully what the problem is. And you have to specify it using semantics which will work in an AI that has full access to its own source code. Don't worry too much over how difficult this sounds; AI itself is a lot harder.


RobotsLife.com: There is another way to put it. Takanori Shibata, a famous researcher involved in robotic therapy and robotic pets, explains that forthcoming robots are not merely going to invalidate Asimov's laws, but to deny them outright. He foresees robots that:
1) would protect themselves.
2) would not obey human beings.
3) would injure human beings.
That's because the "will", or at least the "autonomy of thinking", of robots is an essential part of their "intelligence". I know it sounds like HAL in 2001, but how can an AI stay "friendly" if someone is attempting to destroy it, for instance?


    Eliezer S. Yudkowsky: Gandhi managed, so there's an existence proof. An AI is not a machine because it runs on a computer any more than you are an amoeba because you are made of cells. An AI is a mind-in-general, and we know that at least some minds-in-general can be altruists. The human species, with all its evolutionary heritage of fighting and backstabbing, has produced self-willed humans who were altruistic, not because they were coerced into it, but because that's who they wanted to be. Selfishness is only innate to evolved minds, and even there it is often overridden. Of course humans have also evolved innate support for altruism - we need to pass on our light without passing on our darkness.


RobotsLife.com: Let's assume that a real AI will exist. In your view, what will it be used for? Do you envision a future where, say, presidents will have "AI consultants" giving advice to help manage a situation?

    Eliezer S. Yudkowsky: You can't use a real AI; an AI isn't a tool. A toaster doesn't know that its purpose is to make toast, and will as readily burn bread or even set fire to your house. An airplane doesn't know that it will kill people if it crashes into a skyscraper; it just goes where the pilot directs. There is no conscience without awareness, and lack of awareness is what defines a tool. A tool conforms to the immediate subgoals of the user. A mind makes long-term plans in pursuit of real-world goals. You don't build a real AI and use it for benevolent purposes; you build a benevolent AI. AI could mean the end of technology as we know it - the end of a split between tool and tool-user that has existed since before the human species.

    But in the long run, the implications of transhuman AI are so much more enormous that the implications of early AI seem almost beside the point. Once you get to the point where smarter-than-human intelligences exist, your model of the future breaks down; to understand what happens beyond that point, you'd have to be smarter-than-human yourself. One thing seems certain: a transhuman AI is not a piece, but a player. Given the last twenty years of bad television, it is absolutely critical to emphasize that humans and AIs are not necessarily opposed; we may be playing on the same side. And we desperately need some smarter players on this side. So far we've been creating more and more powerful technology without getting any smarter, but that doesn't mean it's a good idea or that we can get away with it forever.


RobotsLife.com: On a more theoretical note, what will the consequences of the Singularity be for religion? In your view, if we are able to create something more intelligent than ourselves, does that deny the existence of God? And how can we resist the temptation to "feed" future AIs with human beliefs?

    Eliezer S. Yudkowsky: I certainly don't believe that the ability to construct something more intelligent than us denies the existence of God. Why would it? Would an AI's ability to construct a smarter AI deny the existence of humans? I do think that the world around us is not what we would expect to see if an omnipotent benevolent entity were improving it - but that's just me, and has nothing to do with AI.

    This is something about which an AI can draw its own conclusions - if my beliefs or your beliefs are correct, the AI should arrive at the same conclusion on its own. That's what I think the professional ethics should be: give the AI the ability to make up its own mind, and the ability to detect and correct biases accidentally introduced by the programmers. Then get out of the way.


RobotsLife.com: Finally, do you think that AIs will remain independent systems, or, as Rodney Brooks envisions it, will they merge with humans to create what could be called a new species?

    Eliezer S. Yudkowsky: I think that once smarter-than-human intelligence comes into existence, our standard of living and quality of life will undergo an improvement so enormous that we can't even imagine it. Humans will become... whatever we want to become. Maybe recursively self-improving AIs will get there first. Maybe they'll pause shortly after setting out and create technology that we can use to catch up.

    I think the important thing is finding out what humans turn into if they live long enough to grow up. (Growing up is a very different thing from growing old; growing old means losing neurons as you age...) I don't think humans and machines will merge, or even humans and AIs; but I do think we'll both grow up. Will the adult form be the same for both species? How on Earth would I know?


Interview by Cyril Fievet, July 4, 2002

Related websites
  • Singularity Institute for Artificial Intelligence

