Robotics: The Tech

Universal Robots: the history and workings of robotics
Page 13 of 13

Living

 

The Turing Test

In 1950, the British mathematician Alan Turing proposed a test to answer the question, "Can machines think?" The test goes something like this:

A human interrogator can ask any question of two subjects, who are both in other rooms. One of these subjects is another person, and one is a machine such as a computer; the interrogator must decide which is which. The answers are typewritten, so there are no clues from voice or appearance. What’s more, the subjects don’t have to tell the truth, so the computer can pretend to be human. If the human interrogator can’t tell the difference between the person and the machine, then the answer to the question "Can machines think?" has to be yes.
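The setup above can be sketched as a small "blind judge" loop. Everything here is invented for illustration: the callables (`ask_judge`, `human_reply`, `machine_reply`, `guess`) are hypothetical stand-ins for the interrogator and the two hidden subjects, not part of any real framework.

```python
import random

def imitation_game(ask_judge, human_reply, machine_reply, guess, rounds=3):
    """A minimal sketch of Turing's imitation game.

    The judge exchanges only text with two hidden subjects, labeled
    "A" and "B" at random, and finally guesses which label is the
    machine. Returns True if the judge identifies the machine.
    """
    # Hide the subjects behind arbitrary labels, like the closed rooms.
    subjects = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # randomize so ordering gives no clue
        subjects = {"A": machine_reply, "B": human_reply}

    transcript = []
    for _ in range(rounds):
        q = ask_judge(transcript)                      # any question allowed
        replies = {label: f(q) for label, f in subjects.items()}
        transcript.append((q, replies))                # "typewritten" text only

    machine_label = "A" if subjects["A"] is machine_reply else "B"
    return guess(transcript) == machine_label
```

The point of the sketch is that the judge sees only the transcript: voice, appearance, and even honesty are stripped away, so only the quality of the conversation remains.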

 

Alan Turing

[Photo of Alan Turing]

Can a robot be conscious? Can it be not only intelligent, but aware in the way that we are? So far, no artificially intelligent computer has ever shown such signs of life. However, if robots eventually think like us, detect and express emotions, pursue their own interests (whatever those are programmed to be) and even make copies of themselves, it will be increasingly difficult to draw the line between machines and living things.


 

Isaac Asimov

[Photo of Isaac Asimov]

Asimov’s Three Laws of Robotics

Scientist-turned-writer Isaac Asimov wrote many science fiction tales that featured robots as characters. In Asimov’s stories, the robots were guided by a set of rules, called "The Three Laws of Robotics," which prevented robots from harming people. They are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.

3. A robot must protect its own existence, as long as this does not conflict with the first two laws.
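The precedence among the three laws (first over second, second over third) can be sketched as an ordered check. This is purely illustrative: the `permitted` function and every field name in the `action` dictionary are invented for this sketch.

```python
def permitted(action):
    """Illustrative precedence check for Asimov's Three Laws.

    `action` is a hypothetical dict describing a proposed action;
    missing fields default to False.
    """
    # First Law: never injure a human, by action or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: obey orders, unless obedience would violate the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.get("destroys_self") and not (
        action.get("needed_to_protect_human") or action.get("needed_to_obey_order")
    ):
        return False
    return True
```

Note the ordering: an order that would harm a human may be disobeyed, and self-sacrifice is allowed when it protects a human, exactly because each later law yields to the earlier ones.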

Although Asimov wrote these laws as fiction in the 1940s, before robots existed, they reflect ongoing concerns that some people have about robots. In fact, destructive technologies like "smart" cruise missiles (which can be considered robots) already violate Asimov’s laws.

 

Some see silicon-based life forms as the next step in evolution, replacing carbon-based life forms like us. Talk of robots taking over sounds a little kooky, yet many respectable scientists (Hans Moravec, Ray Kurzweil, Bill Joy) think it likely that robots will play a growing and even dominant role in the future. Imagine: robots become so intelligent and so capable that we come to rely on them for everything. Useless and unnecessary, humans are gradually pushed aside.

Fear of machines wising up and taking control is nothing new. Likewise, there’s no shortage of movies that depict super-intelligent robots turning on their human creators: The Matrix, The Terminator, and 2001: A Space Odyssey are a few. However, if robots do develop consciousness, they may also develop a conscience, and choose to be kind to their human creators. In the meantime, we may want to remember where the "off" button is... just in case.

  ©2005 The Tech Museum of Innovation