Universal Robots: the history and workings of robotics  
Page 9 of 13

Sensing

 

This is what you see when you head for the doorway… and this is what a robot might see: a 3-D map produced in 1997 by a stereoscopic vision system. Photos courtesy of Hans Moravec, Carnegie Mellon University.

Robots rely on sensors to get information about their surroundings. In general, a sensor measures an aspect of the environment and produces a proportional electric signal. Many of a robot's sensors mimic aspects of our own senses, but not all of them. Robots can also sense things we can't, like magnetic fields or ultrasonic sound waves.

Robotic light sensors come in many different forms (photoresistors, photodiodes, phototransistors), but they all work to roughly the same effect: when light falls on them, they respond by creating or modifying an electric signal. A filter placed in front of a light sensor can create a selective response, so the robot only "sees" a certain color.
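As a rough sketch of how a robot's program might use such a signal, the Python below reduces a raw sensor reading to a light/dark decision. The 10-bit range and the threshold are illustrative assumptions, not details from the exhibit:

```python
def classify_light(adc_reading, threshold=512):
    """Interpret a raw reading from a light-sensor circuit.

    Assumes a typical setup in which more light raises the voltage
    seen by a 10-bit analog-to-digital converter (0-1023). The
    threshold is an arbitrary illustrative value.
    """
    return "light" if adc_reading > threshold else "dark"

# A color filter in front of the sensor would make the same reading
# respond only to one color; this code would be unchanged.
print(classify_light(800))  # → light
print(classify_light(100))  # → dark
```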

Light sensors can also be used for simple navigation, for example, by allowing a robot to follow a white line. Other robots navigate using infrared light (the same invisible light used in your TV remote control). The robot sends out a beam of infrared light, some of which bounces off an obstacle and returns to a light sensor on the robot, telling the robot something is in its path.
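Line following can be sketched as a loop that compares two downward-facing light sensors straddling the white line and steers toward the brighter side. This Python sketch is illustrative only; the sensor scale, base speed, and steering gain are assumed values, not from the exhibit:

```python
def follow_line(left_light, right_light, base=0.5, gain=0.001):
    """Map two light-sensor readings (brighter = more white line
    under that sensor) to a pair of motor speeds in [0, 1]."""
    # If the left sensor sees more of the line, the robot has drifted
    # right: slow the left wheel to steer back left, and vice versa.
    error = left_light - right_light
    left_motor = max(0.0, min(1.0, base - gain * error))
    right_motor = max(0.0, min(1.0, base + gain * error))
    return left_motor, right_motor

print(follow_line(600, 600))  # centered: both wheels at base speed
print(follow_line(800, 400))  # line to the left: left wheel slows
```

In a real robot this function would run many times per second, so small corrections add up to smooth tracking of the line.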

Hans Moravec

Hans Moravec explains some of the difficulties involved in making robots do simple tasks. (Video, 5.5MB)

For more elaborate vision systems, simple light sensors are not enough. Robots like the ones that find and remove imperfect products from a conveyor belt need to be able to resolve complex, changing images, and to do it quickly. In these situations, the image from a camera "eye" must be broken down and analyzed by a computer program.
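A minimal example of "breaking down" a camera image: treat the frame as a grid of brightness values and flag a product when too many pixels are unexpectedly dark (say, a blemish on a bright item). The thresholds and the defect rule here are illustrative stand-ins for a real inspection program:

```python
def fraction_dark(image, threshold=50):
    """image: 2-D list of grayscale values 0-255, standing in for a
    camera frame. Returns the fraction of pixels darker than threshold."""
    pixels = [p for row in image for p in row]
    dark = sum(1 for p in pixels if p < threshold)
    return dark / len(pixels)

def is_defective(image, max_dark_fraction=0.1):
    # Flag the product if too much of the frame is dark.
    return fraction_dark(image) > max_dark_fraction

good = [[200] * 4 for _ in range(4)]                     # uniformly bright
bad = [[200] * 4 for _ in range(2)] + [[10] * 4 for _ in range(2)]
print(is_defective(good), is_defective(bad))  # → False True
```

Real systems do far more (edge detection, shape matching), but they start from the same idea: numbers in, a decision out.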

Robotic vision has proved to be one of the greatest challenges for engineers. The difficulty lies in programming a robot to see what’s important and ignore what isn’t; a robot has trouble interpreting things like glare, lighting changes, and shadows. Also, for a robot to have depth perception, it needs stereoscopic vision like our own. Resolving two slightly different images to make one 3-D image can be a computational nightmare, requiring large amounts of computer memory.
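The stereoscopic matching described above can be sketched at toy scale: slide a small patch from the left image across the right image, pick the horizontal shift (the disparity) with the best match, then convert disparity to depth by triangulation, since nearer points shift more between the two "eyes." The patch size and camera parameters below are illustrative assumptions:

```python
def best_disparity(left_row, right_row, x, patch=1, max_d=5):
    """Find the shift that best matches a patch of the left image row
    in the right image row, by sum of absolute differences (SAD)."""
    def sad(d):
        return sum(abs(left_row[x + i] - right_row[x - d + i])
                   for i in range(-patch, patch + 1))
    candidates = [d for d in range(0, max_d + 1) if x - d - patch >= 0]
    return min(candidates, key=sad)

def depth_from_disparity(disparity, focal_px=500.0, baseline_m=0.1):
    # Triangulation: depth = focal length x eye separation / disparity.
    # focal_px and baseline_m are made-up camera parameters.
    return focal_px * baseline_m / disparity

left_row = [0, 0, 0, 10, 50, 10, 0, 0]   # bright feature at x = 4
right_row = [0, 10, 50, 10, 0, 0, 0, 0]  # same feature shifted 2 pixels
print(best_disparity(left_row, right_row, x=4))  # → 2
print(depth_from_disparity(2))                   # → 25.0
```

Doing this search for every pixel of every frame is exactly why stereo vision demands so much computation and memory.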

  ©2005 The Tech Museum of Innovation