
Exploration of Human Psychology and Social Robots at The Tech

  • by Michelle Maranowski on September 26, 2013
    If you have been to The Tech recently, you know that we have opened a new exhibition on the bottom floor of the museum that explores Social Robots. Right now, the exhibition is a design challenge where our visitors can build a robot using input blocks (sensors) and output blocks (actuators) that have been specially designed, fabricated, and patented by The Tech. Later this year, we will add a variety of new exhibits to Social Robots, including programming tables, where visitors can program their robots using a brain block, a dissection of the Pleo dinosaur robot, and a larger-than-life robot, similar to the Furby®, that visitors can control. Both the Pleo and Furby® exhibits discuss input sensors and outputs, adding another dimension to the design challenge.
     
    Sensors, actuators, and control electronics are all parts of robotics that we can wrap our brains around to some degree. But what about the intangible side of robotics? If we let our minds move to the future, where and how do we see robots fitting in? What will make it easier for robots to fit into human society? How are present-day robot designers planning for a robotic future? One thing is for sure: robot designers must take human psychology and behavior into account when designing robots. Why? Would you be willing to have a robot that you felt looked creepy take care of your children or pets? What use would an unintelligent robot car be? Would Iron Man be able to function if JARVIS, his robot butler, couldn't communicate with him? In order to make better robots, designers need to understand how humans think and perceive.
     
    At The Tech we are exploring the psychology of human-robot interaction by addressing three different psychology-based concepts: (1) The Turing Test, (2) The Uncanny Valley, and (3) Facial Expressions.
     
    The Turing Test is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In a journal article from 1950, mathematician Alan Turing devised a test of intelligence in which a human “converses” with two entities — one human and one computer program — over a text-only channel (i.e., a computer keyboard/screen), and then tries to determine which is the human and which is the computer. If, after five minutes of testing, the majority of human testers are unable to determine which is the human and which is the computer, the computer system could claim to have achieved a certain level of intelligence. The Loebner Prize is an annual competition that awards prizes to the machines judged most human-like in conversational intelligence. A variety of chatterbots (chat bots), Cleverbot and Mitsuku, for example, have won the Loebner Prize.
     
    Turing himself was very careful to state that the test measures whether a machine (computer) can imitate human behavior, but the wording of the test has morphed into whether the human judge can detect which entity is human and which is a machine. This is where a bit of controversy comes into play. Some detractors say that the test can fail to measure intelligence in two ways: (1) because some human behavior is unintelligent, and (2) because some intelligent behavior can be inhuman.
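Turing's pass criterion can be made concrete with a toy simulation. The code below is purely illustrative (it is not software from the exhibit, and the judge here has no conversational signal at all): the point is that a machine "passes" when judges identify it no more often than a coin flip would.

```python
import random

def run_trial(rng):
    """One blind trial of a Turing-test-style judgment.

    The machine is secretly assigned label "A" or "B"; a judge who
    cannot tell the entities apart can only guess at random.
    """
    machine_label = rng.choice("AB")  # hidden assignment, unseen by the judge
    guess = rng.choice("AB")          # a clueless judge's guess
    return guess == machine_label

# Pass criterion: over many trials, judges do no better than chance.
rng = random.Random(42)
trials = 10_000
correct = sum(run_trial(rng) for _ in range(trials))
print(f"judge accuracy: {correct / trials:.2f}")  # hovers near 0.50 (chance level)
```

A real competition such as the Loebner Prize replaces the random guess with judges reading actual transcripts; the scoring idea, comparing judge accuracy against chance, is the same.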
     
    As we look toward the future and think about how robots will figure in it, robot designers must start planning now for machine/artificial intelligence.
     
    In 1970, robotics professor Masahiro Mori coined the phrase “uncanny valley” to describe a theory that he developed in the area of human aesthetics. In essence, the hypothesis states that as robots become more human-like in appearance, our emotional response to them grows more positive. That is, until the robot appears just shy of being human; at that point we experience a feeling of creepiness and even revulsion. This feeling is amplified if the robot (or entity) is in motion.

    Figure 1. Graph of the Uncanny Valley.
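The shape of the curve in Figure 1 can be sketched as a toy function. The formula below is invented purely for illustration; only its shape reflects Mori's hypothesis: affinity climbs with human likeness, plunges sharply just short of fully human, then recovers.

```python
import math

def affinity(human_likeness):
    """Toy uncanny-valley curve: human_likeness in [0, 1] -> unitless affinity."""
    trend = human_likeness  # more human-like generally means better liked...
    # ...except for a narrow "valley" centered just short of fully human
    valley = 1.4 * math.exp(-((human_likeness - 0.85) ** 2) / 0.004)
    return trend - valley

for x in (0.0, 0.5, 0.8, 0.85, 0.9, 1.0):
    print(f"likeness {x:.2f} -> affinity {affinity(x):+.3f}")
```

Evaluating the function near 0.85 gives a negative score (the "creepy" dip), while both a clearly mechanical robot and a fully human appearance score positively, matching the curve's rise on either side of the valley.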
     
    The uncanny valley doesn’t just apply to robots but is also seen in motion pictures and video games. As humanoid figures in animated films are rendered more human-like, they fall prey to the uncanny valley. Robot designers, movie animators, and video game designers have to understand human psychology in order to design their products successfully. They need to understand what we humans find “cute” or “creepy”. Examples of cute animated characters include Wall-E (from the Disney-Pixar movie “Wall-E”) and the Incredibles (from the Disney-Pixar movie “The Incredibles”). On the other hand, characters from the movies The Polar Express and The Adventures of Tintin can make viewers feel uncomfortable because of how human-like they look and move.
     
    What makes a robot creepy? What makes a robot cute? Visitors will grapple with these questions at the upcoming Uncanny Valley exhibit and use their experience to inform their robot building activity in the design challenge.
     
    Human interactions consist of verbal and nonverbal communication. Nonverbal communication includes body posture and facial expressions. A smile can indicate approval, excitement, or happiness, while narrowed eyes can signal anger or unhappiness. In some cases, our facial expressions may reveal our true feelings about a particular situation: while you may say that you are feeling fine, the look on your face and your body posture may indicate the opposite. Tracking eye movements is a natural and important part of the communication process. Some common things you may note are whether people are making direct eye contact or averting their gaze, how much they are blinking, or whether their pupils are dilated. Pursed lips may indicate distaste, disapproval, or distrust.
     
    In the late 1960s, Dr. Paul Ekman and Dr. Carroll Izard linked facial expressions to a basic set of emotions. Dr. Ekman showed that facial expressions of the basic emotions are the same across human cultures: a smile is a smile and indicates happiness around the world. Some researchers claim that there is no one-to-one correspondence between facial expressions and emotions, arguing that there is no evidence to support a link between what appears on someone's face and how they feel inside. However, there is agreement that the face, along with the voice, body posture, and hand gestures, forecasts to outside observers what people will do next.
     
    Why is this bit of psychology important to robot designers? For social robots to be successful in human society, they need to be able to communicate in a fashion somewhat similar to the way we humans do. Robot designers need to design robots to have expressions that we instinctively understand. To get our visitors thinking along these lines, the exhibit will lead them through expression-making exercises.
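As a sketch of that design idea, a robot face controller might map Ekman's basic emotions to a handful of facial cues it can actuate. The cue lists below are simplified illustrations I've chosen for the example, not a validated coding scheme and not how any particular exhibit robot works.

```python
# Hypothetical emotion-to-cue lookup for a robot face (illustrative only).
BASIC_EMOTIONS = {
    "happiness": ["smile", "raised cheeks"],
    "sadness":   ["downturned mouth", "drooping eyelids"],
    "anger":     ["narrowed eyes", "lowered brows", "pressed lips"],
    "fear":      ["widened eyes", "raised brows", "open mouth"],
    "surprise":  ["raised brows", "widened eyes", "dropped jaw"],
    "disgust":   ["wrinkled nose", "raised upper lip"],
}

def render_expression(emotion):
    """Return the cues a robot face would actuate for a given emotion."""
    cues = BASIC_EMOTIONS.get(emotion.lower())
    if cues is None:
        raise ValueError(f"unknown emotion: {emotion!r}")
    return cues

print(render_expression("anger"))  # ['narrowed eyes', 'lowered brows', 'pressed lips']
```

Because the cues correspond to expressions humans recognize instinctively, a robot driving its actuators from a table like this can signal its "state" without any words at all.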
     
    We purposely designed the exhibition to link human psychology to social robotics. This exhibition is just the beginning of this exploration. We plan to carry this link through the next two exhibitions opening next year: Human Data and Cyber Security.

    Michelle Maranowski, PhD, is an Exhibit Developer at The Tech Museum of Innovation.