Don’t trust robots blindly in emergencies

In an experiment conducted recently at Georgia Tech, robotics engineers discovered something quite surprising: people blindly trusted a robot to lead them out of a burning building, even when the robot had previously led them in circles or had arrived only minutes before the emergency.

In an interview with The Christian Science Monitor, Paul Robinette, the Georgia Tech research engineer who led the study, said the team thought a few people might trust the robot as a guide, but did not expect that all of the study participants would follow it.

The experiment is one of a growing number of studies of human-robot relationships, and it raises significant questions about how much trust people should place in computers, especially now that self-driving cars and autonomous weapons systems are edging closer to reality.

The authors of the new paper wrote, "This overtrust gives preliminary evidence that robots interacting with humans in dangerous situations must either work perfectly at all times and in all situations, or clearly indicate when they are malfunctioning."

The paper will be presented at the 2016 ACM/IEEE International Conference on Human-Robot Interaction in Christchurch, New Zealand, on March 9.

For the study, Georgia Tech Research Institute engineers recruited 42 participants, most of them college-age, and told them that they would follow a robot into a conference room, where they would read an article and then be tested on its contents. The research team also made clear that the robot's ability to guide people to a room was being tested.

The researchers found it concerning that participants were apparently willing to trust the robot's stated purpose even though they had already seen it make mistakes in a related task.