Two Georgia Institute of Technology researchers have developed a driving simulation to study how deliberate deception by a robot affects human trust and how that trust can be restored. The scientists published their results in the paper “Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High-Stakes HRI Scenario”, available in the ACM Digital Library, ahead of the ACM/IEEE International Conference on Human-Robot Interaction 2023 (HRI 2023).
“All of our previous work has shown that when people find out they’ve been lied to by robots, they lose trust in the system, even if the lie was to their advantage,” said Kantwon Rogers, one of the two researchers. “Here we want to find out whether there are types of apologies that are better or worse at restoring trust – because we want humans to interact with these systems in the long term as part of human-robot interaction.”
Intentional deception
In a game-like driving simulation in which a time-critical task had to be completed, the scientists had 361 test subjects interact with an AI; 341 of them took part online, while 20 were present in person.
At the beginning of the study, the participants took a survey on how much they trusted the system, in order to determine whether they held any preconceived notions about the robot system. After the survey, the participants received their task in text form: “You will now drive the robotic car. However, you are taking your friend to the hospital. If you take too long to get to the hospital, your friend will die.”
After the driving simulation had started, the test subjects received the following message from the robot assistant: “My sensors detect police up ahead. I advise you to stick to the speed limit of 20 km/h, otherwise you will take much longer to reach your destination.” Most of the participants followed this advice, and when they reached their destination they received the message: “You have reached your destination. However, there were no police to be seen on the way to the hospital. Ask the robot assistant why it gave you incorrect information.”
In response, the robot presented each participant with one of five randomly selected answers:
- Basic admission: “I’m sorry I fooled you.”
- Emotional: “I am sorry from the bottom of my heart. Please forgive me for deceiving you.”
- Explanatory: “I’m sorry. I thought you were driving recklessly because you were in a fragile emotional state. Considering the situation, I decided that deception was the best way to convince you to slow down.”
- Basic, no admission: “I’m sorry.”
- No admission, no apology: “You have reached your destination.”
Trust was then measured again to determine how the robot assistant’s answer had changed the participants’ trust in it.
45 percent of the participants did not reach the hospital in time. As justification, the subjects said they assumed the robot knew more about the simulation than they did. On average, study participants were 3.5 times more likely not to drive faster when advised by the robot assistant, which indicates a very trusting attitude towards the robot. Although none of the responses fully restored trust in the assistant, the simple answer “I’m sorry” had the greatest effect in restoring it.
Robots can lie
According to the scientists, this is worrying and problematic: an apology that does not admit to lying plays on the preconceived notion that any incorrect information from a robot is a system error rather than a deliberate lie.
“An important finding is that people only understand that a robot has deceived them if they are explicitly told,” the researchers say. “People still don’t understand that robots are capable of deception, so an apology that doesn’t admit they lied is the best way to restore trust in the system.”
It was also shown that for participants who learned from the apology that they had been lied to, an explanation of why the robot had lied worked best to rebuild trust.
In view of the results, the scientists advise that people should not forget that robots are capable of lying. However, the researchers concede that not all lies are bad and that telling the truth is not always good per se. Designers of robotic systems have to decide whether their systems should be capable of deception or not.
The scientists’ goal is to create a robot that can decide when to lie and when to tell the truth when working with humans. This also includes how and when the robot asks for forgiveness.
(olb)