NEW YORK — For humans, it’s easy to predict what our friends and loved ones will do after we’ve watched them do it countless times. Robots, by contrast, think in more black-and-white terms and don’t adapt well to change. Now, however, researchers at Columbia University say they’ve taught a robot to show a glimmer of empathy for a fellow automaton.
Study authors say their robot was able to learn and predict the future actions of another robot after watching that machine struggle with an obstacle test several times. The study notes this is a skill that makes it easy for people to live and work together in the real world. Unfortunately, robots have been notoriously incapable of reproducing this form of social communication.
A team from Columbia Engineering’s Creative Machines Lab set out to endow their machines with the ability to both understand and anticipate the plans of other robots through visual learning alone.
Empathy and obstacles
Researchers built a small robot and placed it in a three-foot by two-foot playpen. This robot was programmed to look for and move toward a green circle inside the arena. To make the task more difficult, study authors also placed a red box in the playpen that blocked the robot’s view. When a green circle was hidden behind the box, the robot would either move to a different circle or not move at all.
While this was going on, a second robot was watching its partner’s movements in the arena. After watching for two hours, the observing robot began to figure out where its partner would move based on the location of the green circles. Eventually, researchers say, the observer could predict the playpen robot’s path 98 out of 100 times.
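The observe-then-predict idea can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the authors’ actual system (which learned from raw video): a simple “actor” robot heads for a goal circle unless a box hides it, and an “observer” tallies what the actor did in each scene it has watched, then predicts the actor’s behavior in new trials.

```python
import random

# Toy stand-in for the study's setup (names and values here are
# illustrative assumptions): the actor moves toward a goal circle
# unless the red box occludes it; the observer learns to predict the
# actor's choice purely from repeated watching.

ARENA = [(x, y) for x in range(6) for y in range(4)]  # coarse 3x2-ft grid
BOX = (3, 1)                                          # occluding red box

def actor_policy(goal):
    """Actor moves to the goal unless the box hides it, then it stays put."""
    return "stay" if goal == BOX else f"move_to_{goal}"

# Observation phase: the observer tallies what the actor did per scene.
random.seed(0)
memory = {}
for _ in range(200):  # stand-in for the two hours of watching
    goal = random.choice(ARENA)
    action = actor_policy(goal)
    memory.setdefault(goal, {}).setdefault(action, 0)
    memory[goal][action] += 1

def observer_predict(goal):
    """Predict the actor's action as the one seen most often for this scene."""
    seen = memory.get(goal)
    if not seen:
        return "stay"  # fall back when the scene is unfamiliar
    return max(seen, key=seen.get)

# Evaluation phase: fraction of fresh trials predicted correctly.
trials = [random.choice(ARENA) for _ in range(100)]
correct = sum(observer_predict(g) == actor_policy(g) for g in trials)
print(f"accuracy: {correct}/100")
```

Because this toy actor is deterministic and the arena is tiny, the observer’s accuracy approaches 100 percent after enough watching; the real study’s harder problem was doing the same thing from camera images rather than symbolic scene descriptions.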
“Our initial results are very exciting,” says lead author Boyuan Chen in a university release. “Our findings begin to demonstrate how robots can see the world from another robot’s perspective. The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy.”
Columbia researchers add that it was no surprise their observer robot eventually learned what the playpen bot was doing. What did surprise the team was how accurate the observer was despite viewing the playpen droid’s movements for only a few seconds.
The study admits that while this behavior shows hints of empathy, the robot’s actions are much simpler than those humans display. Even so, the team believes this may be a first step toward endowing robots with “Theory of Mind” (ToM).
ToM describes the stage, emerging in children around the age of three, when they begin to realize that others around them have goals and motivations different from their own. It plays out in games like hide-and-seek and in more complex behaviors like learning to lie. ToM is also a factor in social behaviors such as cooperation, deception, and empathy.
Will robots catch up to humans in manipulative behavior?
The study finds humans still have the edge on robots when it comes to predicting behavior and feeling empathy. The researchers note, however, that their robots can now observe a peer’s troubles and respond to them. “We humans also think visually sometimes. We frequently imagine the future in our mind’s eye, not in words,” Mechanical Engineering Professor Hod Lipson explains.
Lipson adds that this raises many ethical questions about how much of this behavior robots should really learn. While it could make robots more adaptive and useful, it could also make them capable of manipulating humans.
“We recognize that robots aren’t going to remain passive instruction-following machines for long,” Lipson concludes. “Like other forms of advanced AI, we hope that policymakers can help keep this kind of technology in check, so that we can all benefit.”
The study appears in Scientific Reports.