The decision to bring a robot into the workplace involves considerations beyond whether it can perform specific tasks in a safe and cost-effective manner—there is also the question of whether the workplace ecosystem will adapt to the robot’s presence in a positive way. Will employees trust it or fear it? Will it be viewed as an ally or a threat? The best-case scenarios are those in which robots and humans work together, providing a wealth of benefits beyond what either can do alone. Both robot manufacturers and the companies choosing to use them can take steps to maximize the likelihood that this will happen. This is certainly true when it comes to robots joining the security workforce.
The study of Human-Robot Interaction (HRI) has become a hot area for researchers, combining elements of psychology, social sciences, and AI, as well as engineering, computer science, and robotics. The annual HRI global conference, a forum for academics and researchers committed to the discipline, has tripled in size since its inception in 2006. Findings from this research have wide-ranging implications for the industry. For example, at companies like Cobalt, it’s important to understand how a robot’s physical appearance, movement, and other factors affect the way it’s perceived by its human coworkers. How can we design a robot that’s seen as friendly, approachable, and helpful rather than creepy or disconcerting?
A common obstacle that robots face within the workplace is that employees distrust them to make “smart” decisions, especially when it comes to differentiating between “right” and “wrong.” A robot’s decision-making may be fundamentally more logical than most humans’, but it lacks human intuition and emotional intelligence, even as the robot continues to learn through artificial intelligence.
Psychology research¹ has shown that when people assess the character of other people, they are more distrustful of individuals who use a calculating, cost-benefit analysis when making decisions—the same style of decision-making used by computers and robots. By contrast, people are more trusting of decisions that include the use of human “instinct.”
While employees may trust a security robot to detect anomalies within the workspace, they are less likely to trust it to react appropriately when it finds something. For example, employees may wonder: can a robot tell the difference between an employee working late and an intruder? Between a lost visitor and something more sinister? Having a real human seamlessly intervene in these situations to assist in decision-making, using the robot as a conduit, can go far toward allaying these concerns.
Here’s an interesting question: why do we name our robots, but not our computers, copy machines or other office equipment? HRI behavioral scientists would say that this is an indication that we see similarities between robots and ourselves. This dynamic also works in reverse.
Studies show that when robots are given names, employees are more likely to think of them as colleagues and to feel affection for them. This is true even if the robots don’t look anything like us. For example, soldiers who use robotic devices to dismantle bombs become attached to them. They give them names, care for them, and feel sadness when one is destroyed in an operation.
Human resource managers looking to introduce a security robot into the workplace can learn from this. In fact, we bet that the many Cobalt robots out in the field have been christened with some pretty creative names that reflect the cultures where they work.
What would you call your Cobalt robot and why? We’d love to hear from you.