Safety, like any other aptitude, must be built and trained into the artificial intelligence that animates robots. No one will tolerate robots that routinely smash into people, endanger passengers in autonomous vehicles, or order products online without their owners’ authorization.

Controlled trial and error is how most robotics, edge computing, and self-driving vehicle solutions will acquire and evolve their AI smarts. As the brains behind autonomous devices, AI can help robots master their assigned tasks so well and perform them so inconspicuously that we never give them a second thought.

Training robotic AI for safe operation is not a pretty process. As a robot searches for the optimal sequence of actions to achieve its intended outcome, it will necessarily take many counterproductive actions before it discovers the optimal path. Using reinforcement learning (RL) as a key AI training approach, robots can discover which automated actions may protect humans and which can kill, sicken, or otherwise endanger them.
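The idea can be made concrete with a toy sketch. The following Python snippet (illustrative only; the corridor, reward values, and hyperparameters are all assumptions, not anything from a real robot) trains a tabular Q-learning agent in a five-cell corridor where one end is a goal and the other is a hazard. Because entering the hazard carries a large negative reward, the learned policy steers away from it:

```python
import random

# Toy corridor: states 0..4. State 4 is the goal, state 0 is a hazard
# (think: a stairwell). Actions: 0 = move left, 1 = move right.
# Safety is encoded in the rewards: goal +1, hazard -10, step cost -0.01.
N_STATES, GOAL, HAZARD = 5, 4, 0

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    if nxt == GOAL:
        return nxt, 1.0, True
    if nxt == HAZARD:
        return nxt, -10.0, True
    return nxt, -0.01, False

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 2, False          # start mid-corridor
        while not done:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max(0, 1, key=lambda x: q[s][x])
            nxt, r, done = step(s, a)
            target = r + (0.0 if done else gamma * max(q[nxt]))
            q[s][a] += alpha * (target - q[s][a])
            s = nxt
    return q

q = train()
# Greedy policy for the interior states 1..3: after training, every
# state prefers action 1 (move right), away from the hazard.
policy = [max(0, 1, key=lambda a: q[s][a]) for s in range(1, N_STATES - 1)]
print(policy)
```

The point of the sketch is the reward design: the agent is never told "avoid the hazard" explicitly; it learns that behavior because unsafe outcomes are penalized heavily during training, which is the essence of the trial-and-error process described above.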

What robots need to learn

Developers must incorporate the following scenarios into their RL procedures before they release their AI-powered robots into the wider world:

Geospatial awareness: Real-world operating environments can be very tricky for general-purpose robots to navigate successfully. The right RL could have helped the AI algorithms in one widely reported security robot learn the range of locomotion challenges in the indoor and outdoor environments it was designed to patrol. Equipping the robot with a built-in video camera and thermal imaging wasn’t enough. No amount of trained AI could salvage it after it had rolled into a public fountain.
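One way geospatial awareness can be enforced, independent of any learned policy, is a hazard-annotated map that the planner consults before moving. This is a minimal sketch under assumed names: the grid size, the `HAZARDS` set (standing in for something like a fountain), and `safe_moves` are all hypothetical:

```python
# Hypothetical occupancy grid: cells the robot must never enter
# (e.g., a fountain) are listed explicitly, because a camera alone
# may not recognize them as dangerous.
HAZARDS = {(3, 4), (3, 5)}
BOUNDS = (10, 10)   # grid is 10 x 10 cells

def safe_moves(pos):
    """Return the neighboring cells the robot is allowed to enter."""
    x, y = pos
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in candidates
            if 0 <= cx < BOUNDS[0] and 0 <= cy < BOUNDS[1]
            and (cx, cy) not in HAZARDS]

# From (3, 3), the move into fountain cell (3, 4) is filtered out.
print(safe_moves((3, 3)))
```

A map-based veto like this complements RL rather than replacing it: the policy proposes, but known no-go zones are filtered out deterministically.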

Collision avoidance: Robots can be a hazard as much as a helper in many real-world environments. This is obvious with autonomous vehicles, but it’s just as relevant for retail, office, residential, and other environments where people might let their guard down. There’s every reason for society to expect that AI-driven safeguards will be built into everyday robots so that toddlers, the disabled, and the rest of us have no need to fear that they’ll crash into us when we least expect it. Collision avoidance, a prime RL challenge, should be a standard, highly accurate algorithm in every robot. Very likely, lawmakers and regulators will demand this in most jurisdictions before long.
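A common engineering pattern for this is a safety "shield" that sits between the learned policy and the actuators, overriding unsafe commands. The sketch below is an assumption-laden illustration (the function names, the 2-second time-to-collision threshold, and the clamping rule are all invented for the example), not any particular robot's implementation:

```python
# A minimal safety shield: before executing the policy's requested speed,
# check the time-to-collision (TTC) against the nearest obstacle and
# clamp the speed if TTC would fall below a threshold.

def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until impact at the current closing speed."""
    if closing_speed_mps <= 0:      # moving apart or standing still
        return float("inf")
    return distance_m / closing_speed_mps

def shield(requested_speed_mps, distance_m, ttc_limit_s=2.0):
    """Pass the requested speed through, or clamp it so TTC stays safe."""
    if time_to_collision(distance_m, requested_speed_mps) >= ttc_limit_s:
        return requested_speed_mps
    # Largest speed that keeps TTC exactly at the limit.
    return distance_m / ttc_limit_s

# Obstacle 2 m away: a 2 m/s request (TTC = 1 s) is clamped to 1 m/s,
# while a 0.5 m/s request (TTC = 4 s) passes through unchanged.
print(shield(2.0, 2.0), shield(0.5, 2.0))
```

The design choice worth noting is that the shield is deterministic and auditable, which matters if, as the paragraph above suggests, regulators eventually require demonstrable collision-avoidance guarantees rather than purely learned behavior.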

Copyright © 2021 IDG Communications, Inc.