Context-Aware Safety: The New Standard for Unlocking Agile Robotics in Logistics

While these next-generation robots unlock massive gains in efficiency and productivity across warehouses, trucks, and ports, they also pose unprecedented challenges for safety and risk mitigation that traditional protocols simply cannot handle.

Image: FORT Robotics remote control in warehouse environments. (Credit: FORT Robotics)

Driven by technological innovation, advanced robotics, and the growing role of artificial intelligence (AI), the supply chain industry's safeguarding focus is undergoing a profound shift. The challenge is no longer securing established, fixed-path mobile automation, but addressing the new risks introduced by complex, agile, and integrated systems.

With the introduction of more versatile robotics, the industry is shifting toward adaptive, multi-mission systems and greater human-robot interaction. Autonomous mobile robots (AMRs) are being widely tested and deployed, and interest in humanoids is accelerating. An important trend sits between these two extremes: not a fixed installation, yet not a full humanoid either, but an agile manipulator on a mobile base. A key enabling technology for this development is the ability to apply safety measures to these complex systems and ensure frictionless collaboration between humans and robots.

In addition, while the integration of AI and machine learning into these systems presents new opportunities, it also introduces multifaceted safeguarding challenges that traditional testing methods cannot address.


The role of AI in safeguarding

Traditionally, AI has had no place in any safety system. By its very nature, AI is largely or completely non-deterministic, which makes it impossible to apply existing functional safety analysis techniques.

Robotics developers understand that safeguarding begins at the development stage. AI presents a unique challenge in that the quality of a model depends heavily on the quality of the training data used. What is the integrity of that data? Errors in data sets, such as poorly labeled samples or incorrect time stamps, can lead to models that create a poor safety scenario and indirectly cause harm. Moreover, is the data secure, and are the insights and information gathered from a warehouse floor protected? Cybersecurity is a growing challenge in IP safeguarding. Data is now stored in many different places, including the cloud but also the robot itself. Physically, that robot may be in a logistics customer’s possession while holding a great deal of valuable information, so robotics developers need secure protection systems in place.
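As a concrete illustration of that kind of data audit, the Python sketch below scans a labeled sensor log for labels outside an agreed taxonomy and for timestamps that run backwards. The record format, field names, and label set here are hypothetical, not taken from any particular training pipeline.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical record format; real training pipelines will differ.
@dataclass
class SensorFrame:
    timestamp: float   # seconds since epoch
    label: str         # e.g. "person", "pallet", "forklift"

# Illustrative label taxonomy, agreed on before annotation begins.
ALLOWED_LABELS = {"person", "pallet", "forklift", "background"}

def audit_frames(frames: List[SensorFrame]) -> List[str]:
    """Return a list of integrity issues found in a labeled data set."""
    issues = []
    prev_ts = None
    for i, frame in enumerate(frames):
        # Labels outside the taxonomy point to possible mislabeling.
        if frame.label not in ALLOWED_LABELS:
            issues.append(f"frame {i}: unexpected label '{frame.label}'")
        # Timestamps that go backwards point to clock or logging errors.
        if prev_ts is not None and frame.timestamp < prev_ts:
            issues.append(f"frame {i}: timestamp earlier than previous frame")
        prev_ts = frame.timestamp
    return issues
```

Checks like these catch only the most mechanical defects; they complement, rather than replace, a review of how representative the data set actually is.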

However, the biggest challenge is that validation of AI- and machine learning-driven behavior remains extremely difficult. Proving with traditional testing methodologies that a robot using AI will meet existing safety standards is largely impossible. It is impossible to “test your way to safety” when the combination of the robot’s behavior and environmental conditions is effectively unbounded. Simulation is a powerful tool that can be leveraged both to speed this process and to test scenarios and edge cases that would be impractical to create in the real world.
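The sketch below gives a toy picture of how simulation can sweep combinations of scenario parameters that would be impractical to stage physically. The parameters, values, and `simulate` stub are invented for illustration; a real setup would launch an actual simulator and score safety metrics such as stopping margin and minimum separation distance.

```python
import itertools

# Illustrative scenario parameters to sweep; real test plans will differ.
PEDESTRIAN_COUNTS = [0, 1, 3, 8]
ROBOT_SPEEDS_MPS = [0.5, 1.0, 1.5, 2.0]
LIGHTING_CONDITIONS = ["bright", "dim", "glare"]

def simulate(pedestrians: int, speed_mps: float, lighting: str) -> bool:
    """Placeholder for a simulator run; returns True if safety margins were met."""
    # A real implementation would run the robot's perception and control
    # stack in simulation and evaluate the recorded safety metrics.
    return not (lighting == "glare" and speed_mps >= 2.0 and pedestrians >= 3)

# Sweep the full grid and collect the combinations that need review.
failures = [
    combo
    for combo in itertools.product(PEDESTRIAN_COUNTS, ROBOT_SPEEDS_MPS, LIGHTING_CONDITIONS)
    if not simulate(*combo)
]
print(f"{len(failures)} scenario(s) flagged for review: {failures}")
```

Even a small grid like this produces dozens of runs; in practice the sweep is randomized or guided toward suspected edge cases rather than enumerated exhaustively.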

The goal for AI is to make validation of behavior smarter and easier and to empower solutions with context-aware safety: the ability of robots to understand their environment and respond more adaptively to the situation. So far, robotics and automation solutions have been built and deployed for a single use case whose safety requirements were mapped out in advance. AI enables more flexible robotics solutions that can take on multiple tasks, which makes safeguarding extremely challenging without awareness of the context within which the robot is currently operating. Awareness of their surroundings in significant, real-time detail lets robots maintain safety even when use cases are more ambiguous during design and commissioning.
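In its simplest form, context-aware safety can be pictured as a mapping from perceived context to behavioral limits. The zones, thresholds, and speed caps in the sketch below are invented for illustration and are not drawn from any standard or product.

```python
def speed_limit_mps(zone: str, humans_within_3m: int) -> float:
    """Illustrative mapping from perceived context to a speed cap."""
    if humans_within_3m > 0:
        return 0.3   # people nearby: slow to a crawl regardless of zone
    if zone == "pedestrian_aisle":
        return 0.8   # shared aisle: move cautiously even when it looks clear
    if zone == "open_floor":
        return 1.5   # segregated area: allow normal travel speed
    return 0.5       # unrecognized context: default to a conservative limit

# Example: a robot detecting one person nearby in an open area slows down.
print(speed_limit_mps("open_floor", humans_within_3m=1))  # 0.3
```

The important property is the final branch: when the robot cannot classify its context, it falls back to the most conservative behavior rather than the most permissive one.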

Mitigating risk from third-party integration

A common concern for supply chain managers is the impact of integrated systems, such as fleet management software or cloud-based analytics, on assessing and mitigating safeguarding risks. Capable, deeply integrated systems mean that third-party tools can affect the overall safety case. Relying on the robot itself to manage safety becomes less viable as systems grow more flexible and more tightly interconnected.

The solution lies in mirroring how humans are trained. The focus shifts to defining and enforcing rulesets integrated with fleet and workflow management: the same kinds of guidelines we would enforce on a human in the same environment, such as the maximum speed at which to move, proper use of different extensions and attachments, and maximum lifting capacity. The goal is to teach robots these rules based on the tasks they are given and ensure they never violate them. However, these rules depend on context, hence the rising trend of context-aware safety, in which robots apply different actions in different cases.
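A minimal sketch of such a ruleset, assuming a hypothetical schema in which each task assigned by the fleet or workflow manager carries its own speed, payload, and attachment limits. The task names, limits, and field names are illustrative only.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical rule schema; real fleet managers expose their own formats.
@dataclass(frozen=True)
class TaskRules:
    max_speed_mps: float
    max_payload_kg: float
    allowed_attachments: frozenset

# Illustrative rules keyed by the task assigned through workflow management.
RULESET = {
    "pallet_transport": TaskRules(1.5, 500.0, frozenset({"fork"})),
    "case_picking":     TaskRules(0.8, 25.0,  frozenset({"gripper", "suction"})),
}

def check_command(task: str, speed_mps: float, payload_kg: float,
                  attachment: str) -> List[str]:
    """Return any rule violations for a commanded action under the assigned task."""
    rules = RULESET[task]
    violations = []
    if speed_mps > rules.max_speed_mps:
        violations.append("speed above task limit")
    if payload_kg > rules.max_payload_kg:
        violations.append("payload above task limit")
    if attachment not in rules.allowed_attachments:
        violations.append(f"attachment '{attachment}' not approved for this task")
    return violations

# Example: a picking robot commanded to move too fast with a fork attachment.
print(check_command("case_picking", speed_mps=1.2, payload_kg=10.0, attachment="fork"))
```

In practice the enforcement layer sits between the fleet manager and the robot, so a violating command is refused or degraded before it ever reaches the drive or the manipulator.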

Upcoming standards

While development of several standards to address these emerging trends is underway, with committees actively working on updates to industrial and service robot safety standards, robot manufacturers and developers shouldn’t wait for safety to be mandated. Safety should be considered an integral part of developing or improving a solution in order to unlock its full potential. Safety guidelines should be a component from the very beginning, not an add-on once the solution is ready. Companies that aren’t prepared to consider safety upfront risk falling behind their competitors in the supply chain space.