“Principles for the Development and Assurance of Autonomous Systems for Safe Use in Hazardous Environments” White Paper Published

Dr. Matt Luckcuck, Dr. Louise Dennis, and Prof. Michael Fisher have been collaborating with partners at the UK’s nuclear regulator, the Office for Nuclear Regulation (ONR). Together they have produced a white paper entitled “Principles for the Development and Assurance of Autonomous Systems for Safe Use in Hazardous Environments”, which is available as a PDF via this link and can be cited using its DOI 10.5281/zenodo.5012322.

The white paper provides guidance on designing and assuring autonomous systems used in hazardous environments, such as in the nuclear industry. This guidance is to be considered alongside existing standards and regulations aimed at, for example, robotics, electronic systems, control systems, and safety-critical software.

Autonomous systems use software to make decisions without the need for human control. They are often embedded in a robotic system, to enable interaction with the real world. This means that autonomous robotic systems are often safety-critical, where failures can cause human harm or death. For autonomous robotic systems used in hazardous environments, like the nuclear industry, the risk of harm is likely to fall upon human workers (the system’s users or operators). Autonomous systems also raise issues of security and data privacy, both because of the sensitive data that the system might process and because a security failure can cause a safety failure.

The white paper describes in-depth principles for safety-critical, human-controlled, and autonomous robotic systems. These principles are summarised in seven high-level recommendations:

  1. Remember both the hardware and software components during system assurance,
  2. Hazard assessments should include risks that have an ethical impact, as well as those that have safety and security impacts,
  3. Adopt both a corroborative and a mixed-criticality approach to Verification & Validation,
  4. Autonomous components should be as transparent and verifiable as possible,
  5. Tasks and missions that the system will perform should be clearly defined,
  6. Dynamic Verification & Validation should be used to complement static Verification & Validation,
  7. System requirements should be clearly traceable through the design, the development processes, and into the deployed system.

The guidance laid out in the white paper has been carefully discussed with partners at the ONR, so it already provides useful principles for developing safer autonomous robotic systems. The white paper is also intended to spark discussion between academia, industry, and the ONR, with the aim of developing concrete techniques to realise these principles.
