Explainable AI and Autonomy for the Maritime Domain

Academic Institution: Heriot-Watt University

Academic Lead: Professor Helen Hastie

Industry Partner: SeeByte Ltd

PhD Student: Konstantinos Gavriilidis

Summary

Autonomous robots are frequently deployed in the maritime domain to automate tasks such as oceanographic surveying and pipeline inspection. Operators supervise their activity from a distance and intervene to secure a vehicle if it finds itself in a dangerous situation. Concurrently, operators receive mission updates in real time and occasionally send requests to the robots to change their goals or task prioritisation. Establishing efficient communication between the vessels and the operators is of paramount importance for Human-In-The-Loop applications. This requires transforming vehicle data of inconsistent formats into natural language explanations that describe events causally (e.g., "The vehicle changed its trajectory because an obstacle blocked its way"). To solve this problem and improve coordination during autonomous missions, this project focuses on developing an explanation framework that is reusable across different robotic platforms and can explain autonomous decision-making.

Adoption and deployment of robotic and autonomous systems in industry are currently hindered by a lack of the transparency required for safety and accountability. To reduce their mental load, operators need explanations that improve their situation awareness and let them coordinate effectively with robots. As a result, methods for transforming heterogeneous data into natural language are required. One way to achieve this is to focus on data common across different robotic platforms (e.g., sensor data) and combine it with domain knowledge; recorded events can then be verbalised with a Natural Language Generation module. A second option is to design a domain-agnostic explanation framework built on surrogate models that can be easily updated. From these we extract generic knowledge representations that describe any outcome across different application domains. Future work will involve training a Large Language Model (LLM) on sets of generic representations and their corresponding explanations, so that it learns to explain mission outcomes.
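
As an illustration of the first approach, the following sketch (in Python, with hypothetical event fields and template wordings; it is not the project's actual module) shows how a recorded event, keyed against simple domain knowledge, can be rendered as a causal explanation:

    # Minimal template-based verbalisation sketch. The event types and
    # template wordings below are illustrative assumptions.
    EVENT_TEMPLATES = {
        "replan": "The vehicle {action} because {cause}.",
        "fault": "The vehicle reported {action} because {cause}.",
    }

    def verbalise(event: dict) -> str:
        """Map a recorded event onto a causal natural-language template."""
        template = EVENT_TEMPLATES[event["type"]]
        return template.format(action=event["action"], cause=event["cause"])

    print(verbalise({
        "type": "replan",
        "action": "changed its trajectory",
        "cause": "an obstacle blocked its way",
    }))
    # -> The vehicle changed its trajectory because an obstacle blocked its way.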

Key Results/Outcomes:

  • Successfully combined sensor readings with domain knowledge to describe vehicle faults and how they affect the vehicle's initial plan.

  • This method uses data from vehicle hardware to form an overall picture of vehicle health. An ontology is then used to identify the specific error and determine which alterations should be made to the plan. A surface realiser built with the SimpleNLG package retrieves information about vehicle actions and generates syntactically correct explanations in real time.

  • Designed a surrogate model framework to capture the causality of behaviour activations for autonomous vehicles. The decision tree model approximated the underlying policy of the agent with 90% accuracy in simulation; performance in the real trial was higher (99%), although only a subset of the simulated objectives was used in the real setup. A sketch of this surrogate-fitting step follows this list.

  • Moving forward, we will use this method to generate a dataset and train an LLM.
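
The surrogate-fitting step mentioned above can be sketched as follows. This is a minimal illustration rather than the project's actual pipeline: the synthetic states, behaviour labels, and tree depth are assumptions.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Stand-in for logged mission data: each row is a vehicle state
    # (here, obstacle range and cross-track error) and the label is the
    # behaviour the autonomy engine activated in that state.
    states = rng.uniform(0.0, 50.0, size=(1000, 2))
    behaviours = np.where(states[:, 0] < 10.0,
                          "avoid_obstacle", "follow_waypoints")

    X_train, X_test, y_train, y_test = train_test_split(
        states, behaviours, test_size=0.2, random_state=0)

    # A shallow tree keeps the surrogate human-readable.
    surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
    surrogate.fit(X_train, y_train)

    # Fidelity: how often the surrogate matches the agent's actual choice.
    print(f"Fidelity on held-out states: {surrogate.score(X_test, y_test):.1%}")

Fidelity on held-out states is the figure that the 90% (simulation) and 99% (real trial) results above report.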

Publications

  • Gavriilidis, K., Carreno, Y., Munafo, A., Pang, W., Petrick, R.P. and Hastie, H., 2021, August. Plan Verbalisation for Robots Acting in Dynamic Environments. In ICAPS 2021 Workshop on Knowledge Engineering for Planning and Scheduling.

    Presented an approach that combines sensor readings with domain knowledge to derive content about errors/warnings and replanning. At the end of this pipeline, a template-based surface realiser generates explanations that describe performed actions and replanning.

  • Gavriilidis, K., Munafo, A., Hastie, H., Cesar, C., DeFilippo, M. and Benjamin, M.R., 2022. Towards Explaining Autonomy with Verbalised Decision Tree States. arXiv preprint arXiv:2209.13985.

    This extended abstract was presented at the 2022 IEEE OES AUV Symposium. It describes our approach for explaining the behaviour activations of an Unmanned Surface Vehicle (USV) with a surrogate model, from which we also derive the causality behind behaviours; a sketch of how such a causal clause can be read off the surrogate follows this list. Our framework was tested during a real trial on the Charles River in Boston.

  • Gavriilidis, K., Munafo, A., Pang, W., Hastie, H., 2023. A Surrogate Model Framework for Explainable Autonomous Behaviour.

    This is the complete work from my internship, which uses surrogate models to approximate an agent's policy. The paper is currently under review.
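
To indicate how causality can be read off such a surrogate, the sketch below walks the decision path of a fitted scikit-learn tree and renders the branch conditions as a causal clause. The toy data, feature names, and the explain_activation helper are hypothetical, not the project's actual code:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Toy stand-in for a fitted surrogate: two features, two behaviours.
    X = np.array([[5.0, 0.1], [40.0, 0.2], [6.0, 0.8], [50.0, 0.9]])
    y = np.array(["avoid_obstacle", "follow_waypoints",
                  "avoid_obstacle", "follow_waypoints"])
    feature_names = ["obstacle_range_m", "cross_track_error_m"]
    surrogate = DecisionTreeClassifier(max_depth=2).fit(X, y)

    def explain_activation(tree, names, state):
        """Render the decision path for one state as a causal sentence."""
        state = np.asarray(state, dtype=float)
        sample = state.reshape(1, -1)
        path = tree.decision_path(sample).indices
        leaf = tree.apply(sample)[0]
        conditions = []
        for node in path:
            if node == leaf:
                continue  # the leaf holds the prediction, not a test
            feat = tree.tree_.feature[node]
            threshold = tree.tree_.threshold[node]
            op = "<=" if state[feat] <= threshold else ">"
            conditions.append(f"{names[feat]} {op} {threshold:.1f}")
        behaviour = tree.predict(sample)[0]
        return f"'{behaviour}' was activated because " + " and ".join(conditions)

    print(explain_activation(surrogate, feature_names, [4.0, 0.3]))
    # e.g. 'avoid_obstacle' was activated because obstacle_range_m <= 23.0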

Collaboration:

So far, I have collaborated with the Ocean Systems Lab at Heriot-Watt University to gain access to maritime robots and perform trials with real vehicles to evaluate my implementations. The outcome of this first collaboration was the workshop paper published in 2021. Additionally, during my six-month internship with SeeByte Ltd, I travelled to Boston and performed a trial with the help of the Autonomous Underwater Vehicle Lab at MIT. The output of this collaboration was an extended abstract; we have also submitted a full paper, which is under review.

Contact Information:

Konstantinos Gavriilidis

Postgraduate Research Student, Heriot-Watt University

kg47@hw.ac.uk

Professor Helen Hastie

Director of the EPSRC Centre for Doctoral Training in Robotics and Autonomous Systems and Academic Co-Lead for The National Robotarium

H.Hastie@hw.ac.uk
