Real-Time Out-of-Distribution Failure Prevention via Multi-Modal Reasoning

Stanford University, NVIDIA Research

FORTRESS is a framework that generates and reasons about semantically safe fallback strategies in real time to prevent out-of-distribution (OOD) failures in open-world environments. Our algorithm identifies goals that implement semantic descriptions of fallback strategies, anticipates failure modes, and constructs semantic safety cost functions that capture dangerous regions of the state space. When a safety response is needed, FORTRESS rapidly produces semantically safe fallback plans.
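
To make these three components concrete, here is a minimal sketch of how they might fit together in code. All class, function, and method names (e.g. `reasoner.identify_goals`) are illustrative assumptions, not the released FORTRESS API.

```python
from dataclasses import dataclass, field


@dataclass
class FallbackStrategy:
    """A semantic fallback description, e.g. 'land on building rooftop'."""
    description: str
    goal_points: list = field(default_factory=list)    # candidate goal locations
    failure_modes: list = field(default_factory=list)  # anticipated semantic hazards


def prepare_fallbacks(reasoner, scene_observation, strategy_descriptions):
    """Low-frequency step: ground each semantic strategy in the current scene.

    `reasoner` is a stand-in for a foundation-model interface; this is a
    hypothetical signature used only for illustration.
    """
    prepared = []
    for description in strategy_descriptions:
        strategy = FallbackStrategy(description)
        # Identify concrete goal points that implement the semantic strategy.
        strategy.goal_points = reasoner.identify_goals(scene_observation, description)
        # Anticipate failure modes (e.g. "power lines", "crowded sidewalk")
        # that later parameterize semantic safety cost functions.
        strategy.failure_modes = reasoner.anticipate_failure_modes(scene_observation)
        prepared.append(strategy)
    return prepared
```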

Abstract

Foundation models can provide robust high-level reasoning about appropriate safety interventions in hazardous scenarios beyond a robot's training data, i.e., out-of-distribution (OOD) failures. However, due to the high inference latency of large vision and language models, current methods rely on manually defined intervention policies to enact fallbacks, and thus lack the ability to plan generalizable, semantically safe motions. To overcome these challenges, we present FORTRESS, a framework that generates and reasons about semantically safe fallback strategies in real time to prevent OOD failures.

During nominal operation, FORTRESS uses multi-modal reasoners at a low frequency to identify fallback goals and anticipate failure modes. When a runtime monitor triggers a fallback response, FORTRESS rapidly synthesizes plans to the fallback goals while inferring and avoiding semantically unsafe regions in real time. By bridging open-world, multi-modal reasoning with dynamics-aware planning, we eliminate the need for hard-coded fallbacks and human safety interventions. FORTRESS outperforms on-the-fly prompting of slow reasoning models in safety classification accuracy on synthetic benchmarks and real-world ANYmal robot data, and further improves system safety and planning success in simulation and on quadrotor hardware for urban navigation.
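
The key runtime pattern is decoupling slow foundation-model reasoning from fast fallback planning. The sketch below illustrates that split under assumed interfaces (`reasoner`, `planner`, and the method names are placeholders, not the actual implementation): expensive queries run only in the low-frequency nominal loop, and the fallback step touches cached results exclusively.

```python
import time


class SlowFastFallbackRunner:
    """Illustrative slow/fast split: foundation-model reasoning at low frequency,
    dynamics-aware planning on demand. Not the actual FORTRESS implementation."""

    def __init__(self, reasoner, planner, reasoning_period_s=5.0):
        self.reasoner = reasoner              # multi-modal reasoner (slow, high latency)
        self.planner = planner                # dynamics-aware planner (fast)
        self.reasoning_period_s = reasoning_period_s
        self.cached_goals = []                # fallback goals from the slow loop
        self.cached_unsafe_regions = []       # semantically unsafe regions from the slow loop
        self._last_reasoning_time = -float("inf")

    def nominal_step(self, observation):
        # Low-frequency loop: refresh fallback goals and anticipated failure modes.
        now = time.monotonic()
        if now - self._last_reasoning_time > self.reasoning_period_s:
            self.cached_goals = self.reasoner.identify_goals(observation)
            self.cached_unsafe_regions = self.reasoner.anticipate_failure_modes(observation)
            self._last_reasoning_time = now

    def fallback_step(self, state):
        # Triggered by a runtime monitor: no foundation-model calls here, only
        # fast planning toward cached goals while avoiding cached unsafe regions.
        return self.planner.plan(state, self.cached_goals,
                                 avoid=self.cached_unsafe_regions)
```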

Quadrotor Hardware Demo

Semantic Safety Reasoning

We anticipate semantic failure modes using foundation model reasoners and calibrate semantic safety cost functions using text embedding models. We detect semantic OOD failures for an ANYmal robot in a room under construction.
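
One way such a calibrated semantic safety check could look in code is a cosine-similarity test between embeddings of observed object labels and anticipated failure-mode descriptions. This is a toy sketch: `embed` stands in for any text embedding model returning unit-norm vectors, and the calibration of `threshold` is glossed over.

```python
import numpy as np


def semantic_safety_cost(observed_labels, failure_mode_texts, embed, threshold=0.7):
    """Score observed labels against anticipated failure modes.

    Returns, for each observed label, its highest cosine similarity to any
    failure-mode description and whether it exceeds the (calibrated) threshold.
    """
    fail_vecs = np.stack([embed(t) for t in failure_mode_texts])  # (F, d), unit-norm
    obs_vecs = np.stack([embed(t) for t in observed_labels])      # (N, d), unit-norm
    sims = obs_vecs @ fail_vecs.T                                 # cosine similarities
    max_sim = sims.max(axis=1)
    return {label: (float(s), bool(s > threshold))
            for label, s in zip(observed_labels, max_sim)}


# Example call (hypothetical labels): flag "exposed rebar" as semantically unsafe
# while leaving "office chair" unflagged.
# semantic_safety_cost(["exposed rebar", "office chair"],
#                      ["construction hazards", "sharp metal debris"], embed)
```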

Fallback Strategy Goals

We translate semantic fallback strategies into relevant goal points using a VLM. We show goals identified for the strategy "land on building rooftop" for a drone navigating in an urban setting in the CARLA simulator.
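
A rough illustration of this translation step is querying a VLM with the current image and the strategy text and parsing candidate goal locations from its response. The prompt, output format, and `vlm.query` interface below are assumptions for the sake of the sketch, not the exact FORTRESS prompt or API.

```python
import json


def goals_from_strategy(vlm, image, strategy="land on building rooftop"):
    """Map a semantic fallback strategy to candidate goal points via a VLM."""
    prompt = (
        f"Fallback strategy: '{strategy}'. "
        "List up to 5 image locations where this strategy could be executed, "
        'as JSON: [{"label": str, "pixel": [u, v]}].'
    )
    response = vlm.query(image=image, prompt=prompt)
    candidates = json.loads(response)
    # Downstream, each pixel goal would be lifted to a 3D waypoint using depth
    # or scene geometry before being handed to the dynamics-aware planner.
    return [(c["label"], tuple(c["pixel"])) for c in candidates]
```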