Off-Policy Evaluation with Online Adaptation for Robot Exploration in Challenging Environments

1Carnegie Mellon University
2Department of Aerospace Engineering, Pennsylvania State University
3Department of Computer Science and Engineering, State University of New York at Buffalo

Abstract

Autonomous exploration has many important applications. However, classic information-gain-based or frontier-based exploration relies only on the robot's current state to determine the immediate exploration goal, which lacks the capability of predicting the value of future states and thus leads to inefficient exploration decisions. This paper presents a method to learn how “good” states are, measured by the state value function, to provide guidance for robot exploration in real-world challenging environments. We formulate our work as an off-policy evaluation (OPE) problem for robot exploration (OPERE). It consists of offline Monte-Carlo training on real-world data and performs Temporal Difference (TD) online adaptation to optimize the trained value estimator. We also design an intrinsic reward function based on sensor information coverage to enable the robot to gain more information under sparse extrinsic rewards. Results demonstrate that our method enables the robot to predict the value of future states and thus better guide exploration. The proposed algorithm achieves better prediction performance than other state-of-the-art OPE methods. To the best of our knowledge, this work is the first to demonstrate value function prediction on real-world datasets for robot exploration in challenging subterranean and urban environments.
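
The intrinsic reward is based on sensor information coverage. As a rough illustration only (not the paper's exact formulation), one can reward the number of map cells that change from unknown to observed after each sensor update; the function name, the unknown-cell encoding, and the normalization below are assumptions made for this sketch.

import numpy as np

def coverage_intrinsic_reward(prev_map, curr_map, unknown=-1):
    """Illustrative coverage-based reward: fraction of map cells that changed
    from unknown to observed (free or occupied) after the latest sensor update."""
    newly_observed = np.logical_and(prev_map == unknown, curr_map != unknown)
    return float(newly_observed.sum()) / prev_map.size

# Example: a 3x3 grid in which two cells become observed after the update.
prev = np.array([[-1, -1, 0], [-1, 1, 0], [0, 0, 0]])
curr = np.array([[ 0, -1, 0], [ 1, 1, 0], [0, 0, 0]])
print(coverage_intrinsic_reward(prev, curr))  # 2/9, about 0.22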

Intro Video

Method Overview


Our method consists of offline learning and online adaptation. First, we collect datasets consisting of camera images and projected map images. We then feed the data to the value function network and perform offline MC learning: the camera image and the map projection image are passed to two encoders in parallel, and their features are aggregated to produce the state value. During online deployment, we perform one additional TD adaptation step to obtain the refined value function.
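
For concreteness, a minimal PyTorch sketch of this pipeline is given below. The layer sizes, discount factor, and exact aggregation are illustrative assumptions rather than the configuration used in the paper; the sketch only shows the structure: two parallel encoders aggregated into a value head, an offline Monte-Carlo regression loss, and a single online TD adaptation step.

import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Two parallel encoders (camera image, projected map image) whose
    features are concatenated and mapped to a scalar state value V(s)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim), nn.ReLU(),
            )
        self.cam_enc = encoder()
        self.map_enc = encoder()
        self.head = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, cam_img, map_img):
        z = torch.cat([self.cam_enc(cam_img), self.map_enc(map_img)], dim=-1)
        return self.head(z).squeeze(-1)

def mc_loss(net, cam, proj, mc_returns):
    """Offline Monte-Carlo training: regress V(s_t) toward the observed return G_t."""
    return ((net(cam, proj) - mc_returns) ** 2).mean()

def td_adaptation_step(net, optimizer, cam, proj, reward, next_cam, next_proj, gamma=0.99):
    """Online adaptation: one gradient step on the TD error at deployment time."""
    with torch.no_grad():
        target = reward + gamma * net(next_cam, next_proj)
    loss = ((net(cam, proj) - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()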

Data Collection Environments


Snapshots of the data collection environments. We show the 3D reconstructed occupancy grid maps as well as images captured during exploration (at the corner of each subfigure). From left to right and top to bottom: Auditorium corridor, Large open room, Limestone mine, and Natural cave.

Experimental Results

Regret Analysis in Corridor Environment

With the learned value function, the robot can make better exploration decisions.
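
As an illustrative sketch only (the actual planner integration is described in the paper; the decision rule below is an assumption), the learned value function can be used to rank candidate exploration goals by the predicted value of the states they lead to:

import torch

def select_goal(candidate_goals, candidate_states, value_net):
    """Rank candidate exploration goals by the predicted value V(s) of the
    state associated with each goal, and return the highest-valued one.
    Each entry of candidate_states is a (camera_image, map_projection) pair."""
    with torch.no_grad():
        values = [value_net(cam, proj).item() for cam, proj in candidate_states]
    best = max(range(len(candidate_goals)), key=values.__getitem__)
    return candidate_goals[best], values[best]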

 

In Cave Environment

Quantitative Results


We compare our approach with other state-of-the-art (SOTA) OPE methods.

Real Robot Experiments

Robot Explores with Learned Value Function

Third-person views

Bag file replays

Exploration Behaviors Compared with Frontier-based Method

Ours with Learned Value

Frontier-based Method

With the learned value function, our method explores high-value regions that the frontier-based method fails to reach.

BibTeX

@article{2022opere,
  author    = {Yafei Hu and Junyi Geng and Chen Wang and John Keller and Sebastian Scherer},
  title     = {Off-Policy Evaluation with Online Adaptation for Robot Exploration in Challenging Environments},
  journal   = {IEEE Robotics and Automation Letters (RA-L) and IROS},
  year      = {2023},
}