Offline Policy Comparison with Confidence: Benchmarks and Baselines#

Authors

Anurag Koul*, Mariano Phielipp**, Alan Fern*

*Oregon State University, **Intel Labs

TL;DR

A benchmark of "policy comparison queries" (PCQs) for evaluating uncertainty estimation in offline reinforcement learning.

Info

This work was accepted at the Offline RL Workshop, NeurIPS 2022, and is under review at a journal.

Abstract#

Decision makers often wish to use offline historical data to compare sequential-action policies at various world states. Importantly, computational tools should produce confidence values for such offline policy comparison (OPC) to account for statistical variance and limited data coverage. Nevertheless, there is little work that directly evaluates the quality of confidence values for OPC. In this work, we address this issue by creating benchmarks for OPC with Confidence (OPCC), derived by adding sets of policy comparison queries to datasets from offline reinforcement learning. In addition, we present an empirical evaluation of the risk versus coverage trade-off for a class of model-based baselines. In particular, the baselines learn ensembles of dynamics models, which are used in various ways to produce simulations for answering queries with confidence values. While our results suggest advantages for certain baseline variations, there appears to be significant room for improvement in future work.
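Below is a minimal, illustrative sketch (not the paper's exact implementation) of the model-based baseline idea described above: answer a policy comparison query by rolling out both policies inside each member of a learned dynamics-model ensemble, use ensemble agreement as the confidence value, and trade coverage for risk by abstaining below a confidence threshold. All names here (`rollout_return`, `answer_query`, `selective_risk_coverage`, the query tuple layout) are hypothetical and assume the dynamics models, reward function, and policies are provided as plain Python callables.

```python
import numpy as np

def rollout_return(dynamics, reward_fn, policy, state, horizon, gamma=1.0):
    """Simulate one policy rollout inside a single learned dynamics model."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        action = policy(state)
        total += discount * reward_fn(state, action)  # learned or known reward
        state = dynamics(state, action)               # learned transition model
        discount *= gamma
    return total

def answer_query(ensemble, reward_fn, query):
    """Return (predicted answer, confidence) for one policy comparison query.

    `query` = (state_a, policy_a, state_b, policy_b, horizon). The answer is
    True if policy_a from state_a is predicted to achieve lower return than
    policy_b from state_b; confidence is the fraction of ensemble members
    agreeing with the majority vote.
    """
    state_a, policy_a, state_b, policy_b, horizon = query
    votes = []
    for dyn in ensemble:  # one vote per dynamics model in the ensemble
        ret_a = rollout_return(dyn, reward_fn, policy_a, state_a, horizon)
        ret_b = rollout_return(dyn, reward_fn, policy_b, state_b, horizon)
        votes.append(ret_a < ret_b)
    agree = np.mean(votes)
    return agree >= 0.5, max(agree, 1.0 - agree)

def selective_risk_coverage(predictions, labels, threshold):
    """Only answer queries whose confidence exceeds `threshold`; higher
    thresholds lower coverage but (ideally) lower risk."""
    answered = [(a, y) for (a, c), y in zip(predictions, labels) if c >= threshold]
    if not answered:
        return 0.0, 0.0  # no queries answered at this threshold
    risk = float(np.mean([a != y for a, y in answered]))
    coverage = len(answered) / len(labels)
    return risk, coverage
```

Sweeping the confidence threshold and plotting the resulting (risk, coverage) pairs gives the risk-versus-coverage curves used to compare baseline variations in the paper.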

Slides#

Contents#

BibTeX#

@article{koul2022offline,
  title={Offline Policy Comparison with Confidence: Benchmarks and Baselines},
  author={Koul, Anurag and Phielipp, Mariano and Fern, Alan},
  journal={arXiv preprint arXiv:2205.10739},
  year={2022}
}

Contact#

If you have any questions or suggestions, please open an issue on this GitHub repository.
