RL Unplugged: Benchmarks for Offline Reinforcement Learning

Most of the successes of RL rely on repeated online interactions of an agent with an environment, which we call online RL. Despite its success in simulation, the uptake of RL for real-world applications has been limited. Power plants, robots, healthcare systems, and self-driving cars are expensive to run, and inappropriate controls can have dangerous consequences. They are not easily compatible with the basic idea of exploration in RL or with the data requirements of online RL algorithms. Nevertheless, most real-world systems produce large amounts of data as part of their normal operation, and the goal of offline RL is to learn a policy directly from that logged data without interacting with the environment.

Offline RL methods (e.g., Agarwal et al., 2020; Fujimoto et al., 2018) have shown promising results on well-known benchmark domains. However, non-standardised evaluation protocols, differing datasets, and a lack of baselines make algorithmic comparisons difficult. Moreover, some important properties of potential real-world application domains, such as partial observability, high-dimensional sensory streams (i.e., images), diverse action spaces, exploration problems, non-stationarity, and stochasticity, are underrepresented in the current offline RL literature.


We introduce a novel collection of task domains and associated datasets, together with a clear evaluation protocol. We include widely used domains such as the DM Control Suite (Tassa et al., 2018) and Atari 2600 games (Bellemare et al., 2013), but also domains that are still challenging for strong online RL algorithms, such as the real-world RL (RWRL) suite tasks (Dulac-Arnold et al., 2020) and the DM Locomotion tasks (Heess et al., 2017; Merel et al., 2019a,b, 2020). By standardizing the environments, datasets, and evaluation protocols, we hope to make research in offline RL more reproducible and accessible. We call our suite of benchmarks "RL Unplugged", because offline RL methods can use it without any actors interacting with the environment. Our paper offers four main contributions: (i) a unified API for datasets, (ii) a diverse set of environments, (iii) clear evaluation protocols for offline RL research, and (iv) reference performance baselines.
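The point of a unified dataset API is that every domain exposes its logged experience in the same transition format, so one learner implementation can train on any of the datasets. Here is a minimal illustrative sketch of that idea in plain Python; the names `Transition`, `load_dataset`, and `sample_batch` are hypothetical stand-ins, not the released API (the actual release serves data through TensorFlow's `tf.data` pipelines):

```python
import random
from typing import Iterable, Iterator, List, NamedTuple


class Transition(NamedTuple):
    """One logged step (s, a, r, s', done) - the shared unit across domains."""
    observation: List[float]
    action: int
    reward: float
    next_observation: List[float]
    done: bool


def load_dataset(transitions: List[Transition]) -> Iterator[Transition]:
    """Yield logged transitions; stands in for streaming records from disk."""
    yield from transitions


def sample_batch(dataset: Iterable[Transition],
                 batch_size: int,
                 seed: int = 0) -> List[Transition]:
    """Uniformly sample a minibatch with replacement, as an offline
    learner would - no environment interaction is ever needed."""
    data = list(dataset)
    rng = random.Random(seed)
    return [data[rng.randrange(len(data))] for _ in range(batch_size)]


# Usage: the same two calls would work whether the log came from
# Atari, DM Control, or any other domain in the suite.
log = [
    Transition([0.0], 0, 1.0, [1.0], False),
    Transition([1.0], 1, 0.5, [2.0], True),
]
batch = sample_batch(load_dataset(log), batch_size=4)
```

Because the learner only ever touches `Transition` fields, swapping datasets means swapping the loader, not the algorithm.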

