Reinforcement Learning (RL) is a powerful paradigm for solving many problems of interest in AI, such as controlling autonomous vehicles, digital assistants, and resource allocation, to name a few. We've seen over the last five years that, when provided with an extrinsic reward function, RL agents can master very complex tasks like playing Go, Starcraft, and dexterous robotic manipulation. While large-scale RL agents can achieve stunning results, even the best RL agents today are narrow. Most RL algorithms today can only solve the single task they were trained on and do not exhibit cross-task or cross-domain generalization capabilities.
A side-effect of the narrowness of today's RL systems is that today's RL agents are also very data inefficient. If we were to train AlphaGo-like agents on many tasks, each agent would likely require billions of training steps because today's RL agents lack the ability to reuse prior knowledge to solve new tasks more efficiently. RL as we know it is supervised – agents overfit to a specific extrinsic reward, which limits their ability to generalize.
To date, the most promising path toward generalist AI systems in language and vision has been through unsupervised pre-training. Masked causal and bi-directional transformers have emerged as scalable methods for pre-training language models that have shown unprecedented generalization capabilities. Siamese architectures and, more recently, masked auto-encoders have also become state-of-the-art methods for achieving fast downstream task adaptation in vision.
If we believe that pre-training is a powerful approach towards developing generalist AI agents, then it is natural to ask whether there exist self-supervised objectives that would allow us to pre-train RL agents. Unlike vision and language models, which act on static data, RL algorithms actively influence their own data distribution. Like in vision and language, representation learning is an important aspect of RL as well, but the unsupervised problem that is unique to RL is how agents can themselves generate interesting and diverse data through self-supervised objectives. This is the unsupervised RL problem – how do we learn useful behaviors without supervision and then adapt them to solve downstream tasks quickly?
Unsupervised RL is very similar to supervised RL. Both assume that the underlying environment is described by a Markov Decision Process (MDP) or a Partially Observed MDP, and both aim to maximize rewards. The main difference is that supervised RL assumes supervision is provided by the environment through an extrinsic reward, while unsupervised RL defines an intrinsic reward through a self-supervised task. Like supervision in NLP and vision, supervised rewards are either engineered or provided as labels by human operators, which is hard to scale and limits the generalization of RL algorithms to specific tasks.
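To make the distinction concrete, here is a minimal sketch of a single environment step under both settings. All names (`env`, `agent`, `intrinsic_reward_fn`) are illustrative placeholders rather than the URLB API; the only point is where the reward signal comes from.

```python
# Minimal sketch: supervised and unsupervised RL differ only in the
# reward used for learning. Names below are illustrative, not URLB's API.

def rollout_step(env, agent, obs, intrinsic_reward_fn=None):
    """Take one step; swap the extrinsic reward for an intrinsic one
    when a self-supervised objective is provided."""
    action = agent.act(obs)
    next_obs, extrinsic_reward, done, info = env.step(action)
    if intrinsic_reward_fn is None:
        reward = extrinsic_reward  # supervised RL: reward from the environment
    else:
        # unsupervised RL: reward from a self-supervised task
        reward = intrinsic_reward_fn(obs, action, next_obs)
    return next_obs, reward, done
```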
At the Robot Learning Lab (RLL), we've been taking steps toward making unsupervised RL a plausible approach toward developing RL agents capable of generalization. To this end, we developed and released a benchmark for unsupervised RL with open-sourced PyTorch code for eight leading or popular baselines.
The Unsupervised Reinforcement Learning Benchmark (URLB)
While a variety of unsupervised RL algorithms have been proposed over the last few years, it has been impossible to compare them fairly due to differences in evaluation, environments, and optimization. For this reason, we built URLB, which provides standardized evaluation procedures, domains, downstream tasks, and optimization for unsupervised RL algorithms.
URLB splits training into two phases – a long unsupervised pre-training phase followed by a short supervised fine-tuning phase. The initial release includes three domains with four tasks each, for a total of twelve downstream tasks for evaluation.
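In pseudocode, the two-phase protocol looks roughly like the sketch below. The helper names (`replay`, `agent`, `intrinsic_reward_fn`) are illustrative, and the fine-tuning budget shown is an assumed placeholder; the 2M pre-training steps match the evaluation setting reported later in this post.

```python
# Sketch of URLB's two-phase protocol. Helper names are illustrative.

PRETRAIN_STEPS = 2_000_000  # long, reward-free pre-training
FINETUNE_STEPS = 100_000    # short, supervised fine-tuning (assumed budget)

# Phase 1: pre-train with intrinsic rewards only; extrinsic rewards are hidden.
for step in range(PRETRAIN_STEPS):
    batch = replay.sample()
    batch.reward = intrinsic_reward_fn(batch)  # self-supervised signal
    agent.update(batch)

# Phase 2: expose the downstream task and fine-tune on its extrinsic reward.
for step in range(FINETUNE_STEPS):
    batch = replay.sample()
    agent.update(batch)  # batch.reward is now the task reward
```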
Most unsupervised RL algorithms known to date can be classified into three categories – knowledge-based, data-based, and competence-based. Knowledge-based methods maximize the prediction error or uncertainty of a predictive model (e.g. Curiosity, Disagreement, RND), data-based methods maximize the diversity of observed data (e.g. APT, ProtoRL), and competence-based methods maximize the mutual information between states and some latent vector often referred to as the "skill" or "task" vector (e.g. DIAYN, SMM, APS).
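As one concrete example, below is a minimal sketch of a data-based intrinsic reward in the spirit of APT, which estimates state entropy from particle-based k-nearest-neighbor distances. Using raw observations as the representation space and the choice of `k` are simplifications on our part, not the released implementation.

```python
import torch

def knn_entropy_reward(obs_batch: torch.Tensor, k: int = 12) -> torch.Tensor:
    """APT-style particle entropy estimate: reward each state by the
    distance to its k-th nearest neighbor in the batch, so states far
    from everything seen so far score high."""
    dists = torch.cdist(obs_batch, obs_batch)  # pairwise L2 distances, [N, N]
    # topk with largest=False gives the k+1 smallest distances per row;
    # the smallest is the self-distance (0), so take column k.
    knn_dist = dists.topk(k + 1, largest=False).values[:, -1]
    return torch.log(1.0 + knn_dist)  # log keeps the reward well-scaled

# Example: a batch of 256 random 24-dimensional states.
rewards = knn_entropy_reward(torch.randn(256, 24))
```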
Previously, these algorithms were implemented using different optimization algorithms (Rainbow DQN, DDPG, PPO, SAC, etc.). As a result, unsupervised RL algorithms have been hard to compare. In our implementations we standardize the optimization algorithm so that the only difference between the various baselines is the self-supervised objective.
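The design can be summarized with a small class sketch: a shared base agent owns the optimizer, and each baseline overrides only how the intrinsic reward is computed. The class and method names here are illustrative, not the released API.

```python
import torch

class BaseAgent:
    """Shared skeleton: the actor-critic update is identical for every
    baseline; only the intrinsic reward differs. Illustrative sketch."""

    def update(self, batch):
        batch.reward = self.intrinsic_reward(batch)
        self.update_actor_critic(batch)  # one optimizer shared by all baselines

    def update_actor_critic(self, batch):
        ...  # standard actor-critic updates (actor, critic, target networks)

    def intrinsic_reward(self, batch):
        raise NotImplementedError  # the only per-algorithm component

class DisagreementAgent(BaseAgent):
    """Knowledge-based baseline: reward is the variance (disagreement)
    across an ensemble of learned dynamics models."""

    def __init__(self, ensemble):
        self.ensemble = ensemble  # list of next-state predictors

    def intrinsic_reward(self, batch):
        preds = torch.stack([m(batch.obs, batch.action) for m in self.ensemble])
        return preds.var(dim=0).mean(dim=-1)  # high where models disagree
```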
We implemented and released code for eight leading algorithms supporting both state-based and pixel-based observations on domains based on the DeepMind Control Suite.
By standardizing domains, evaluation, and optimization across all implemented baselines in URLB, the result is a first direct and fair comparison between these three different types of algorithms.
Above, we show aggregate statistics of fine-tuning runs across all twelve downstream tasks with ten seeds each after pre-training on the target domain for 2M steps. We find that currently data-based methods (APT, ProtoRL) and RND are the leading approaches on URLB.
We've also identified a number of promising directions for future research based on benchmarking existing methods. For example, competence-based exploration as a whole underperforms data-based and knowledge-based exploration. Understanding why this is the case is an interesting line for further research. For additional insights and directions for future research in unsupervised RL, we refer the reader to the URLB paper.
Unsupervised RL is a promising path toward developing generalist RL agents. We've released a benchmark (URLB) for evaluating the performance of such agents. We've open-sourced code for URLB and hope this enables other researchers to quickly prototype and evaluate unsupervised RL algorithms.
Paper: URLB: Unsupervised Reinforcement Learning Benchmark
Michael Laskin*, Denis Yarats*, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel. NeurIPS, 2021. *These authors contributed equally.