Transferable Reinforcement Learning via Generalized Occupancy Models

University of Washington
Given an unlabeled offline dataset, we learn a generalized occupancy model that captures both "what outcomes can happen?" and "how can a particular outcome be achieved?" This model enables quick adaptation to new downstream tasks without re-planning or test-time policy optimization.

Abstract

Intelligent agents must be generalists, capable of quickly adapting to various tasks. In reinforcement learning (RL), model-based RL learns a dynamics model of the world, in principle enabling transfer to arbitrary reward functions through planning. However, autoregressive model rollouts suffer from compounding error, making model-based RL ineffective for long-horizon problems. Successor features offer an alternative by modeling a policy's long-term state occupancy, reducing policy evaluation under new tasks to linear reward regression. Yet, policy improvement with successor features can be challenging. This work proposes generalized occupancy models (GOMs), a novel class of models that learn a distribution of successor features from a stationary dataset, along with a policy that acts to realize different successor features. These models can quickly select the optimal action for arbitrary new tasks. By directly modeling long-term outcomes in the dataset, GOMs avoid compounding error while enabling rapid transfer across reward functions. We present a practical instantiation of GOMs using diffusion models and show their efficacy as a new class of transferable models, both theoretically and empirically across various simulated robotics problems.

Method

Learning Generalized Occupancy Models

Learning GOMs.

A generalized occupancy model consists of:

  1. A distribution of all possible outcomes in the dataset, represented by discounted sums of state features along trajectories (successor features).
  2. A readout policy that generates an action to achieve a particular outcome.
Modeling outcomes as successor features enables the quick evaluation of outcomes under arbitrary rewards, while modeling the distribution of all possible outcomes enables the extraction of optimal policies. Hence, GOMs retain the generality of model-based RL while avoiding compounding error.
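As a concrete illustration, below is a minimal PyTorch sketch of these two components. The class names, MLP architectures, and the Gaussian outcome distribution are our own simplifying assumptions; the paper instantiates the outcome distribution with a diffusion model.

# Minimal sketch of the two GOM components (illustrative assumptions: simple MLPs and a
# Gaussian outcome distribution; the paper uses a diffusion model over outcomes).
import torch
import torch.nn as nn

class OutcomeModel(nn.Module):
    """p(psi | s): distribution over successor features (discounted sums of state
    features) achievable from state s under the data distribution."""
    def __init__(self, state_dim, feat_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * feat_dim),  # mean and log-std of psi
        )

    def forward(self, s):
        mean, log_std = self.net(s).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())

class ReadoutPolicy(nn.Module):
    """pi(a | s, psi): the action that realizes a desired outcome psi from state s."""
    def __init__(self, state_dim, feat_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s, psi):
        return self.net(torch.cat([s, psi], dim=-1))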

Adaptation and planning via GOMs

Adaptation and planning via GOMs.

Assuming rewards depend linearly on the cumulants, transferring to a downstream task reduces to performing linear regression on the reward and solving a simple optimization problem for the best achievable outcome. The selected outcome is then passed to the readout policy to generate an action.
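A hedged sketch of this adaptation procedure, reusing the illustrative OutcomeModel and ReadoutPolicy above and assuming a feature map phi is available with the dataset (function and argument names are ours, not the released code):

# Adaptation sketch: regress reward weights w from a few labeled (state, reward) pairs,
# pick the achievable outcome with the highest predicted return, then read out an action.
import torch

def adapt_and_act(outcome_model, readout_policy, phi, states, rewards, s, num_samples=128):
    # 1. Linear reward regression: rewards ~ phi(states) @ w.
    feats = phi(states)                                            # (N, d) cumulants
    w = torch.linalg.lstsq(feats, rewards.unsqueeze(-1)).solution.squeeze(-1)

    # 2. Sample candidate outcomes psi ~ p(psi | s) and keep the one with highest return.
    psi = outcome_model(s.expand(num_samples, -1)).sample()        # (num_samples, d)
    best_psi = psi[(psi @ w).argmax()]

    # 3. Read out the action that realizes the chosen outcome.
    return readout_policy(s.unsqueeze(0), best_psi.unsqueeze(0)).squeeze(0)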

Experiments

Multitask Transfer

We evaluate GOMs' ability to transfer to challenging downstream tasks on the D4RL benchmark. On the hardest tasks, GOMs outperform model-based RL, successor feature, and goal-conditioned baselines with misspecified goal distributions.

D4RL experiments.

To demonstrate GOMs' broad transferability, we plot the normalized returns for reaching various goals in antmaze, where each tile corresponds to the task of navigating the robot to reach that particular tile. GOMs successfully transfer across a majority of tasks, whereas model-based RL struggles on longer-horizon tasks.

D4RL transfer experiments.

Transferring to Arbitrary Rewards

We show that GOMs can adapt to arbitrary rewards beyond goal-reaching in an antmaze preference environment, where the agent has to take a particular path to the goal according to a human preference (specified as a reward function). GOMs and model-based RL complete the task according to the human preference, whereas goal-conditioned RL baselines do not conform to the preferences.

Preference antmaze experiments.

We further demonstrate GOMs' ability to transfer to arbitrary rewards by training an agent to track various trajectories, denoted by the colored cells. All runs share the same outcome model and readout policy, differing only in the reward regression weights.

Trajectory Stitching

Since GOMs are trained with a distributional Bellman backup, they are able to perform "trajectory stitching," i.e., recovering optimal trajectories by combining suboptimal ones. We validate GOMs' stitching capability on the Roboverse benchmark, where each task consists of two subtasks but the dataset only contains trajectories for each individual subtask. GOMs complete these tasks by stitching sub-trajectories, whereas Monte Carlo-style baselines cannot. A sketch of the backup that enables this is given below the figure.

Roboverse experiments.
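For reference, here is a hedged sketch of the distributional backup that trains the outcome model on single transitions, reusing the illustrative OutcomeModel and phi from above (the paper fits a diffusion model rather than maximizing a Gaussian likelihood, and a target network is omitted for brevity):

# Distributional Bellman backup sketch: the target outcome at s bootstraps from an
# outcome sampled at the next state s', so outcomes chain across transitions drawn
# from different trajectories, which is the mechanism behind stitching.
import torch

def outcome_td_loss(outcome_model, phi, s, s_next, gamma=0.99):
    with torch.no_grad():
        psi_next = outcome_model(s_next).sample()    # psi' ~ p(. | s')
        target = phi(s) + gamma * psi_next           # psi = phi(s) + gamma * psi'
    return -outcome_model(s).log_prob(target).sum(-1).mean()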

BibTeX

@article{zhu2024gom,
    author    = {Zhu, Chuning and Wang, Xinqi and Han, Tyler and Du, Simon Shaolei and Gupta, Abhishek},
    title     = {Transferable Reinforcement Learning via Generalized Occupancy Models},
    journal   = {arXiv preprint},
    year      = {2024},
}
}