Distributional Successor Features (DiSPOs) consist of two components: a distributional model of the possible long-term outcomes (discounted sums of cumulants) achievable from a state under the dataset, and a readout policy that produces the action realizing a chosen outcome.
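As a rough illustration of these two components, here is a minimal PyTorch sketch. The network architectures, the Gaussian-mixture parameterization of the outcome distribution, and the names (OutcomeModel, ReadoutPolicy, psi_dim) are assumptions made for exposition, not the paper's exact design.

```python
import torch
import torch.nn as nn


class OutcomeModel(nn.Module):
    """Distribution over achievable outcomes psi (discounted cumulant sums) from a state.

    The Gaussian-mixture head below is an illustrative choice, not necessarily
    the parameterization used in the paper.
    """

    def __init__(self, state_dim, psi_dim, n_comp=8, hidden=256):
        super().__init__()
        self.n_comp, self.psi_dim = n_comp, psi_dim
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.logits = nn.Linear(hidden, n_comp)              # mixture weights
        self.means = nn.Linear(hidden, n_comp * psi_dim)     # component means
        self.log_stds = nn.Linear(hidden, n_comp * psi_dim)  # component scales

    def distribution(self, s):
        h = self.trunk(s)
        mix = torch.distributions.Categorical(logits=self.logits(h))
        comp = torch.distributions.Independent(
            torch.distributions.Normal(
                self.means(h).view(-1, self.n_comp, self.psi_dim),
                self.log_stds(h).view(-1, self.n_comp, self.psi_dim).clamp(-5, 2).exp(),
            ),
            1,
        )
        return torch.distributions.MixtureSameFamily(mix, comp)


class ReadoutPolicy(nn.Module):
    """Maps a state and a desired outcome psi to an action that realizes that outcome."""

    def __init__(self, state_dim, psi_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + psi_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, s, psi):
        return self.net(torch.cat([s, psi], dim=-1))
```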
DiSPOs enable zero-shot policy optimization for arbitrary rewards without any further training at test time. Assuming rewards depend linearly on the cumulants, transferring to a downstream task reduces to a linear regression for the reward weights followed by a simple optimization problem for the best achievable outcome, which is then passed to the readout policy to generate an action.
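To make the test-time procedure concrete, here is a hedged sketch of zero-shot transfer using the hypothetical modules above. The sample-and-argmax step stands in for the paper's outcome optimization, and `act_zero_shot`, `phi`, and `n_candidates` are names introduced here for illustration.

```python
import torch


@torch.no_grad()
def act_zero_shot(s, phi, rewards, outcome_model, readout_policy, n_candidates=256):
    """Act under a new reward function without any further training.

    s:       current state, shape (state_dim,)
    phi:     cumulants of labeled transitions, shape (N, psi_dim)
    rewards: rewards of those transitions, shape (N,)
    """
    # 1) Linear regression: fit weights w such that r ~= phi @ w
    #    (the assumed linear dependence of rewards on cumulants).
    w = torch.linalg.lstsq(phi, rewards.unsqueeze(-1)).solution.squeeze(-1)

    # 2) Optimize over achievable outcomes: sample candidates from the outcome
    #    distribution at s and keep the one with the highest predicted return w^T psi.
    candidates = outcome_model.distribution(s.unsqueeze(0)).sample((n_candidates,)).squeeze(1)
    best_psi = candidates[(candidates @ w).argmax()]

    # 3) Read out the action that realizes the chosen outcome.
    return readout_policy(s.unsqueeze(0), best_psi.unsqueeze(0)).squeeze(0)
```

The same `outcome_model` and `readout_policy` are reused across all tasks; only the regressed weight vector `w` changes per reward function.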
We evaluate DiSPOs' ability to transfer to challenging downstream tasks on the D4RL benchmark. On the hardest tasks, DiSPOs transfer more successfully than model-based RL, successor features, and goal-conditioned baselines with misspecified goal distributions.
To demonstrate DiSPOs' broad transferability, we plot the normalized returns for reaching various goals in antmaze, where each tile corresponds to the task of navigating the robot to reach that particular tile. DiSPOs successfully transfer across a majority of tasks, whereas model-based RL struggles on longer-horizon tasks.
We show that DiSPOs can transfer to arbitrary rewards beyond goal-reaching in an antmaze preference environment, where the agent has to take a particular path to the goal according to a human preference (specified as a reward function). DiSPOs and model-based RL complete the task in accordance with the human preference, whereas goal-conditioned RL baselines do not conform to the preference.
We further demonstrate DiSPOs' ability to transfer to arbitrary rewards by having the same agent track various trajectories, denoted by the colored cells. All of these runs share the same outcome model and readout policy, differing only in the reward regression weights.
Since DiSPOs are trained with a distributional Bellman backup, they are able to perform "trajectory stitching," i.e., recovering optimal behavior by combining segments of suboptimal trajectories. We validate DiSPOs' stitching capability on the roboverse benchmark, where each task consists of two subtasks but the dataset only contains trajectories for each individual subtask. DiSPOs complete the tasks by stitching subtrajectories, whereas Monte-Carlo-style baselines cannot.
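The sketch below illustrates, under the same assumptions as before, why a distributional Bellman backup enables stitching: the training target at a state bootstraps an outcome sampled from the model at the next state, rather than a Monte-Carlo return computed along a single dataset trajectory. The maximum-likelihood loss and variable names are illustrative, not the paper's exact objective.

```python
import torch


def distributional_backup_loss(outcome_model, target_model, batch, gamma=0.99):
    """One TD-style training step for the outcome distribution.

    batch: dict of tensors
      's'    - states,           (B, state_dim)
      'phi'  - cumulants phi(s), (B, psi_dim)
      's2'   - next states,      (B, state_dim)
      'done' - terminal flags,   (B,)
    target_model: a periodically updated copy of outcome_model.
    """
    with torch.no_grad():
        # Bootstrap an achievable outcome from the next state instead of using the
        # Monte-Carlo return of the trajectory the transition came from; this is
        # what lets the model recombine segments of different trajectories.
        next_psi = target_model.distribution(batch['s2']).sample()
        not_done = (1.0 - batch['done'].float()).unsqueeze(-1)
        target_psi = batch['phi'] + gamma * not_done * next_psi  # distributional Bellman target

    # Fit the outcome distribution at s to the bootstrapped targets
    # (maximum likelihood here; the paper's training objective may differ).
    return -outcome_model.distribution(batch['s']).log_prob(target_psi).mean()
```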
@article{zhu2024dispo,
  author  = {Zhu, Chuning and Wang, Xinqi and Han, Tyler and Du, Simon Shaolei and Gupta, Abhishek},
  title   = {Distributional Successor Features Enable Zero-Shot Policy Optimization},
  journal = {arXiv preprint},
  year    = {2024},
}