Streaming Diffusion Policy
Fast Policy Synthesis with Variable Noise Diffusion Models
Abstract
Diffusion models have seen rapid adoption in robotic imitation learning, enabling autonomous execution of complex dexterous tasks. However, action synthesis is often slow, requiring many steps of iterative denoising, which limits their use in tasks that demand fast, reactive policies. To sidestep this, recent works have explored how distillation of the diffusion process can accelerate policy synthesis. However, distillation is computationally expensive and can hurt both the accuracy and diversity of synthesized actions. We propose Streaming Diffusion Policy (SDP), an alternative method to accelerate policy synthesis, leveraging the insight that generating a partially denoised action trajectory is substantially faster than generating a fully denoised one. At each observation, our approach outputs a partially denoised action trajectory with variable levels of noise corruption, where the immediate action to execute is noise-free and subsequent actions have increasing levels of noise and uncertainty. The partially denoised action trajectory for a new observation can then be quickly generated by applying a few steps of denoising to the previously predicted noisy action trajectory (rolled over by one timestep). We illustrate the efficacy of this approach, which dramatically speeds up policy synthesis while preserving performance across both simulated and real-world settings.
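As a rough illustration of what such a variable-noise action trajectory looks like, the sketch below forward-diffuses an action buffer so that the first slot stays (nearly) clean while later slots carry progressively more noise. The horizon `H`, the schedule, and the names (`alphas_cumprod`, `corrupt`) are illustrative assumptions for this sketch, not the paper's actual implementation or hyperparameters.

```python
# Illustrative sketch (not the authors' code): a buffer of H future actions,
# where slot 0 is essentially clean and later slots are increasingly corrupted.
import torch

H, action_dim, T = 8, 2, 100                        # horizon, action size, diffusion steps (assumed)
betas = torch.linspace(1e-4, 2e-2, T)               # assumed DDPM-style linear schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

# Assign increasing noise levels across the buffer: slot 0 -> level 0 (clean),
# slot H-1 -> level T-1 (most heavily corrupted).
noise_levels = torch.linspace(0, T - 1, H).long()

def corrupt(clean_actions: torch.Tensor) -> torch.Tensor:
    """Forward-diffuse each buffer slot to its assigned noise level, q(x_t | x_0)."""
    a_bar = alphas_cumprod[noise_levels].unsqueeze(-1)              # (H, 1)
    eps = torch.randn_like(clean_actions)
    return a_bar.sqrt() * clean_actions + (1.0 - a_bar).sqrt() * eps

buffer = corrupt(torch.zeros(H, action_dim))        # variable-noise action buffer
```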
Diffusion Policy
Streaming Diffusion Policy
Our SDP approach maintains a persistent action buffer of future actions to execute.
We first initialize the buffer to have increasing levels of noise corruption.
By running a few denoising steps on this trajectory, conditioned on the current observation, the first action becomes noise-free and ready to be executed in the environment.
After executing it, we remove it from the buffer and append a pure Gaussian noise action at the end. This again forms a trajectory we can denoise once the next observation arrives.
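A minimal, self-contained sketch of this streaming update is given below, assuming a DDPM/DDIM-style noise schedule. The denoiser `eps_model` is a placeholder stub (a trained, observation-conditioned noise-prediction network in practice), and the horizon, level schedule, and function names are assumptions for illustration rather than the released SDP implementation.

```python
# Illustrative sketch of the streaming update loop (not the authors' code).
import torch

T, H, act_dim = 100, 8, 2                           # diffusion steps, horizon, action size (assumed)
betas = torch.linspace(1e-4, 2e-2, T)
abar = torch.cumprod(1.0 - betas, dim=0)            # cumulative alpha-bar schedule
levels = torch.linspace(0, T - 1, H + 1).long()     # entering a step, slot i sits at level levels[i + 1]

def eps_model(x, t, obs):
    # Placeholder for a trained, observation-conditioned noise-prediction network.
    return torch.zeros_like(x)

def denoise_slot(x, t_hi, t_lo, obs):
    """One deterministic (DDIM-style) jump from noise level t_hi down to t_lo."""
    eps = eps_model(x, t_hi, obs)
    x0 = (x - (1.0 - abar[t_hi]).sqrt() * eps) / abar[t_hi].sqrt()
    if t_lo == 0:
        return x0                                   # slot becomes clean
    return abar[t_lo].sqrt() * x0 + (1.0 - abar[t_lo]).sqrt() * eps

def streaming_step(buffer, obs):
    """Denoise every slot by one level, pop the clean action, append pure noise."""
    for i in range(H):
        buffer[i] = denoise_slot(buffer[i], levels[i + 1], levels[i], obs)
    action = buffer[0].clone()                      # noise-free action to execute now
    buffer = torch.cat([buffer[1:], torch.randn(1, act_dim)], dim=0)
    return action, buffer

# Usage: crude all-noise initialization (the buffer could instead be forward-diffused
# to the staggered levels); after a few warm-up steps the staggered state is reached.
buffer = torch.randn(H, act_dim)
obs = torch.zeros(10)                               # placeholder observation
for _ in range(3):
    action, buffer = streaming_step(buffer, obs)
```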
Animation Demonstrating the Denoising Process
Real Push-T benchmark
Diffusion Policy
Consistency Policy
Streaming Diffusion Policy