KL-Regularized Reinforcement Learning is designed to Mode Collapse

Best AI papers explained - A podcast by Enoch H. Kang

The academic paper examines the common belief that Kullback-Leibler (KL) regularized reinforcement learning (RL) objectives, particularly as used in post-training large language models (LLMs), inherently promote or suppress output diversity depending on whether the reverse or forward KL divergence is used. The authors challenge this intuition, showing both mathematically and empirically that mode coverage and diversity depend primarily on the regularization strength and the relative scales of the rewards and reference probabilities, not on the direction of the KL divergence. They prove that typical RL settings construct an optimal solution that is unimodal by design, making diversity collapse inevitable. To counter this, the paper proposes Mode Anchored Reward Augmentation (MARA), a theoretically justified algorithm that modifies the reward function so that optimization targets a distribution placing high, roughly uniform probability on all high-quality sampling modes; the method is demonstrated on LLM and chemical language model tasks.
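For intuition, the toy sketch below (not code from the paper) evaluates the well-known closed-form optimum of the reverse-KL-regularized objective, pi*(y) proportional to pi_ref(y) * exp(r(y)/beta), over a small discrete candidate set. The candidate rewards, reference probabilities, and the `kl_regularized_optimum` helper are illustrative assumptions, but the qualitative behavior matches the paper's argument: how concentrated the optimum is depends on beta and on the relative scales of rewards and reference probabilities, and as beta shrinks the optimum collapses onto a single mode, which is the failure mode MARA is designed to counteract.

```python
import numpy as np

# Toy illustration (not the paper's code). The reverse-KL-regularized objective
#   max_pi  E_{y~pi}[r(y)] - beta * KL(pi || pi_ref)
# has the closed-form optimum  pi*(y) ∝ pi_ref(y) * exp(r(y) / beta).

def kl_regularized_optimum(rewards, pi_ref, beta):
    """Return pi*(y) ∝ pi_ref(y) * exp(r(y)/beta) over a discrete candidate set."""
    logits = np.log(pi_ref) + rewards / beta
    logits -= logits.max()                  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Three near-equally good modes (rewards 1.0, 0.9, 0.8) and two poor ones,
# with a reference model that happens to favor the third good mode.
rewards = np.array([1.0, 0.9, 0.8, 0.0, 0.0])
pi_ref  = np.array([0.05, 0.15, 0.60, 0.10, 0.10])

for beta in [1.0, 0.1, 0.01]:
    pi_star = kl_regularized_optimum(rewards, pi_ref, beta)
    print(f"beta={beta:>5}: {np.round(pi_star, 3)}")

# Large beta keeps pi* close to pi_ref; small beta concentrates essentially all
# mass on the single candidate maximizing r(y) + beta*log pi_ref(y), even though
# all three good modes are of similar quality -- the diversity collapse the paper
# analyzes. MARA instead augments the reward so the optimum spreads high, roughly
# uniform probability across all high-quality modes (the exact augmentation is
# given in the paper and not reproduced here).
```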
