Unifying RLHF Objectives
Reinforcement learning from human feedback (RLHF) tries to teach language models to optimize for human preferences, rather than the perplexity of a fixed corpus as in supervised pretraining. It does so by collecting a dataset of language model outputs and having humans rate which output is better ("Do you prefer answer A or B?"). Here, I describe several commonly used RLHF algorithms in terms of their gradients.
Consider the problem of optimizing a language model $\pi_\theta(y \mid x)$ with parameters $\theta$, which maps a prompt $x$ to a distribution over completions $y$.
We can view different RLHF algorithms by considering the gradient of their loss functions:

$$\nabla_\theta \mathcal{L}(\theta) = -\,\mathbb{E}\left[\, w_+\, \nabla_\theta \log \pi_\theta(y_+ \mid x) \;-\; w_-\, \nabla_\theta \log \pi_\theta(y_- \mid x) \,\right],$$

where $y_+$ is a chosen (high-reward) completion, $y_-$ is a rejected (low-reward) completion, and $w_+$, $w_-$ are scalar weights treated as constants with respect to $\theta$.
Intuitively, these algorithms typically increase the probability of the chosen completion, and decrease the probability of the rejected completion.
Different algorithms are differentiated by their choice of the weights $w_+$ and $w_-$.
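As a concrete reference point, here is a minimal PyTorch sketch of a surrogate loss whose gradient has exactly this form. The function name, and the convention that per-completion log-probabilities are already summed over tokens, are my assumptions rather than anything specified above.

```python
import torch

def weighted_logprob_loss(logp_chosen, logp_rejected, w_chosen, w_rejected):
    """Generic RLHF surrogate loss whose gradient matches the form above:
    it pushes up the chosen completion's log-probability (scaled by
    w_chosen) and pushes down the rejected one's (scaled by w_rejected).

    logp_*: (batch,) log-probs of each completion under pi_theta, summed over tokens.
    w_*:    (batch,) scalar weights, treated as constants (hence detached).
    """
    loss = -(w_chosen.detach() * logp_chosen
             - w_rejected.detach() * logp_rejected)
    return loss.mean()
```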
Summary
Supervised (weight on the chosen completion only): SFT and C-RLFT / Decision Transformer-style reward-conditioned training.

Unpaired (increase chosen and decrease rejected completions, each weighted independently):

- Vanilla Policy Gradient (VPG): weight each sampled completion by its reward or advantage estimate.
- Proximal Policy Optimization (PPO): same as VPG, except the gradient is zeroed once the probability ratio against the rollout policy is clipped.
- Advantage-Induced Policy Alignment (APA): a squared-error objective whose weight shrinks (and can flip sign) as the log-probability ratio against the reference policy approaches the scaled advantage.
- Kahneman-Tversky Optimization (KTO): desirable and undesirable completions are weighted separately through a sigmoid of the reference-adjusted implicit reward, so no paired comparisons are needed.

Paired (push the chosen and rejected completions in a pair apart):

- Unlikelihood: maximize the log-probability of the chosen completion and the log of one minus the probability of the rejected one.
- Reward Modeling (RM): the gradient updates the reward model rather than the policy, weighted by the sigmoid of the negated reward margin.
- Direct Preference Optimization (DPO): a shared weight given by the sigmoid of the negated implicit-reward margin, computed from log-probability ratios against a reference policy.
- Rank Responses to Align Human Feedback (RRHF) / SLiC: a hinge-style weight that is constant while the chosen completion's (length-normalized) log-probability does not yet exceed the rejected one's (by a margin, for SLiC), and zero once it does.
Note that unpaired methods may also have negative weights, e.g. when the reward or advantage is negative, in which case that completion's probability is pushed down.
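To make the summary concrete, here is a PyTorch sketch of a few of these weight choices, i.e. the scalars $w$ that multiply $\nabla_\theta \log \pi_\theta$, not full losses. The exact coefficients, the clipping threshold, and the APA normalization are illustrative assumptions; the papers differ in details such as length normalization and baselines.

```python
import torch

def vpg_weight(advantage):
    # VPG: the sampled completion is weighted by its reward/advantage
    # estimate; a negative advantage pushes the completion down.
    return advantage

def ppo_weight(advantage, logp_new, logp_old, clip_eps=0.2):
    # PPO: weight = (pi_theta / pi_old) * advantage while the ratio stays
    # inside the trust region; zero once clipping binds in the direction
    # the advantage is pushing.
    ratio = torch.exp(logp_new - logp_old)
    clipped = ((ratio > 1.0 + clip_eps) & (advantage > 0)) | \
              ((ratio < 1.0 - clip_eps) & (advantage < 0))
    return torch.where(clipped, torch.zeros_like(ratio), ratio * advantage)

def apa_weight(advantage, logp_new, logp_ref, lam=1.0):
    # APA-style (ignoring the normalization term): a squared-error pull of
    # log(pi_theta / pi_ref) toward advantage / lam, so the weight decays
    # to zero at the target and flips sign if the policy overshoots.
    return 2.0 * (advantage / lam - (logp_new - logp_ref))

def hinge_weight(logp_chosen, logp_rejected, margin=0.0):
    # RRHF / SLiC-style ranking term (length normalization omitted): a
    # constant shared weight while the chosen completion fails to beat the
    # rejected one by the margin, zero afterwards.
    return ((logp_chosen - logp_rejected) < margin).float()
```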
PPO derivation
I only include the derivations for PPO and RM as illustrative examples.
PPO starts from a policy $\pi_{\text{old}}$ (the policy that generated the rollouts) and maximizes the clipped surrogate objective

$$\mathcal{L}_{\text{PPO}}(\theta) = \mathbb{E}\left[ \min\!\left( \frac{\pi_\theta(y \mid x)}{\pi_{\text{old}}(y \mid x)}\, A(x, y),\;\; \operatorname{clip}\!\left( \frac{\pi_\theta(y \mid x)}{\pi_{\text{old}}(y \mid x)},\, 1 - \epsilon,\, 1 + \epsilon \right) A(x, y) \right) \right],$$

which immediately sets the derivative equal to zero when the clipped term attains the minimum (the ratio has left $[1 - \epsilon, 1 + \epsilon]$ in the direction favored by the advantage), since that term is constant in $\theta$.

Then, take the derivative for the other case:

$$\nabla_\theta \left[ \frac{\pi_\theta(y \mid x)}{\pi_{\text{old}}(y \mid x)}\, A(x, y) \right] = \frac{\pi_\theta(y \mid x)}{\pi_{\text{old}}(y \mid x)}\, A(x, y)\; \nabla_\theta \log \pi_\theta(y \mid x).$$

We use the "policy gradient trick" from the chain rule, $\nabla_\theta\, \pi_\theta(y \mid x) = \pi_\theta(y \mid x)\, \nabla_\theta \log \pi_\theta(y \mid x)$, so the weight on the completion is $\frac{\pi_\theta}{\pi_{\text{old}}}\, A(x, y)$ while unclipped and $0$ otherwise.
One can perform a similar derivation for the negative-advantage (rejected) case, where clipping instead binds at $1 - \epsilon$.

We can see that, compared to APA, PPO maintains a positive weight on a positive-advantage completion (one that even grows with the ratio) right up to the clipping boundary, where the gradient abruptly drops to zero; APA's weight instead decays smoothly toward zero, and can flip sign, as the policy approaches or overshoots its target.
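As a sanity check on this weight, here is a minimal PyTorch sketch of the clipped surrogate, written as a loss to minimize. The function name and the clip_eps default are illustrative assumptions, and per-completion log-probabilities are again assumed to be summed over tokens.

```python
import torch

def ppo_surrogate_loss(logp_new, logp_old, advantage, clip_eps=0.2):
    """Clipped PPO surrogate, negated so it can be minimized.

    Its gradient matches the derivation above: the weight on grad log pi is
    (ratio * advantage) while the unclipped term is active, and exactly
    zero once clipping binds.
    """
    ratio = torch.exp(logp_new - logp_old.detach())
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # PPO maximizes the pessimistic minimum of the two terms.
    return -torch.min(unclipped, clipped).mean()
```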
Reward modeling derivation
In this case, we are considering only the task of training the reward model $r_\phi(x, y)$, not the policy, so the gradients below are with respect to the reward model's parameters $\phi$.

Using the Bradley-Terry model for pairwise comparisons (where the probability that $y_+$ is preferred over $y_-$ is $\sigma\!\left( r_\phi(x, y_+) - r_\phi(x, y_-) \right)$), the loss is the negative log-likelihood:

$$\mathcal{L}(\phi) = -\,\mathbb{E}\left[ \log \sigma\!\left( r_\phi(x, y_+) - r_\phi(x, y_-) \right) \right].$$

We utilize some useful properties of the sigmoid function:

$$1 - \sigma(z) = \sigma(-z) \quad \text{(1)}$$

$$\sigma'(z) = \sigma(z)\left(1 - \sigma(z)\right) \quad \text{(2)}$$

Writing $\Delta = r_\phi(x, y_+) - r_\phi(x, y_-)$ and differentiating:

$$\nabla_\phi \mathcal{L} = -\,\mathbb{E}\left[ \frac{\sigma(\Delta)\left(1 - \sigma(\Delta)\right)}{\sigma(\Delta)}\, \nabla_\phi \Delta \right] \quad \text{(by the chain rule and (2))}$$

$$= -\,\mathbb{E}\left[ \sigma(-\Delta)\, \nabla_\phi \Delta \right] \quad \text{(by applying (1))}$$

This thus yields:

$$\nabla_\phi \mathcal{L} = -\,\mathbb{E}\left[ \sigma\!\left( r_\phi(x, y_-) - r_\phi(x, y_+) \right) \left( \nabla_\phi\, r_\phi(x, y_+) - \nabla_\phi\, r_\phi(x, y_-) \right) \right],$$

which completes the derivation with a single weight $w = \sigma\!\left( r_\phi(x, y_-) - r_\phi(x, y_+) \right)$ applied, with opposite signs, to the gradients of the chosen and rejected rewards: the update is largest when the reward model still ranks the pair incorrectly and vanishes as it becomes confident in the correct ranking.
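A minimal PyTorch version of this pairwise loss, assuming the reward model has already produced a scalar reward for each side of the pair (the function and argument names are mine):

```python
import torch
import torch.nn.functional as F

def reward_model_loss(reward_chosen, reward_rejected):
    """Bradley-Terry pairwise loss: -log sigmoid(r_+ - r_-).

    Autograd reproduces the weight derived above: both reward gradients are
    scaled by sigmoid(r_- - r_+), so updates shrink once the model
    confidently ranks the chosen completion higher.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```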
We can see DPO has a very similar formulation to APA: both aim to softly increase the chosen completion's probability, with a weight that shrinks as the policy's log-probability ratio against the reference already reflects the preference (the implicit-reward margin for DPO, the scaled advantage for APA), rather than pushing it up without bound.

In contrast, C-RLFT / Decision Transformer-style methods do not "push down" the rejected completions at all; they only apply reward-conditioned (or reward-weighted) supervised learning to the completions they keep.
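To make the contrast concrete, here is a sketch of both objectives in PyTorch. The beta default, the detaching of the weight, and the collapse of C-RLFT's class-conditioning into a simple per-example weight are illustrative assumptions, not the exact formulations from the respective papers.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO: -log sigmoid(beta * implicit-reward margin).

    The gradient weight, beta * sigmoid(-beta * margin), vanishes as the
    policy already prefers the chosen completion relative to the reference.
    """
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    return -F.logsigmoid(beta * margin).mean()

def reward_weighted_sft_loss(logp_chosen, weight):
    """C-RLFT / Decision Transformer-style: a positive, reward-derived
    weight on the kept completions only; nothing is explicitly pushed down.
    """
    return -(weight.detach() * logp_chosen).mean()
```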
Commentary
I chose the above methods because they have been used to train top models on the Chatbot Arena benchmark:
- SFT is present in many models, including Hermes
- PPO is used in top proprietary models such as ChatGPT and Gemini
- C-RLFT is used in OpenChat, the top 7B model as of Feb 2024 (used to initialize Starling)
- APA is the final stage of Starling, which builds on OpenChat
- DPO is popular in the open-source community, but performs relatively poorly in Chatbot Arena, with its best 7B model being Zephyr
- The authors of RRHF went on to build Qwen, which at the time of writing is the top open-source model on the leaderboard
Ultimately, these objective functions are conceptually very similar and all perform well once tuned; the real power is in the dataset (and how it is weighted).