Unifying RLHF Objectives

Reinforcement learning from human feedback (RLHF) teaches language models to optimize for human preferences, rather than the supervised perplexity objective used in pretraining. It does so by collecting a dataset of language model outputs and having humans rate which output is better ("Do you prefer answer A or B?"). Here, I describe several commonly-used RLHF algorithms in terms of their gradients.

Consider the problem of optimizing a language model $\pi_\theta$ from a preference dataset consisting of a context $x$ and two completions: the chosen completion $y_+$ and the rejected completion $y_-$. These represent, e.g., two different possible responses from a chatbot, with the preferred one chosen by a human.

We can compare different RLHF algorithms by considering the gradients of their loss functions:

$$\nabla_\theta L(\pi_\theta) = w_+ \nabla_\theta\!\left(\log \pi_\theta(y_+|x)\right) - w_- \nabla_\theta\!\left(\log \pi_\theta(y_-|x)\right)$$

Intuitively, these algorithms typically increase the probability of the chosen completion and decrease the probability of the rejected completion. Different algorithms are distinguished by their choice of $w_+$ and $w_-$ (for methods which do not operate on paired data, simply consider $w_- = 0$). They may also use a reward function $r(x, y)$ representing the "Elo" of the full completion. Note that simplifications are made for ease of comparison.
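
As a concrete sketch of the template above, the snippet below assumes per-sequence log-probabilities have already been computed (e.g., by summing token log-probabilities) and treats the weights as constants; everything that distinguishes the algorithms lives in how $w_+$ and $w_-$ are chosen.

```python
import torch

def unified_rlhf_loss(logp_chosen, logp_rejected, w_plus, w_minus):
    """Generic surrogate whose gradient matches the template above:
    w_plus * grad(log pi(y+|x)) - w_minus * grad(log pi(y-|x)).

    logp_chosen, logp_rejected: log pi_theta(y|x), summed over tokens.
    w_plus, w_minus: per-example weights, treated as constants.
    """
    w_plus, w_minus = w_plus.detach(), w_minus.detach()
    # Negate because optimizers minimize, while the template is an ascent
    # direction that increases the chosen completion's log-probability.
    return -(w_plus * logp_chosen - w_minus * logp_rejected).mean()
```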

Summary

Supervised (weight on $\nabla_\theta \log \pi_\theta$ is always positive):

Unpaired (increase $w_+$ proportional to $r(x, y_+)$; assume $r(x, y_+) > 0$ for clarity):

Paired (push $y_+$ and $y_-$ apart):

Note that unpaired methods may also have negative weights when $r(x, y_+) < 0$. Thus, we can think of them as dynamically choosing which samples receive negative weight, whereas paired methods assign negative weight directly based on the dataset labels.
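
A minimal sketch of that distinction, reusing `unified_rlhf_loss` from above; these particular weight choices are illustrative rather than the exact weights of any one published method:

```python
import torch

def unpaired_weights(reward):
    # Unpaired, reward-weighted: the sample's own reward sets both the sign
    # and the magnitude of w_+, so negatively-weighted samples are chosen
    # dynamically (r(x, y) < 0 pushes that completion down).
    return reward, torch.zeros_like(reward)

def paired_weights(logp_chosen, logp_rejected):
    # Paired: the dataset label decides which completion is pushed up and
    # which is pushed down, independent of any reward value.
    w = torch.ones_like(logp_chosen)
    return w, w
```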

PPO derivation

I only include the derivations for PPO and RM as illustrative examples.

PPO starts from a reference policy $\pi_{\text{ref}}$ at the beginning of training, which generates the dataset used for training, and enforces a KL divergence constraint $\mathrm{KL}(\pi_\theta \,\|\, \pi_{\text{ref}})$ to ensure that $y_+ \sim \pi_\theta(\cdot|x)$ does not diverge too much from the data used to train the reward model. It does this by maximizing:

$$L(\pi_\theta) = \min\!\left(\frac{\pi_\theta(y_+|x)}{\pi_{\text{ref}}(y_+|x)},\; 1+\epsilon\right) r(x, y_+)$$

which immediately sets the derivative equal to zero when $\frac{\pi_\theta(y_+|x)}{\pi_{\text{ref}}(y_+|x)} > 1+\epsilon$.

Then, take the derivative for the other case:

$$\nabla_\theta L(\pi_\theta) = \frac{1}{\pi_{\text{ref}}(y_+|x)}\, r(x, y_+)\, \nabla_\theta\!\left(\pi_\theta(y_+|x)\right)$$

We use the “policy gradient trick” from the chain rule, $\nabla_x f(x) = f(x)\, \nabla_x \log f(x)$, which yields the final gradient:

$$w_+ = \begin{cases} \frac{\pi_\theta(y_+|x)}{\pi_{\text{ref}}(y_+|x)}\, r(x, y_+) & \frac{\pi_\theta(y_+|x)}{\pi_{\text{ref}}(y_+|x)} < 1+\epsilon \\ 0 & \text{otherwise} \end{cases}$$

One can perform a similar derivation for the $1-\epsilon$ side of the PPO surrogate objective. This one-sided derivation is not exactly right, but captures the spirit of the maximization.
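
In code, the one-sided clipped weight looks roughly as follows (a sketch under the same simplifications: a sequence-level ratio and a single scalar reward per completion; full PPO also handles the $1-\epsilon$ side, advantages, and per-token ratios):

```python
import torch

def ppo_one_sided_weight(logp, ref_logp, reward, eps=0.2):
    # w_+ = (pi_theta / pi_ref) * r while the ratio is below 1 + eps;
    # once the ratio exceeds the clip, the surrogate's gradient vanishes.
    ratio = torch.exp(logp - ref_logp)
    w = torch.where(ratio < 1.0 + eps, ratio * reward, torch.zeros_like(ratio))
    return w.detach()
```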

We can see that, compared to APA, PPO maintains a positive $w_+$ until the $1+\epsilon$ ratio is hit (enforcing the KL divergence constraint irrespective of $r$), while APA keeps $w_+$ positive until the log-ratio equals the reward.

Reward modeling derivation

In this case, we consider only the task of training the reward model $r_\theta(x, \cdot)$ from preference data; thus, we take the derivative with respect to the parameters of $r_\theta$, rather than those of the policy $\pi$.

Using the Bradley-Terry model for pairwise comparisons (where $r_\theta$ can be interpreted as an "Elo"), we optimize the objective:

$$L(r_\theta) = \log p_\theta(y_+ > y_- \mid x) = \log \sigma\!\left(r_\theta(x, y_+) - r_\theta(x, y_-)\right)$$

We utilize some useful properties of the sigmoid function:

  1. $\sigma(-x) = 1 - \sigma(x)$
  2. $\nabla_x \sigma(x) = \sigma(x)(1 - \sigma(x)) = \sigma(x)\sigma(-x)$ (by applying (1))
  3. $\nabla_x \log \sigma(x) = \sigma(-x)$ (by applying the chain rule and (2))

This yields:

$$\begin{aligned}
\nabla_\theta L(r_\theta) &= \nabla_\theta \log \sigma\!\left(r_\theta(x, y_+) - r_\theta(x, y_-)\right) \\
&= \sigma\!\left(r_\theta(x, y_-) - r_\theta(x, y_+)\right) \nabla_\theta\!\left(r_\theta(x, y_+) - r_\theta(x, y_-)\right) \\
&= \sigma\!\left(r_\theta(x, y_-) - r_\theta(x, y_+)\right)\left(\nabla_\theta\!\left(r_\theta(x, y_+)\right) - \nabla_\theta\!\left(r_\theta(x, y_-)\right)\right)
\end{aligned}$$

which completes the derivation with $w_+ = w_-$. DPO follows a similar derivation using its implicit reward $\hat{r}_\theta = \log \frac{\pi_\theta(y|x)}{\pi_{\text{ref}}(y|x)}$, which intuitively means the policy $\pi_\theta$ "values" $y$ in proportion to its log-probability.
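
Written as losses to minimize (the negatives of the objectives above), a minimal sketch of both looks like this; the `beta` temperature on the implicit reward comes from the standard DPO parameterization and is omitted from the simplified notation above:

```python
import torch.nn.functional as F

def reward_model_loss(r_chosen, r_rejected):
    # Bradley-Terry negative log-likelihood: -log sigma(r(x,y+) - r(x,y-)).
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=1.0):
    # Same loss with the implicit reward log(pi_theta / pi_ref) (scaled by
    # beta in the original DPO paper) substituted for the learned reward.
    implicit_chosen = beta * (logp_chosen - ref_logp_chosen)
    implicit_rejected = beta * (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(implicit_chosen - implicit_rejected).mean()
```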

We can see that DPO has a very similar formulation to APA, where both aim to softly increase $\pi_\theta(y_+|x)$ until $\pi_\theta(y_+|x) = e^{r(x, y_+)}\, \pi_{\text{ref}}(y_+|x)$. This, in turn, is very similar to the PPO objective, except that PPO applies a hard clip once the ratio exceeds $1+\epsilon$. RRHF also uses a hard clip, but replaces $\pi_{\text{ref}}(y_+|x)$ with $\pi_\theta(y_-|x)$, ensuring $\pi_\theta(y_+|x) \geq \pi_\theta(y_-|x)$.

In contrast, C-RLFT / Decision Transformer-style methods do not "push down" via the $w_-$ term; rather, they condition on some notion of negative reward. Therefore, the suboptimal behavior is still in the model, but must be elicited via a negative prompt.
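
A sketch of that conditioning idea; the tag tokens and prompt format here are hypothetical, not the exact C-RLFT / OpenChat format:

```python
def conditioned_example(prompt, completion, reward, good_tag="<|good|>", bad_tag="<|bad|>"):
    # Keep every completion in the supervised data, but prefix the prompt
    # with a quality tag. Low-reward behavior stays in the model and can
    # only be elicited by conditioning on the "bad" tag; at inference time,
    # always condition on the "good" tag.
    tag = good_tag if reward > 0 else bad_tag
    return f"{tag} {prompt}", completion
```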

Commentary

I chose the above methods because they have been used to train top models on the Chatbot Arena benchmark:

  1. SFT is present in many models, including Hermes
  2. PPO is used in top foundation models such as ChatGPT and Gemini
  3. C-RLFT is used in OpenChat, the top 7B model as of Feb 2024 (used to initialize Starling)
  4. APA is the final stage of Starling, which builds on OpenChat
  5. DPO is popular in the open-source community, but performs relatively poorly in Chatbot Arena, with its best 7B model being Zephyr
  6. The authors of RRHF went on to build Qwen, which at the time of writing is the top open-source model on the leaderboard

Ultimately, their objective functions are conceptually very similar and all perform well after tuning; the real power lies in the dataset (and how it is weighted).
