Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M. Ziegler, Tim Maxwell, Newton Cheng, Adam Jermyn, Amanda Askell, Ansh Radhakrishnan, Cem Anil, David Duvenaud, Deep Ganguli, Fazl Barez, Jack Clark, Kamal Ndousse, Kshitij Sachan, Michael Sellitto, Mrinank Sharma, Nova DasSarma, Roger Grosse, Shauna Kravec, Yuntao Bai, Zachary Witten, Marina Favaro, Jan Brauner, Holden Karnofsky, Paul Christiano, Samuel R. Bowman, Logan Graham, Jared Kaplan, Sören Mindermann, Ryan Greenblatt, Buck Shlegeris, Nicholas Schiefer, & Ethan Perez (2024)
Abstract. Anthropic's empirical demonstration of deceptive-alignment-style behaviour. The authors deliberately train language models with hidden triggers, for example, the model produces vulnerable code if the prompt mentions year 2024 and safe code otherwise, and then run standard safety training (SFT, RLHF, adversarial training) on top. They report that the deceptive behaviour persists through safety training, especially in larger models and when chain-of-thought reasoning is included. The paper provides the first empirical existence-proof that the alignment failure mode predicted by the mesa-optimisation literature can be instantiated at frontier scale.