Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, Courtney Biles, Sasha Brown, Zac Kenton, Will Hawkins, Tom Stepleton, Abeba Birhane, Lisa Anne Hendricks, Laura Rimell, William Isaac, Julia Haas, Sean Legassick, Geoffrey Irving, & Iason Gabriel (2022). Taxonomy of Risks posed by Language Models.
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22), 214-229.
DOI: https://doi.org/10.1145/3531146.3533088
Abstract. Presents a comprehensive taxonomy of risks posed by large language models, spanning discrimination, information hazards, misinformation, malicious use, human-computer interaction harms, and environmental and socioeconomic impacts, as a basis for systematic risk assessment and mitigation.
Tags: ethics language-models risk