Paul Christiano (researcher)

Paul Christiano
Thesis: Manipulation-resistant online learning (2017)
Doctoral advisor: Umesh Vazirani
Website: paulfchristiano.com

Paul Christiano is an American researcher in the field of artificial intelligence (AI), with a specific focus on AI alignment, the subfield of AI safety research that aims to steer AI systems toward human interests.[1] He formerly led the language model alignment team at OpenAI and is the founder and head of the non-profit Alignment Research Center (ARC), which works on theoretical AI alignment and evaluations of machine learning models.[2][3] In 2023, Christiano was named one of the TIME 100 Most Influential People in AI (TIME100 AI).[3][4]

In September 2023, Christiano was appointed to the UK government's Frontier AI Taskforce advisory board.[5] He is also an initial trustee on Anthropic's Long-Term Benefit Trust.[6]

  1. ^ Cite error: the named reference :0 was invoked but never defined.
  2. ^ Piper, Kelsey (March 29, 2023). "How to test what an AI model can — and shouldn't — do". Vox. Retrieved August 4, 2023.
  3. ^ a b Henshall, Will (September 7, 2023). "Paul Christiano – Founder, Alignment Research Center". Time. Retrieved November 16, 2023.
  4. ^ Sibley, Jess (September 10, 2023). "The Future Is Now". Time. Vol. 202, no. 11/12. Retrieved November 16, 2023 – via EBSCOHost.
  5. ^ Skelton, Sebastian Klovig (September 7, 2023). "Government AI taskforce appoints new advisory board members". ComputerWeekly.com. Retrieved November 16, 2023.
  6. ^ Matthews, Dylan (September 25, 2023). "The $1 billion gamble to ensure AI doesn't destroy humanity". Vox. Retrieved November 16, 2023.
