AI Safety Institute

An AI Safety Institute (AISI) is a state-backed institute that aims to evaluate and ensure the safety of the most advanced artificial intelligence (AI) models, also known as frontier AI models.[1]

AI safety gained prominence in 2023, notably with public declarations about potential existential risks from AI. During the AI Safety Summit in November 2023, the United Kingdom (UK) and the United States (US) each created their own AISI. During the AI Seoul Summit in May 2024, international leaders agreed to form a network of AI Safety Institutes, comprising institutes from the UK, the US, Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada and the European Union.[2]

  1. ^ "Safety institutes to form 'international network' to boost AI research and tests". The Independent. 2024-05-21. Retrieved 2024-07-06.
  2. ^ Desmarais, Anna (2024-05-22). "World leaders agree to launch network of AI safety institutes". euronews. Retrieved 2024-06-15.
