Artificial general intelligence

Artificial general intelligence (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence capable of performing the full spectrum of cognitively demanding tasks with proficiency comparable to, or surpassing, that of humans.[1][2]

Some researchers argue that state‑of‑the‑art large language models already exhibit early signs of AGI‑level capability, while others maintain that genuine AGI has not yet been achieved.[3] AGI is conceptually distinct from artificial superintelligence (ASI), which would outperform the best human abilities across every domain by a wide margin.[4] AGI matches one common definition of strong AI.

Unlike artificial narrow intelligence (ANI), whose competence is confined to well‑defined tasks, an AGI system can generalise knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming. The concept does not, in principle, require the system to be an autonomous agent; a static model—such as a highly capable large language model—or an embodied robot could both satisfy the definition so long as human‑level breadth and proficiency are achieved.[5]

Creating AGI is a primary goal of AI research and of companies such as OpenAI,[6] Google,[7] and Meta.[8] A 2020 survey identified 72 active AGI research and development projects across 37 countries.[9]

The timeline for achieving AGI remains deeply contested. Recent surveys of AI researchers give median forecasts ranging from the early 2030s to mid‑century, while still recording significant numbers who expect arrival much sooner, or never at all.[10][11]

Perspectives span four broad camps. One group argues AGI could emerge within years or decades; another projects a century or more; a third believes it may never be built; and a vocal minority claims it already exists, pointing to the broad competencies shown by systems such as GPT‑4 and other large language models.[12][13]

The exact definition of AGI is debated, including whether modern large language models (LLMs) such as GPT-4 qualify as early forms of AGI.[14] AGI is a common topic in science fiction and futures studies.[15][16]

Whether AGI represents an existential risk is contested.[17][18][19] Many AI experts have stated that mitigating the risk of human extinction posed by AGI should be a global priority.[20][21] Others consider AGI development too far off to present such a risk.[22][23]

  1. ^ Goertzel, Ben (2014). "Artificial General Intelligence: Concept, State of the Art, and Future Prospects". Journal of Artificial General Intelligence. 5 (1): 1–48. Bibcode:2014JAGI....5....1G. doi:10.2478/jagi-2014-0001.
  2. ^ Lake, Brenden; Ullman, Tom; Tenenbaum, Joshua; Gershman, Samuel (2017). "Building machines that learn and think like people". Behavioral and Brain Sciences. 40: e253. arXiv:1604.00289. doi:10.1017/S0140525X16001837. PMID 27881212.
  3. ^ Bubeck, Sébastien (2023). "Sparks of Artificial General Intelligence: Early Experiments with GPT‑4". arXiv:2303.12712 [cs.CL].
  4. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  5. ^ Legg, Shane (2023). "Why AGI Might Not Need Agency". Proceedings of the Conference on Artificial General Intelligence.
  6. ^ "OpenAI Charter". OpenAI. Retrieved 6 April 2023. Our mission is to ensure that artificial general intelligence benefits all of humanity.
  7. ^ Grant, Nico (27 February 2025). "Google's Sergey Brin Asks Workers to Spend More Time In the Office". The New York Times. ISSN 0362-4331. Retrieved 1 March 2025.
  8. ^ Heath, Alex (18 January 2024). "Mark Zuckerberg's new goal is creating artificial general intelligence". The Verge. Retrieved 13 June 2024. Our vision is to build AI that is better than human-level at all of the human senses.
  9. ^ Baum, Seth D. (2020). A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (PDF) (Report). Global Catastrophic Risk Institute. Retrieved 28 November 2024. 72 AGI R&D projects were identified as being active in 2020.
  10. ^ "Shrinking AGI timelines: a review of expert forecasts". 80,000 Hours. 21 March 2025. Retrieved 18 April 2025.
  11. ^ "How the U.S. Public and AI Experts View Artificial Intelligence". Pew Research Center. 3 April 2025. Retrieved 18 April 2025.
  12. ^ Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Yin Tat Lee; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Marco Tulio Ribeiro; Zhang, Yi (22 March 2023). "Sparks of Artificial General Intelligence: Early Experiments with GPT-4". arXiv:2303.12712 [cs.CL].
  13. ^ "AI timelines: What do experts in artificial intelligence expect for the future?". Our World in Data. 7 February 2023. Retrieved 18 April 2025.
  14. ^ Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric (2023). "Sparks of Artificial General Intelligence: Early Experiments with GPT-4". arXiv:2303.12712 [cs.CL]. GPT-4 shows sparks of AGI.
  15. ^ Butler, Octavia E. (1993). Parable of the Sower. Grand Central Publishing. ISBN 978-0-4466-7550-5. All that you touch you change. All that you change changes you.
  16. ^ Vinge, Vernor (1992). A Fire Upon the Deep. Tor Books. ISBN 978-0-8125-1528-2. The Singularity is coming.
  17. ^ Morozov, Evgeny (30 June 2023). "The True Threat of Artificial Intelligence". The New York Times. The real threat is not AI itself but the way we deploy it.
  18. ^ "Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks". ABC News. 23 March 2023. Retrieved 6 April 2023. AGI could pose existential risks to humanity.
  19. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN 978-0-1996-7811-2. The first superintelligence will be the last invention that humanity needs to make.
  20. ^ Roose, Kevin (30 May 2023). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. Mitigating the risk of extinction from AI should be a global priority.
  21. ^ "Statement on AI Risk". Center for AI Safety. Retrieved 1 March 2024. AI experts warn of risk of extinction from AI.
  22. ^ Mitchell, Melanie (30 May 2023). "Are AI's Doomsday Scenarios Worth Taking Seriously?". The New York Times. We are far from creating machines that can outthink us in general ways.
  23. ^ LeCun, Yann (June 2023). "AGI does not present an existential risk". Medium. There is no reason to fear AI as an existential threat.
