A Sora-generated video of the Glenfinnan Viaduct that incorrectly shows a second track (the real viaduct has only one), a second chimney on its rendering of the train The Jacobite, and some carriages much longer than others
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting,[1][2] confabulation,[3] or delusion[4]) is a response generated by AI that contains false or misleading information presented as fact.[5][6] This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneously constructed responses (confabulation), rather than perceptual experiences.[6]
For example, a chatbot powered by large language models (LLMs), such as ChatGPT, may embed plausible-sounding random falsehoods within its generated content. Researchers have recognized this issue, and by 2023, analysts estimated that chatbots hallucinate as much as 27% of the time,[7] with factual errors present in 46% of generated texts.[8] Detecting and mitigating these hallucinations poses significant challenges for the practical deployment and reliability of LLMs in real-world scenarios.[7][8][9] Some critics argue that the term "AI hallucination" unreasonably anthropomorphizes computers.[3]
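To make the detection challenge concrete, the sketch below illustrates one simple heuristic sometimes used in practice: sample several answers to the same prompt and flag low agreement as a possible hallucination (a self-consistency check). This is a minimal illustration, not a method described in this article; `generate` is an assumed placeholder for any LLM call, and the sample count and agreement threshold are arbitrary.

```python
# Minimal sketch of a self-consistency hallucination check (illustrative only).
# Assumes `generate` is some function that returns one sampled answer per call;
# it is a placeholder, not a real library API.
from collections import Counter
from typing import Callable, List


def likely_hallucination(
    prompt: str,
    generate: Callable[[str], str],
    n_samples: int = 5,          # how many answers to sample (arbitrary choice)
    min_agreement: float = 0.6,  # fraction of samples that must agree (arbitrary)
) -> bool:
    """Flag a prompt when independently sampled answers disagree too much."""
    answers: List[str] = [generate(prompt).strip().lower() for _ in range(n_samples)]
    top_count = Counter(answers).most_common(1)[0][1]
    return (top_count / n_samples) < min_agreement  # low agreement => flag for review
```

Low agreement does not prove an answer is false, and unanimous answers can still be wrong, so a check like this is at best a screening signal rather than a verification of factuality.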
[4] Ortega, Pedro A.; Kunesch, Markus; Delétang, Grégoire; Genewein, Tim; Grau-Moya, Jordi; Veness, Joel; Buchli, Jonas; Degrave, Jonas; Piot, Bilal; Perolat, Julien; Everitt, Tom; Tallec, Corentin; Parisotto, Emilio; Erez, Tom; Chen, Yutian; Reed, Scott; Hutter, Marcus; de Freitas, Nando; Legg, Shane (2021). Shaking the foundations: Delusions in sequence models for interaction and control (Preprint). arXiv:2110.10819.
[5] Maynez, Joshua; Narayan, Shashi; Bohnet, Bernd; McDonald, Ryan (2020). "On Faithfulness and Factuality in Abstractive Summarization". Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. pp. 1906–1919. doi:10.18653/v1/2020.acl-main.173.
[6] Ji, Ziwei; Lee, Nayeon; Frieske, Rita; Yu, Tiezheng; Su, Dan; Xu, Yan; Ishii, Etsuko; Bang, Ye Jin; Madotto, Andrea; Fung, Pascale (31 December 2023). "Survey of Hallucination in Natural Language Generation". ACM Computing Surveys. 55 (12): 1–38. arXiv:2202.03629. doi:10.1145/3571730.