A screenshot of a video generated by the artificial intelligence model Sora. The image contains a mistake: it shows the Glenfinnan Viaduct, a famous bridge, with a second track added that does not exist in reality. The train resembles The Jacobite, a real train, but has an extra chimney that should not be there.
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting,[1] confabulation,[2] or delusion[3]) is a response generated by AI that contains false or misleading information presented as fact.[4][5][6] The term draws a loose analogy with human psychology, where a hallucination typically involves false percepts. There is a key difference, however: AI hallucination is associated with unjustified responses or beliefs rather than with perceptual experiences.[6]
For example, a chatbot powered by large language models (LLMs), such as ChatGPT, may embed plausible-sounding random falsehoods within its generated content. Researchers have recognized this issue, and by 2023 analysts estimated that chatbots hallucinate as much as 27% of the time, with factual errors present in 46% of their responses. Detecting and mitigating these hallucinations pose significant challenges for the practical deployment and reliability of LLMs in real-world scenarios.[7][8][9] Some researchers believe the specific term "AI hallucination" unreasonably anthropomorphizes computers.[2]