Hallucination
When an AI model generates confident, plausible-sounding information that is factually incorrect, fabricated, or not grounded in its training data or provided context.
Hallucination refers to AI-generated output that appears authoritative and coherent but contains fabricated facts, false claims, or information unsupported by the model's training data or provided context. It arises because language models are optimized to produce statistically plausible text, not to verify claims against a source of truth. The phenomenon is particularly problematic in high-stakes applications such as legal research, medical advice, or government services, where users may act on incorrect information. Reducing hallucinations remains an active area of AI safety research, with approaches including retrieval-augmented generation, improved training techniques, and uncertainty quantification.
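The retrieval-augmented approach can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not a production system: the toy corpus, the word-overlap `retrieve()` function, and the `generate()` stub are all hypothetical placeholders. The key idea it shows is that the model is instructed to answer only from retrieved passages, which narrows the room for fabrication.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# All names here (CORPUS, retrieve, build_grounded_prompt, generate)
# are illustrative placeholders, not a real model or library API.

CORPUS = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The Great Wall of China stretches across northern China.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the supplied context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def generate(prompt: str) -> str:
    """Placeholder for a call to an actual language model."""
    raise NotImplementedError("plug in a real model call here")

if __name__ == "__main__":
    query = "When was the Eiffel Tower completed?"
    passages = retrieve(query, CORPUS)
    prompt = build_grounded_prompt(query, passages)
    print(prompt)
    # answer = generate(prompt)  # real model call would go here
```

In a real system the toy retriever would be replaced by vector search over document embeddings, but the prompt-construction step is the part that grounds the model's answer in retrieved evidence rather than in whatever continuation seems most plausible.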
Also known as
AI hallucination, model hallucination, confabulation