AI hallucinations are a feature of LLM design, not a bug
Your news feature outlines how designers of large language models (LLMs) struggle to stop them from hallucinating (see Nature 637, 778–780; 2025). But AI confabulations are integral to how these models work. They are a feature, not a bug.
Competing Interests
The authors declare no competing interests.