
Writer Fuel: There’s a Cure for AI Hallucinations, But It Would Probably Kill AI Use


OpenAI’s latest research paper diagnoses exactly why ChatGPT and other large language models sometimes make things up, a failure known in the world of artificial intelligence as “hallucination.” It also reveals why the problem may be unfixable, at least as far as consumers are concerned.

The paper provides the most rigorous mathematical explanation yet for why these models confidently state falsehoods. It demonstrates that hallucinations aren’t just an unfortunate side effect of the way AIs are currently trained; they are mathematically inevitable.

The issue can partly be explained by mistakes in the underlying data used to train the AIs. But through a mathematical analysis of how AI systems learn, the researchers prove that even with perfect training data, the problem would still exist.
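To get a feel for that claim, here’s a minimal toy simulation (our own illustration, not the paper’s proof, and all names and numbers in it are invented for the demo): when facts are essentially arbitrary, like birthdays, a model that is forced to answer questions it has no data for can only guess, so errors show up even though every fact in the training set is flawless.

```python
import random

random.seed(0)

DAYS = 365          # possible answers (e.g., birthdays)
N_PEOPLE = 10_000   # "arbitrary facts": each person's birthday is random

# Perfect training data: every fact appears exactly once and is correct.
train = {f"person_{i}": random.randrange(DAYS) for i in range(N_PEOPLE)}

def model(person: str) -> int:
    """A toy 'model': memorizes seen facts, guesses when forced to answer."""
    if person in train:
        return train[person]          # memorized correctly
    return random.randrange(DAYS)     # no signal to go on, so it guesses

# Query people the model never saw. The data contained no errors,
# yet the model is wrong almost every time it is forced to answer.
unseen = [f"new_person_{i}" for i in range(1_000)]
truth = {p: random.randrange(DAYS) for p in unseen}
errors = sum(model(p) != truth[p] for p in unseen)
print(f"error rate on unseen arbitrary facts: {errors / len(unseen):.1%}")
# prints roughly 99.7%, i.e. 1 - 1/365: guessing, not bad data, drives it
```

The toy model could of course say “I don’t know” instead of guessing, but a chatbot that constantly declined to answer is exactly the kind of “cure” the headline is talking about.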

“Writer Fuel” is a series of cool real-world stories that might inspire your little writer heart. Check out our Writer Fuel page on the LimFic blog for more inspiration.

Full Story From Live Science