
Writer Fuel: As AI Becomes More Advanced, It Hallucinates More


The more advanced artificial intelligence (AI) gets, the more it “hallucinates,” confidently producing incorrect or fabricated information.

Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, on OpenAI’s PersonQA benchmark. That’s more than double the rate of the older o1 model. While o3 performs better than its predecessor on some measures, that improvement appears to come at the cost of more frequent hallucinations.

This raises concerns about the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University.

“Writer Fuel” is a series of cool real-world stories that might inspire your little writer heart. Check out our Writer Fuel page on the LimFic blog for more inspiration.

Full Story From Live Science