The more accurate we try to make AI models, the bigger their carbon footprint — with some prompts producing up to 50 times more carbon dioxide emissions than others, a new study has revealed.
Reasoning models, such as Anthropic’s Claude, OpenAI’s o3 and DeepSeek’s R1, are specialized large language models (LLMs) that dedicate more time and computing power to produce more accurate responses than their predecessors.
Yet, despite some impressive results, these models have been shown to face severe limitations in their ability to crack complex problems. Now, a team of researchers has highlighted another constraint on the models' performance: their exorbitant carbon footprint. They published their findings June 19 in the journal Frontiers in Communication.