
Oxford Researchers Develop Method to Identify AI Hallucinations

22 June 2024

A team from the University of Oxford has developed a statistical model that can detect when large language models (LLMs) used in generative AI chatbots are likely to produce incorrect answers. This phenomenon, known as hallucination, occurs when AI fabricates information in response to a query.

Their findings, published in the journal Nature, address a major concern in the use of generative AI, where advanced models, like those powering ChatGPT, can present made-up information as fact. This issue is particularly pressing as more students and professionals turn to AI for research and assignment help.

Dr. Sebastian Farquhar, one of the study’s authors, explained that the new method distinguishes a model’s uncertainty about what to say, that is, the meaning of its answer, from its uncertainty about how to phrase it. Despite this progress, he acknowledged that further work is needed to address systematic errors in AI outputs.

The research suggests that while measuring semantic uncertainty can improve reliability in specific cases, continued improvements are needed to tackle AI’s systematic and confidently delivered mistakes.
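The article does not describe the method's implementation, but the distinction Farquhar draws, between uncertainty over meaning and uncertainty over wording, can be illustrated by sampling several answers to the same question, grouping them by shared meaning, and measuring how spread out the groups are. The Python sketch below is a simplified illustration under that assumption; the `same_meaning` check and the toy `naive_match` matcher are hypothetical stand-ins, not the Oxford team's published code.

```python
import math

def semantic_uncertainty(answers, same_meaning):
    """Rough estimate of meaning-level uncertainty for one question.

    answers: a list of answers sampled from the model.
    same_meaning: a callable deciding whether two answers express the
    same meaning (in practice this could be an entailment model);
    it is a hypothetical placeholder here.
    """
    # Group answers into clusters of shared meaning, ignoring wording.
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])

    # Entropy over the meaning clusters: low when most samples agree
    # on one meaning, high when the answers scatter across meanings,
    # which is where fabricated answers are more likely.
    total = len(answers)
    probs = [len(c) / total for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Toy example: five sampled answers, two distinct meanings.
samples = ["Paris", "It's Paris.", "The capital is Paris", "Lyon", "Paris"]
naive_match = lambda a, b: ("paris" in a.lower()) == ("paris" in b.lower())
print(semantic_uncertainty(samples, naive_match))  # higher = less reliable
```

In this toy run, four answers share one meaning and one disagrees, so the score is low but non-zero; a model that answered consistently would score zero, while one whose answers scattered in meaning would score higher.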
