OpenAI’s Nick Turley has urged users to treat ChatGPT as a “second opinion” rather than a primary source of truth. Despite GPT-5’s major improvements, the AI still produces errors about 10% of the time. OpenAI is working to reduce hallucinations while expanding its AI-driven digital ambitions.
OpenAI’s most advanced artificial intelligence model, GPT-5, has been making headlines for its improved accuracy, natural responses, and wider capabilities. Yet despite these advances, the company is cautioning users against over-relying on its popular chatbot, ChatGPT, as their primary source of factual information.
Nick Turley, Head of ChatGPT at OpenAI, emphasized in an interview with The Verge that while GPT-5 represents a major leap forward in AI development, it is still prone to what experts call “hallucinations”—cases where the system generates content that appears convincing but is factually incorrect.
Turley explained that the problem of hallucinations has been reduced significantly in GPT-5 compared to earlier models, but it remains a persistent challenge. According to OpenAI’s internal estimates, the chatbot still delivers incorrect answers around 10 percent of the time.
“Until we are provably more reliable than a human expert across all domains, we’ll continue to advise users to double-check the answers,” Turley said. “I think people are going to continue to leverage ChatGPT as a second opinion, versus necessarily their primary source of fact.”
Why OpenAI Advises Caution
Large language models like GPT-5 are built on vast amounts of data and trained to predict the most likely sequence of words in a sentence. This makes them highly effective at generating fluent and contextually appropriate responses, but it also introduces a risk: when confronted with unfamiliar or ambiguous topics, they may “hallucinate” facts.
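The mechanics behind this can be illustrated with a toy sketch. This is not how GPT-5 actually works internally (the prompt, candidate words, and scores below are all invented for illustration), but it shows the core idea: the model assigns raw scores to possible next words, converts them into probabilities, and picks a likely continuation; it never consults a database of verified facts.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution over candidates.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign to candidate next words
# after the prompt "The capital of France is" (numbers invented).
logits = {"Paris": 9.1, "Lyon": 4.2, "London": 3.0}
probs = softmax(logits)

# The model selects a high-probability continuation; it does not
# "know" the answer, it only ranks continuations by likelihood.
best = max(probs, key=probs.get)
```

When the training data covers a topic well, the highest-probability continuation usually matches reality; on unfamiliar or ambiguous topics, a fluent but wrong continuation can win instead, which is exactly what a hallucination is.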
For everyday users, this means that while ChatGPT can be a helpful guide, assistant, or brainstorming tool, it should not be mistaken for an infallible expert. Even with continuous improvements, the reliability gap between AI models and domain experts remains.
OpenAI acknowledges this limitation and has integrated external search capabilities into ChatGPT, allowing users to verify responses against real-time information from trusted sources. This update is designed to reduce misinformation and give users greater confidence in the answers they receive.
The Challenge of 100% Accuracy
AI researchers have long acknowledged that achieving absolute reliability in machine learning systems is extremely difficult. Turley noted that the goal of completely eliminating hallucinations is still some way off.
“I’m confident we’ll eventually solve hallucinations, and I’m confident we’re not going to do it in the next quarter,” he remarked.
This realistic assessment highlights the complexity of AI development. Even with billions of parameters and advanced training methods, language models are still probabilistic systems—not definitive knowledge engines.
OpenAI’s Expanding Ambitions
Despite these ongoing challenges, OpenAI is far from slowing down. Reports suggest the company is working on developing its own web browser to integrate AI-powered tools more directly with users’ internet experience.
In a surprising remark, OpenAI CEO Sam Altman even hinted that the company would consider acquiring Google Chrome if the opportunity ever arose. While such a scenario remains hypothetical, it underscores the growing ambitions of OpenAI as it continues to reshape the digital ecosystem.
The Road Ahead
As AI becomes increasingly integrated into daily life, responsible use becomes equally important. OpenAI’s advice to treat ChatGPT as a “second opinion” reflects a broader conversation about balancing the power of AI with its risks.
For now, GPT-5 offers users a more powerful, accurate, and user-friendly tool than its predecessors, but it is not without flaws. Whether for education, professional use, or everyday problem-solving, ChatGPT can be a valuable assistant—provided users continue to cross-check its responses with trusted human and digital sources.
As Turley summed it up, the mission is clear: to build a tool that informs and assists, but not one that replaces critical thinking. Until then, ChatGPT is best used as a capable partner for brainstorming and guidance, rather than the ultimate authority on truth.