Sam Altman Cautions ChatGPT Users: Don’t Trust It Blindly



Updated On: 02-Jul-2025, 3:06 pm

Sam Altman, CEO of OpenAI, has issued a clear warning to users of ChatGPT: do not trust it blindly. Speaking in the inaugural episode of OpenAI’s official podcast, Altman addressed a growing concern—users are placing too much faith in ChatGPT’s responses despite the system’s well-documented imperfections. He emphasized that while ChatGPT is a powerful tool, it is prone to "hallucinations"—a term used in the AI community to describe AI-generated information that may be inaccurate, misleading, or entirely fictional.

Altman found it surprising how confidently users rely on ChatGPT for a variety of tasks such as writing, research, and even parenting advice. He noted that users need to manage their expectations and remain critical of the output provided. “People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates,” Altman said. He warned that such technology, though advanced, should not be treated as fully reliable and infallible. “It should be the tech that you don’t trust that much,” he added.

The core issue lies in how ChatGPT works—it predicts the next word in a sequence based on patterns learned during training on massive datasets. Unlike humans, it lacks real-world understanding and context. As a result, it can fabricate information or provide incorrect answers with convincing language, which may mislead users who aren't vigilant.
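To make that mechanism concrete, here is a minimal, purely illustrative sketch of next-word prediction: a toy bigram model that counts which word follows which in a tiny corpus and then greedily picks the most frequent continuation. The corpus, function names, and greedy sampling choice are assumptions for illustration only; production systems use large neural networks trained on vast datasets, but the underlying idea is the same, predicting a plausible continuation rather than checking facts.

```python
from collections import Counter, defaultdict

# Toy illustration (an assumed example, not OpenAI's code): learn which word
# tends to follow which from a tiny corpus, then generate by always picking
# the most frequent continuation.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count next-word frequencies for each word (a bigram table).
next_words = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Greedily extend `start` by repeatedly choosing the most common next word."""
    out = [start]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the cat sat" -- fluent-sounding, with no notion of truth
```

The point of the sketch is that nothing in this loop consults reality: the model only continues patterns it has seen, which is why fluent but false output ("hallucination") is possible.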

Altman highlighted the importance of transparency and honesty from developers and companies about what AI can and cannot do. He mentioned that while ChatGPT enjoys widespread usage with millions of interactions daily, its limitations must be openly acknowledged to avoid overreliance.

He also discussed upcoming features being considered for ChatGPT, such as persistent memory and ad-supported models. These features aim to enhance user experience and monetization but also raise concerns regarding user privacy and data protection. As ChatGPT becomes more personalized, it is crucial for users to understand what data is stored and how it is used.

Altman’s remarks align with broader concerns within the AI community. Geoffrey Hinton, a pioneering AI researcher often referred to as the “godfather of AI,” echoed similar views in a recent CBS interview. Hinton admitted that despite being one of the earliest voices warning about the risks of superintelligent AI, he finds himself trusting GPT-4 more than he probably should.

To illustrate the model’s limitations, Hinton posed a basic riddle to GPT-4: “Sally has three brothers. Each of her brothers has two sisters. How many sisters does Sally have?” GPT-4 answered incorrectly. The correct answer is one: each brother’s two sisters are Sally and one other girl, so Sally has exactly one sister. Hinton expressed surprise that the model still made such errors and suggested that future models like GPT-5 might overcome these basic reasoning shortcomings.
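For readers who want to verify the riddle’s arithmetic, here is a tiny illustrative check written for this article (an assumption, not anything run in the interview):

```python
# Illustrative check of the riddle's logic (assumed example, not from the interview).
brothers = 3                           # Sally has three brothers
sisters_per_brother = 2                # each brother's sisters are all the girls in the family
girls_in_family = sisters_per_brother  # those two girls are Sally plus her sisters
sallys_sisters = girls_in_family - 1   # exclude Sally herself
print(sallys_sisters)                  # -> 1
```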

Both Altman and Hinton agree that while AI is a transformative tool with significant potential, it should not be considered an ultimate authority. Their shared message is cautionary: as AI becomes more deeply embedded in everyday life, users must remain critical thinkers. The golden rule they advocate is clear—trust, but verify.



