London teacher Harriet Mays realized her 12-year-old son had been leaning on AI a little too heavily for his homework when she found herself double-checking that Pluto is, in fact, still a dwarf planet.
“Impressive how confident it sounded,” she said, “but incorrect. I almost fell for it myself.”
Anecdotes like this are now commonplace in English-speaking homes and classrooms as more and more people fold ChatGPT into their daily lives. But OpenAI CEO Sam Altman, whose company built the chatbot, has a surprising warning: don’t trust it quite so much.
Speaking on the OpenAI Podcast between June 18 and 20, 2025, Altman shared a paradox that even he finds baffling:
“People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don’t trust that much.”
He went on to admit that he’s among those who use ChatGPT frequently, especially for parenting advice during his newborn son’s first weeks.
“It was always on, helping me decide everything from nap routines to what to do about diaper rash,” Altman said on the podcast. “But I had to remind myself it doesn’t always get it right.”
This is where the contradiction lies: even the creator of one of the world’s most powerful AI tools says we should keep our guard up.
Launched on November 30, 2022, ChatGPT reached 100 million users within two months, making it the fastest-growing consumer application in history at the time. Built by OpenAI, which was founded in 2015 by Altman, Elon Musk, Ilya Sutskever, and others, the chatbot quickly embedded itself in the lives of students, parents, professionals, and businesses.
But underneath the smooth, conversational tone lies a problem AI researchers call hallucination: the model generating false or misleading content that sounds perfectly believable.
According to research posted on arXiv, as many as 46% of ChatGPT’s outputs may contain some kind of factual inaccuracy. And in some advanced versions, the hallucination rate climbs to 79%, as The New York Times reported on May 6, 2025.
“Only 14% of sources cited by the AI were real,” another arXiv study found. Even worse, around 17.7% of AI-generated sentences were internally inconsistent, contradicting themselves without users noticing.
Despite these issues, people continue to treat ChatGPT like an expert in everything. Why?
A mix of fluency, speed, and convenience, says Dr. Melissa Tran, an AI ethicist at the University of Toronto. “It speaks like a confident human. That alone makes people feel like it knows what it’s talking about, even when it doesn’t.”
This has led to troubling consequences: over-reliance, dependency, and even parasocial relationships, where people treat AI like a trusted friend or advisor.
Altman, who experienced this himself during those sleep-deprived nights as a new dad, now points to the bigger picture:
“We need societal guardrails. We’re at the start of something powerful, and if we’re not careful, trust will outpace reliability.”
Altman’s comments come at a time when OpenAI is already under scrutiny.
In 2023, the company was sued by several authors and media outlets over the use of copyrighted content to train ChatGPT. Then came monetization debates, including the possibility of adding ads to the platform, and Altman’s brief ouster as CEO in November 2023; he was reinstated five days later.
His June podcast appearance was a moment of rare candor.
“He’s being transparent about ChatGPT’s weaknesses,” said Dr. Ian Wallace, an AI policy researcher. “That’s not common in Big Tech. It shows some willingness to engage with the public critically.”
Back in London, Harriet has a new rule in her house: “Check everything twice: once with ChatGPT, and once with Google.”
“AI is amazing. But it’s not magic. And it definitely isn’t always right.”
Altman would probably agree. His message is clear: ChatGPT is useful, but flawed. Treat it like a helpful assistant, not an infallible oracle.