
OpenAI has unveiled the latest and most anticipated version of its artificial intelligence chatbot, GPT-5, claiming it offers doctoral-level expertise.
OpenAI co-founder and CEO Sam Altman described the new model, billed as “smarter, faster, and more useful,” as the beginning of a new era for ChatGPT.
“I think something like GPT-5 would be almost unimaginable in any other period of human history,” Altman said before Thursday’s launch.
GPT-5 arrives with claims of doctoral-level skill in areas such as programming and writing, as tech companies continue to race to build the most advanced AI chatbot.
Elon Musk has recently made similar claims about his chatbot, Grok, which is integrated with the X platform (formerly Twitter). During the launch of the latest version last month, Musk described it as “better than a PhD at everything,” calling it the smartest AI in the world.
Meanwhile, Altman said GPT-5 will produce fewer “hallucinations” – the phenomenon in which large language models invent answers – and will be less deceptive.
OpenAI is also promoting GPT-5 as a capable assistant for programmers, following the trend set by other American AI developers, including Anthropic with its Claude Code coding tool.
What can GPT-5 do?
OpenAI says GPT-5 is capable of building complete software and demonstrates improved reasoning – working through answers with logical steps and deductions.
The company claims that the model has been trained to be more honest and provide more accurate answers, while overall "feeling more human."
According to Altman, the model is “significantly better” than its predecessors:
“GPT-3 felt like talking to a high school student… GPT-4 like a university student. GPT-5 is the first time it feels like talking to a real expert on any subject – a PhD-level expert.”
However, for Professor Carissa Véliz of the Institute for Ethics in AI, the launch may not be as significant as the marketing makes it out to be.
“These systems, as impressive as they are, have yet to become truly profitable,” she said, emphasizing that AIs only mimic – and do not truly represent – human reasoning.
“There is a fear that the hype must be kept up, otherwise the bubble may burst – and perhaps all this is more marketing than revolution.”
Other experts have warned that the growing power of AI systems like GPT-5 is widening the gap between the technology’s capabilities and our ability to govern it in line with public expectations.
“The more powerful these models become, the more urgent the need for comprehensive regulation becomes,” said Gaia Marcus, director at the Ada Lovelace Institute.
BBC AI correspondent Marc Cieslak had exclusive access to GPT-5 ahead of its official launch.
“Apart from minor cosmetic changes, the experience was similar to using the previous chatbot: you give it tasks or ask it questions via text,” he said.
“It now uses what is called a ‘reasoning model,’ which means it tries harder to solve problems – but this feels more like an evolution, not a revolution.”
GPT-5 also has implications for businesses concerned about AI systems using their content.
“The more compelling AI-generated content becomes, the more we need to ask ourselves – are we protecting the people and creativity behind what we see every day?” said Grant Farhall, head of product at Getty Images.
According to him, it is important to examine how these models are trained and to ensure that creators are compensated if their work is used.
The new model will be rolled out to all users starting Thursday.
In the coming days it will become clearer whether GPT-5 is really as good as Sam Altman claims.