Photo: Camilo Concha/Shutterstock.com
OpenAI has announced the launch of ‘Deep Research,’ a new feature in ChatGPT designed to conduct multi-step investigations on the internet. Unlike standard AI-generated summaries, Deep Research independently discovers, analyses, and synthesises insights from hundreds of online sources, producing detailed reports.
The feature is powered by a version of OpenAI’s upcoming o3 model that has been optimised for web browsing and data analysis. Unlike traditional search engines, which return fragmented results, Deep Research consolidates its findings into structured reports.
According to OpenAI, this capability is crucial for knowledge workers who require thorough exploration of domain-specific topics and need every claim to be well-supported and verified.
“Powered by a version of the upcoming OpenAI o3 model that’s optimised for web browsing and data analysis, it leverages reasoning to search, interpret, and analyse massive amounts of text, images, and PDFs on the internet, pivoting as needed in reaction to information it encounters,” OpenAI said.

Deep Research is currently being rolled out to Pro users, with an allocation of up to 100 monthly queries. OpenAI plans to expand access to Plus and Team users next, followed by an Enterprise rollout. However, availability in regions such as the United Kingdom, Switzerland, and the European Economic Area is still pending due to infrastructure and regulatory considerations.
OpenAI also explained how Deep Research differs from GPT-4o: while GPT-4o is optimised for real-time, multimodal conversations, Deep Research focuses on exhaustive exploration and documentation.
Deep Research scored 26.6% accuracy on the newly released Humanity’s Last Exam, an AI benchmark that assesses expert-level knowledge across more than 100 subjects, from rocket science to the humanities. Similarly, the model set a new state-of-the-art (SOTA) score on the GAIA benchmark.
Despite its advanced capabilities, Deep Research is not without limitations. OpenAI acknowledges that the tool can still generate occasional inaccuracies or hallucinate facts, albeit at a lower rate than existing models. Additionally, it may struggle to distinguish authoritative sources from unreliable information, and it sometimes fails to convey uncertainty accurately.