OpenAI now shows more details of the reasoning process of o3-mini, its latest reasoning model. The change was announced on OpenAI's X account and comes as the AI lab faces increased pressure from DeepSeek-R1, a rival open model that fully displays its reasoning tokens.
![](https://venturebeat.com/wp-content/uploads/2025/02/image_ce3f8f.png?w=477)
Models like o3 and R1 undergo a lengthy "chain of thought" (CoT) process in which they generate extra tokens to break the problem down, reason about and test different answers, and reach a final solution. Previously, OpenAI's reasoning models hid their chain of thought and produced only a high-level overview of the reasoning steps. This made it difficult for users and developers to understand the model's reasoning logic and to adjust their instructions and prompts to steer it in the right direction.
OpenAI regarded the chain of thought as a competitive advantage and hid it to prevent competitors from copying it to train their own models. But with R1 and other open models displaying their full reasoning traces, the lack of transparency became a drawback for OpenAI.
The new version of o3-mini shows a more detailed version of the CoT. Although we still don't see the raw tokens, it provides much more clarity into the reasoning process.
![](https://venturebeat.com/wp-content/uploads/2025/02/image_264b8a.png?w=418)
Why it matters for applications
In our previous experiments with o1 and R1, we found that o1 was slightly better at solving data analysis and reasoning problems. However, one of its main limitations was that there was no way to understand why the model made mistakes, and it often made mistakes when faced with messy real-world data gathered from the web. R1's chain of thought, on the other hand, allowed us to troubleshoot problems and modify our prompts to improve its reasoning.
For example, in one of our experiments, both models failed to provide the right answer. But thanks to R1's detailed chain of thought, we were able to discover that the problem was not with the model itself but with the retrieval stage that gathered information from the web. In other experiments, R1's chain of thought gave us clues when it failed to parse the information we provided, while o1 offered only a very rough overview of how it arrived at its answer.
We tested the new o3-mini model on a variant of a previous experiment we ran with o1. We gave the model a text file containing the prices of various stocks from January 2024 to January 2025. The file was noisy and unformatted, a mixture of raw text and HTML elements. We then asked the model to calculate the value of a portfolio that invested $140 in the Magnificent 7 stocks on the first day of each month from January 2024 to January 2025, distributed evenly across all the stocks (we used the term "Mag 7" in the prompt to make the task slightly harder).
o3-mini's CoT was genuinely useful this time. First, the model reasoned about what the Mag 7 was, filtered the data to keep only the relevant stocks (to make the problem harder, we had added a few non-Mag 7 stocks to the data), calculated the monthly amount to invest in each stock, and performed the final calculations to produce the right answer (the portfolio was worth around $2,200 as of the last date recorded in the data we provided to the model).
![](https://venturebeat.com/wp-content/uploads/2025/02/image_133321.png?w=800)
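For reference, the calculation we asked the model to perform can be sketched in a few lines of Python. This is our own illustrative version, not the model's output: the tickers and prices below are made-up placeholders, and the real experiment used the noisy price file described above.

```python
# Sketch of the portfolio experiment: $140 invested on the first day of
# each month, split evenly across the "Magnificent 7" ($20 per stock),
# buying fractional shares; the portfolio is valued at the last month's
# prices. Tickers and prices are fabricated for illustration.

MAG7 = ["AAPL", "MSFT", "GOOGL", "AMZN", "NVDA", "META", "TSLA"]

def portfolio_value(monthly_prices, monthly_budget=140.0):
    """monthly_prices: list of {ticker: price} dicts, one per month.
    Returns the portfolio's value at the final month's prices."""
    per_stock = monthly_budget / len(MAG7)  # $20 per stock per month
    shares = {t: 0.0 for t in MAG7}
    for prices in monthly_prices:
        for t in MAG7:
            shares[t] += per_stock / prices[t]  # accumulate fractional shares
    last = monthly_prices[-1]
    return sum(shares[t] * last[t] for t in MAG7)

# Toy example with two months of placeholder prices:
example = [
    {"AAPL": 100, "MSFT": 200, "GOOGL": 150, "AMZN": 120,
     "NVDA": 50, "META": 300, "TSLA": 250},
    {"AAPL": 110, "MSFT": 210, "GOOGL": 160, "AMZN": 130,
     "NVDA": 75, "META": 310, "TSLA": 240},
]
print(round(portfolio_value(example), 2))
```

The hard part of the task for the model wasn't this arithmetic; it was identifying the Mag 7 tickers from the "Mag 7" shorthand and extracting clean prices from the noisy file before doing it.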
It will take many more tests to find the limits of the new chain of thought, since OpenAI still hides plenty of details. But in our vibe checks, the new format seems far more useful.
What it means for OpenAI
When DeepSeek-R1 was released, it had three clear advantages over OpenAI's reasoning models: it was open, cheap, and transparent.
Since then, OpenAI has managed to narrow the gap. While o1 costs $60 per million output tokens, o3-mini costs just $4.40, while outperforming o1 on many reasoning benchmarks. R1 costs around $7 to $8 per million tokens on U.S. providers. (DeepSeek offers R1 at $2.19 per million tokens on its own servers, but many organizations won't be able to use it because it is hosted in China.)
With the new change to its CoT output, OpenAI has managed to work around the transparency problem somewhat.
It remains to be seen what OpenAI will do about opening up its models. Since its release, R1 has already been adapted, forked, and hosted by many different labs and companies, making it the preferred reasoning model for enterprises. OpenAI CEO Sam Altman recently admitted that the company has been on the "wrong side of history" in the open-source debate. We will have to see how this realization manifests in OpenAI's future releases.