The release of the DeepSeek-R1 reasoning model sent shock waves through the tech industry, the most visible sign being the sudden sell-off of major AI stocks. The advantage of well-funded AI labs such as OpenAI and Anthropic no longer seems so solid, as DeepSeek has reportedly been able to develop a competitor to o1 at a fraction of the cost.
While some AI labs may be in crisis mode, for the enterprise sector this is mostly good news.
Cheaper applications, more applications
As we have said here before, one of the trends worth watching in 2025 is the continued drop in the cost of using AI models. Companies should experiment and build prototypes with the latest AI models regardless of price, knowing that ongoing price reductions will eventually allow them to deploy their applications at scale.
This trend has just seen a huge step change. OpenAI's o1 costs $60 per million output tokens versus $2.19 per million for DeepSeek-R1. And if you are concerned about sending your data to Chinese servers, you can access R1 on U.S.-based providers such as Together AI and Fireworks AI, where it is priced at $8 and $9 per million tokens, respectively. That is still a huge bargain compared to o1.
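To make the gap concrete, here is a back-of-the-envelope cost comparison using only the per-million-output-token prices quoted above (the workload size of 50 million tokens per month is a hypothetical example, and `monthly_cost` is an illustrative helper, not any provider's API):

```python
# Per-million-output-token prices quoted in this article (USD).
PRICES_PER_M_TOKENS = {
    "openai-o1": 60.00,
    "deepseek-r1": 2.19,
    "r1-us-hosted": 8.00,  # Together AI's quoted price; Fireworks AI is ~$9
}

def monthly_cost(output_tokens: int, model: str) -> float:
    """Dollar cost of generating the given number of output tokens."""
    return output_tokens / 1_000_000 * PRICES_PER_M_TOKENS[model]

# Example: an application generating 50M output tokens per month.
for model in PRICES_PER_M_TOKENS:
    print(f"{model}: ${monthly_cost(50_000_000, model):,.2f}")
# openai-o1:    $3,000.00
# deepseek-r1:    $109.50
# r1-us-hosted:   $400.00
```

Even routed through a U.S. host, the same workload costs roughly an order of magnitude less than on o1.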
To be fair, o1 still has an edge over R1, but not by enough to justify such a huge price difference. Besides, R1's capabilities will be sufficient for most enterprise applications. And we can expect more advanced and capable models to be released in the coming months.
We can also expect second-order effects on the broader AI market. For example, OpenAI CEO Sam Altman announced that free ChatGPT users will soon have access to o3-mini. Although he did not explicitly cite R1 as the reason, the fact that the announcement came shortly after R1's release is telling.
More innovation
R1 still leaves many questions unanswered. For example, there are multiple reports that DeepSeek trained the model on outputs from OpenAI's large language models (LLMs). But if its paper and technical report are accurate, DeepSeek was able to create a model that nearly matches the state of the art while slashing costs and removing some of the technical steps that require heavy manual labor.
If others can reproduce DeepSeek's results, it can be good news for the AI labs and companies that have been sidelined by the financial barriers to innovation in the field. Enterprises can expect faster innovation and more AI products to power their applications.
What will happen to the billions of dollars that big tech companies have spent acquiring hardware accelerators? We still have not reached the ceiling of what is possible with AI, so the leading tech companies will be able to do more with their resources. More affordable AI will, in fact, increase demand in the medium and long term.
But more importantly, R1 is proof that not everything depends on bigger compute clusters and datasets. With the right engineering chops and the right talent, you can push the limits of what is possible.
Open Source for victory
To be clear, R1 is not fully open source, as DeepSeek has only released the weights, not the code or the full details of the training data. Nonetheless, it is a big win for the open-source community. Since the release of DeepSeek-R1, more than 500 derivatives have been published on Hugging Face, and the model has been downloaded millions of times.
This will also give companies more flexibility over where to run their models. In addition to the full 671-billion-parameter model, there are distilled versions of R1, ranging from 1.5 billion to 70 billion parameters, allowing companies to run the model on a variety of hardware. Moreover, unlike o1, R1 reveals its full chain of thought, giving developers a better understanding of the model's behavior and the ability to steer it in the desired direction.
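Because the chain of thought is exposed, developers can separate the model's reasoning trace from its final answer and log or inspect each independently. A minimal sketch, assuming the common R1 deployment convention of wrapping the reasoning in `<think>...</think>` tags (check your provider's output format, as this framing may differ):

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning, final_answer).

    Assumes the reasoning trace is wrapped in <think>...</think> tags,
    as in common DeepSeek-R1 deployments; adjust for your provider.
    """
    match = re.search(r"<think>(.*?)</think>", completion, re.DOTALL)
    if match is None:
        # No visible reasoning trace; treat the whole text as the answer.
        return "", completion.strip()
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()
    return reasoning, answer

# Hypothetical completion for illustration:
sample = "<think>2 + 2 is basic arithmetic; the sum is 4.</think>The answer is 4."
reasoning, answer = split_reasoning(sample)
print(answer)  # -> The answer is 4.
```

Logging the `reasoning` string alongside the answer makes it easier to debug wrong outputs and to tune prompts toward the desired behavior.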
With open source catching up to closed models, we can hope for a renewed commitment to sharing knowledge and research so that everyone can benefit from AI progress.