Mistral AI, the fast-rising European artificial intelligence startup, today unveiled a new language model that it claims matches the performance of models three times its size while dramatically cutting computing costs, a development that could reshape the economics of advanced AI deployment.
The new model, called Mistral Small 3, has 24 billion parameters and achieves 81% accuracy on standard benchmarks while processing 150 tokens per second. The company is releasing it under the permissive Apache 2.0 license, allowing businesses to modify and deploy it freely.
“We believe it is the best model among all models of less than 70 billion parameters,” said Guillaume Lample, Mistral’s chief scientist, in an exclusive interview with VentureBeat. “We believe it is basically on par with Meta’s Llama 3.3 70B that was released a few months ago, which is a model three times larger.”
The announcement comes amid intense scrutiny of AI development costs, following claims from Chinese startup DeepSeek that it trained a competitive model for just $5.6 million, claims that wiped nearly $600 billion off Nvidia’s market value this week as investors questioned the massive investments made by American tech giants.

How a French startup built an AI model that rivals Big Tech at a fraction of the size
Mistral’s approach focuses on efficiency rather than scale. The company achieved its performance gains primarily through improved training techniques rather than throwing more computing power at the problem.
“What changed is basically the training optimization techniques,” Lample told VentureBeat. “The way we trained the model was a bit different, a different way to optimize it, to modify the weights during free learning.”
The model was trained on 8 trillion tokens, compared with 15 trillion for comparable models, according to Lample. That efficiency could make advanced AI capabilities more accessible to businesses concerned about computing costs.
Notably, Mistral Small 3 was developed without reinforcement learning or synthetic training data, techniques commonly used by competitors. Lample said this “raw” approach helps avoid embedding unwanted biases that could be difficult to detect later.

Privacy and enterprise: why businesses are eyeing smaller AI models for mission-critical tasks
The model is aimed particularly at enterprises that require on-premises deployment for privacy and reliability reasons, including financial services, healthcare and manufacturing companies. It can run on a single GPU and handles 80 to 90% of typical business use cases, according to the company.
“Many of our customers want an on-premises solution because they care about privacy and reliability,” said Lample. “They don’t want critical services relying on systems they don’t fully control.”

European AI champion paves the way for open source dominance as an IPO looms
The release comes as Mistral, valued at $6 billion, positions itself as Europe’s champion in the global AI race. The company recently took investment from Microsoft and is preparing for a possible IPO, according to CEO Arthur Mensch.
Industry observers say Mistral’s emphasis on smaller, more efficient models could prove prescient as the AI industry matures. The approach contrasts with companies like OpenAI and Anthropic, which have focused on developing ever larger and more expensive models.
“We are probably going to see the same thing that we saw in 2024, but maybe even more than that, which is basically a lot of open source models with very permissive licenses,” Lample predicted. “We believe it is very likely that this kind of model becomes something of a commodity.”
As competition intensifies and efficiency-focused approaches emerge, Mistral’s strategy of optimizing smaller models could help democratize access to advanced AI, potentially accelerating adoption across industries while reducing computing infrastructure costs.
The company says it will release additional models with improved reasoning capabilities in the coming weeks, setting up an interesting test of whether its efficiency-focused approach can keep pace with the capabilities of much larger systems.