2025 has already brought us the most performant AI ever: What can we do with these supercharged capabilities (and what’s next)?

MT HANNACH



The latest large language model (LLM) releases, such as Anthropic's Claude 3.7 and xAI's Grok 3, often perform at a doctoral level – at least according to certain benchmarks. This achievement marks the next step toward what former Google CEO Eric Schmidt envisions: a world where everyone has access to "a great polymath," an AI capable of drawing on vast bodies of knowledge to solve complex problems across disciplines.

Wharton Business School professor Ethan Mollick noted on his One Useful Thing blog that these latest models were trained using far more computing power than GPT-4 had at its launch two years ago, with Grok 3 trained on up to 10 times as much compute. He added that this would make Grok 3 the first "Gen 3" AI model, stressing that "this new generation of AIs is smarter, and the jump in capabilities is striking."

For example, Claude 3.7 shows emergent capabilities, such as anticipating user needs and considering new angles on a problem. According to Anthropic, it is the first hybrid reasoning model, combining a traditional LLM for fast responses with advanced reasoning capabilities for solving complex problems.
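To make the "hybrid" idea concrete, here is a minimal sketch of how a single system might route between a fast path and an extended reasoning path. It is purely illustrative – the routing heuristic and function names are assumptions, not Anthropic's implementation.

```python
# Toy sketch of a hybrid-reasoning router: simple prompts get a quick answer,
# harder ones get an extended, step-by-step pass first. Illustrative only;
# the heuristic and helper functions are hypothetical, not Anthropic's design.

def looks_complex(prompt: str) -> bool:
    """Crude stand-in for whatever signal decides a prompt needs deep reasoning."""
    keywords = ("prove", "design", "analyze", "plan", "multi-step")
    return len(prompt.split()) > 30 or any(k in prompt.lower() for k in keywords)

def fast_answer(prompt: str) -> str:
    return f"[quick reply to: {prompt!r}]"

def extended_reasoning(prompt: str, max_steps: int = 5) -> str:
    # Spend extra "thinking" steps before committing to a final answer.
    thoughts = [f"step {i + 1}: refine the approach" for i in range(max_steps)]
    return "\n".join(thoughts) + f"\n[considered answer to: {prompt!r}]"

def hybrid_respond(prompt: str) -> str:
    return extended_reasoning(prompt) if looks_complex(prompt) else fast_answer(prompt)

if __name__ == "__main__":
    print(hybrid_respond("What is the capital of France?"))
    print(hybrid_respond("Design a multi-step plan to analyze antibiotic resistance data."))
```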

Mollick attributed these advances to two converging trends: the rapid expansion of computing power used to train LLMs, and AI's growing ability to tackle complex problem-solving (often described as reasoning or thinking). He concluded that, together, these two trends are "supercharging AI capabilities."

What can we do with this supercharged AI?

In an important step, OpenAI launched its "deep research" AI agent in early February. In his review on Platformer, Casey Newton wrote that deep research seemed "impressive." Newton noted that deep research and similar tools could significantly accelerate research, analysis and other forms of knowledge work, though their reliability in complex fields remains an open question.

Based on a variant of the still-unreleased o3 reasoning model, deep research can engage in extended reasoning over long periods. It does this using chain-of-thought (CoT) reasoning, breaking complex tasks into a series of logical steps, much as a human researcher might refine their approach. It can also search the web, giving it access to more up-to-date information than what is in the model's training data.
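For readers who want a feel for what that loop looks like, below is a toy sketch of chain-of-thought-style task decomposition with a placeholder retrieval step. It is not OpenAI's deep research agent; the plan, the helper names and the fake search call are hypothetical stand-ins.

```python
# Toy chain-of-thought loop: decompose a research question into steps, pull in
# "fresher" information via a placeholder search call, and carry intermediate
# results forward. Purely illustrative; every name here is hypothetical.

from dataclasses import dataclass

@dataclass
class Step:
    description: str
    result: str = ""

def plan_steps(question: str) -> list[Step]:
    # A real agent would ask the model to decompose the task; we hard-code a plan.
    return [
        Step("Clarify what the question is really asking"),
        Step("Search the web for recent sources"),
        Step("Synthesize findings into a draft answer"),
        Step("Check the draft against the original question"),
    ]

def fake_web_search(query: str) -> str:
    # Stand-in for retrieval; returns a placeholder "fresh" snippet.
    return f"<snippet about {query!r} newer than the training cutoff>"

def run_chain_of_thought(question: str) -> str:
    context = question
    for step in plan_steps(question):
        if "Search the web" in step.description:
            step.result = fake_web_search(question)
        else:
            step.result = f"reasoned over: {context[-60:]}"
        context += "\n" + step.result  # each step builds on the last
    return context

if __name__ == "__main__":
    print(run_chain_of_thought("How would you build a hydrogen electrolysis plant?"))
```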

Timothy Lee wrote in Understanding AI about several tests that experts ran on deep research, noting that "its performance demonstrates the impressive capabilities of the underlying o3 model." One test asked for instructions on how to build a hydrogen electrolysis plant. Commenting on the quality of the output, a mechanical engineer "estimated that it would take an experienced professional a week to create something as good as the 4,000-word report OpenAI's tool generated in four minutes."

But wait, there's more…

Google DeepMind also recently released "AI co-scientist," a multi-agent AI system built on its Gemini 2.0 LLM. It is designed to help scientists generate new hypotheses and research plans. Imperial College London has already demonstrated the tool's value. According to Professor José R. Penadés, his team had spent years working out why some superbugs are resistant to antibiotics; the AI reproduced their findings in just 48 hours. While the AI dramatically accelerated hypothesis generation, human scientists were still needed to confirm the results. Nevertheless, Penadés said the new AI application "has the potential to supercharge science."
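As a rough mental model of that multi-agent pattern – one agent proposes, another critiques, and humans validate – here is a toy sketch. It illustrates only the general idea, not Google's AI co-scientist, and every function in it is a hypothetical placeholder.

```python
# Toy multi-agent hypothesis loop: a generator proposes candidate hypotheses,
# a critic ranks them, and the top candidates go back to human scientists for
# experimental validation. Illustrative only; not Google's AI co-scientist.

import random

def generator_agent(question: str) -> list[str]:
    """Propose candidate hypotheses for a research question (placeholder logic)."""
    mechanisms = ["horizontal gene transfer", "efflux pump overexpression", "biofilm formation"]
    return [f"{question} -> possible mechanism: {m}" for m in mechanisms]

def critic_agent(hypotheses: list[str]) -> list[tuple[str, float]]:
    """Score each hypothesis; a real system would draw on models, literature or data."""
    return sorted(((h, random.random()) for h in hypotheses), key=lambda x: -x[1])

def co_scientist_round(question: str) -> list[tuple[str, float]]:
    # Human scientists would take the ranked list from here and test it in the lab.
    return critic_agent(generator_agent(question))

if __name__ == "__main__":
    for hypothesis, score in co_scientist_round("Why are some superbugs antibiotic-resistant?"):
        print(f"{score:.2f}  {hypothesis}")
```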

What would it mean to supercharge science?

Last October, Anthropic CEO Dario Amodei wrote in his "Machines of Loving Grace" essay that he expected "powerful AI" – his term for what most call artificial general intelligence (AGI) – could deliver "the next 50 to 100 years of biological [research] progress in 5 to 10 years." Four months ago, the idea of compressing up to a century of scientific progress into a single decade seemed extremely optimistic. With the recent advances in AI models, notably Anthropic's Claude 3.7, OpenAI's deep research and Google's AI co-scientist, the near-term "radical transformation" Amodei described is starting to look much more plausible.

However, even as AI accelerates scientific discovery, biology, at least, remains bound by real-world constraints – experimental validation, regulatory approval and clinical trials. The question is no longer whether AI will transform science (it almost certainly will), but rather how quickly that impact will arrive.

In a February 9 blog post, OpenAI CEO Sam Altman said that "systems that start to point to AGI are coming into view." He described AGI as "a system that can tackle increasingly complex problems, at human level, in many fields."

Altman believes reaching this milestone could unlock a near-utopian future in which "the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential."

A dose of humility

These AI advances are hugely significant and prefigure a very different future arriving in short order. However, AI's dazzling rise has not been without stumbles. Consider the recent fall of the Humane AI Pin – a device pitched as a smartphone replacement after a buzzworthy TED Talk. Barely a year later, the company collapsed, and its remains were sold off for a fraction of its once sky-high valuation.

Real-world AI applications often face significant obstacles for many reasons, from a lack of relevant expertise to infrastructure limitations. That was certainly the experience of Sensei Ag, a startup backed by one of the world's wealthiest investors. The company set out to apply AI to agriculture by breeding improved crop varieties and using robots for harvesting, but it encountered major obstacles. According to The Wall Street Journal, the startup faced numerous setbacks, from technical challenges to unexpected logistical difficulties, highlighting the gap between AI's potential and its practical implementation.

What comes next?

Looking to the near future, science stands at the dawn of a new golden age of discovery, with AI becoming an increasingly capable research partner. Deep learning algorithms working in tandem with human curiosity could untangle complex problems at record speed, as AI systems digest vast datasets, spot patterns invisible to humans and suggest interdisciplinary hypotheses.

Already, scientists are using AI to compress research timelines – predicting protein structures, scanning the literature and reducing years of work to months or even days – unlocking possibilities across fields from climate science to medicine.

However, as the potential for radical transformation becomes clearer, so too do the looming risks of disruption and instability. Altman himself acknowledged in his blog post that "the balance of power between capital and labor could easily get messed up," a subtle but significant warning that AI's economic impact could be destabilizing.

This concern is already materializing, as seen in Hong Kong, where the city recently cut 10,000 civil service jobs while simultaneously ramping up AI investments. If such trends continue and spread, we could see widespread labor upheaval, rising social unrest and intense pressure on institutions and governments worldwide.

Adapting to an AI-powered world

AI's growing capabilities in scientific discovery, reasoning and decision-making mark a profound shift that presents both extraordinary promise and formidable challenges. While the path ahead may be marked by economic disruption and institutional strain, history has shown that societies can adapt to technological revolutions, though not always easily or without consequence.

To navigate this transformation successfully, societies must invest in governance, education and workforce adaptation to ensure that AI's benefits are distributed fairly. Even as AI regulation faces political resistance, scientists, policymakers and business leaders must collaborate to build ethical frameworks, enforce transparency and craft policies that mitigate risks while amplifying AI's transformative impact. If we rise to this challenge with foresight and responsibility, people and AI can take on the world's biggest problems, ushering in a new era of breakthroughs that once seemed impossible.
