AI systems are unlikely to make the scientific discoveries some leading labs are hoping for, Hugging Face’s top scientist says

MT HANNACH
4 Min Read

Thomas Wolf, Hugging Face's co-founder and chief science officer, says that current AI systems are unlikely to make the scientific discoveries some leading labs are hoping for.

Speaking to Fortune at Viva Technology in Paris, the Hugging Face co-founder said that while large language models (LLMs) have shown an impressive ability to find answers to questions, they struggle to ask good ones, something Wolf considers the hardest part of real scientific progress.

“In science, asking the question is the hard part, it's not finding the answer,” said Wolf. “Once the question is asked, the answer is often quite obvious, but the hard part is really asking the question, and models are very bad at asking great questions.”

Wolf said he arrived at this conclusion after reading a widely circulated blog post by Anthropic CEO Dario Amodei called Machines of Loving Grace. In it, Amodei argues that the world is on the verge of a “compressed 21st century,” in which AI radically accelerates science within a few years.

Wolf said he had initially found the piece inspiring, but began to doubt Amodei's idealistic vision of the future on a second reading.

“He said AI is going to solve cancer, and it's going to solve mental health problems, it's even going to bring world peace, but then I read it a second time and realized there's something that feels very wrong about it, and I don't believe it,” he said.

For Wolf, the problem is not that AI lacks knowledge, but that it lacks the capacity to challenge our existing frameworks of knowledge. AI models are trained to predict likely continuations, the next word in a sentence, for example, and although today's models excel at imitating human reasoning, they fall short of genuine original thought.

“The models are just trying to predict the most likely thing,” said Wolf. “But in almost all big cases of discovery or art, it's not really the most likely work of art you want to see, it's the most interesting one.”

Citing the example of Go, the board game that became a milestone in AI history when DeepMind's AlphaGo beat the world champion in 2016, Wolf argued that while mastering the rules of Go is impressive, the far bigger challenge lies in inventing such a complex game in the first place. In science, he said, the equivalent of inventing the game is asking those truly original questions.

Wolf first floated this idea in a blog post entitled The Einstein AI Model, published earlier this year. In it, he wrote: “To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask.”

He maintains that what we have instead are models that behave like “yes-men on servers”: perfectly agreeable, but unlikely to challenge assumptions or rethink fundamental ideas.
