DeepSeek: China’s open source AI fuels national security paradox

DeepSeek and its R1 model aren't wasting any time rewriting the rules of cybersecurity in real time, with everyone from startups to enterprise software providers piloting integrations of the new model this month.

R1 was developed in China and is based on pure reinforcement learning (RL) without supervised fine-tuning. It is also open source, which makes it immediately attractive to nearly every cybersecurity startup that is all-in on open-source architecture, development and deployment.

DeepSeek's $6.5 million investment in the model delivers performance that matches OpenAI's o1-1217 on reasoning benchmarks while running on lower-tier GPUs. DeepSeek's pricing sets a new standard, with significantly lower costs per million tokens than OpenAI's models: the deepseek-reasoner model charges $2.19 per million output tokens, while OpenAI's o1 charges $60 for the same. That price difference, together with the model's open-source architecture, has caught the attention of CIOs, CISOs, cybersecurity startups and enterprise software providers.
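For a sense of scale, here is a quick back-of-the-envelope comparison based on the two output-token prices cited above. The workload figure is hypothetical, and real bills also include input-token charges, so treat this as illustrative only.

```python
# Back-of-the-envelope comparison of the output-token prices cited above.
# Illustrative only: real bills also include input tokens, and both
# vendors' price lists change over time.
PRICE_PER_M_OUTPUT_USD = {
    "deepseek-reasoner": 2.19,
    "openai-o1": 60.00,
}

def output_cost_usd(model: str, output_tokens: int) -> float:
    """Cost in USD for generating `output_tokens` tokens with `model`."""
    return PRICE_PER_M_OUTPUT_USD[model] * output_tokens / 1_000_000

# Hypothetical workload: 500 million output tokens per month.
monthly_tokens = 500_000_000
for model in PRICE_PER_M_OUTPUT_USD:
    print(f"{model}: ${output_cost_usd(model, monthly_tokens):,.2f}/month")
# deepseek-reasoner: $1,095.00/month
# openai-o1: $30,000.00/month -> roughly 27x the cost
```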

(Interestingly, OpenAI claims DeepSeek used its models to train R1 and other models, going so far as to say the company exfiltrated data through repeated queries.)

An AI breakthrough with hidden risks that will continue to emerge

At the heart of the question of the models' security and trustworthiness is whether censorship and covert bias are baked into the model's core, warned Chris Krebs, inaugural director of the U.S. Department of Homeland Security's (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and, most recently, chief public policy officer at SentinelOne.

“Censorship of content critical of the Chinese Communist Party (CCP) may be ‘baked in’ to the model, and is therefore a design feature to contend with that can skew objective results,” he said. “This ‘political lobotomization’ of Chinese AI models may support … the development and global proliferation of U.S.-based open-source AI models.”

He stressed that, as the argument goes, democratizing access to U.S. products should increase American soft power abroad and undercut the spread of Chinese censorship worldwide. “R1’s low cost and simple compute fundamentals call into question the efficacy of the U.S. strategy of depriving Chinese companies of access to Western technology, including GPUs,” he said. “In a way, they’re really doing ‘more with less.’”

Merritt Baer, CISO at Reco and advisor to several security startups, told VentureBeat that “in fact, training [DeepSeek-R1] on broader internet data controlled by internet sources in the West (or perhaps better described as lacking Chinese controls and firewalls) could be an antidote to some of the concerns. I’m less worried about the obvious things, like censoring criticism of President Xi, and more concerned about the harder-to-define political and social engineering that went into the model. Even the fact that the model’s creators are part of a system of Chinese influence campaigns is a troubling factor, but not the only factor we should consider when selecting a model.”

With DeepSeek training the model on NVIDIA H800 GPUs, which are approved for sale in China but lack the power of the more advanced H100 and A100 processors, DeepSeek further democratizes its model to any organization that can afford the hardware to run it. Estimates and bills of materials explaining how to build a $6,000 system capable of running R1 are proliferating across social media.
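As a rough illustration of what those budget builds actually run, below is a minimal local-inference sketch. It assumes the Hugging Face transformers library (plus accelerate) and the deepseek-ai/DeepSeek-R1-Distill-Qwen-7B open-weight checkpoint; the full 671B-parameter R1 needs far more hardware than a $6,000 rig, so the distilled variants are what budget systems typically target.

```python
# Minimal sketch of local inference with an open-weight distilled R1 variant.
# Assumes Hugging Face transformers + accelerate and a single capable GPU;
# the full R1 model requires far more hardware than a $6,000 build.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory versus float32
    device_map="auto",           # place layers on whatever GPUs are available
)

prompt = "Summarize the security considerations of self-hosting an LLM."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```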

R1 and follow-on models will be built to circumvent U.S. technology sanctions, a point Krebs sees as a direct challenge to U.S. AI strategy.

Enkrypt AI’s red team report on DeepSeek-R1 notes that the model is vulnerable to generating “harmful, toxic, biased, CBRN and insecure code” output. The red team continues: “While it may be suitable for narrowly scoped applications, the model shows considerable vulnerabilities in operational and security risk areas, as detailed in our methodology. We strongly recommend implementing mitigations if this model is to be used.”

Enkrypt AI’s red team also found that DeepSeek-R1 is three times more biased than Claude 3 Opus, four times more vulnerable to generating insecure code than OpenAI’s o1, and four times more toxic than GPT-4o. The red team also found the model is eleven times more likely to create harmful output than OpenAI’s o1.

Know the privacy and security risks before sharing your data

DeepSeek’s mobile apps now dominate global downloads, and the web version is seeing record traffic, with all the personal data shared on both platforms captured on servers in China. Enterprises are considering running the model on isolated servers to reduce the threat. VentureBeat has learned of pilots running on commoditized hardware across organizations in the U.S.

Any data shared through the mobile and web apps is accessible to Chinese intelligence agencies.

China’s National Intelligence Law stipulates that companies must “support, assist and cooperate” with state intelligence agencies. The practice is so pervasive, and such a threat to U.S. companies and citizens, that the Department of Homeland Security has published a Data Security Business Advisory. Because of these risks, the U.S. Navy issued a directive banning DeepSeek-R1 from any work-related systems, tasks or projects.

Organizations quick to trial the new model are going all-in on open source and testing on systems isolated from their internal networks and the internet. The goal is to run benchmarks for specific use cases while ensuring all data remains private. Platforms such as Perplexity and Hyperbolic Labs allow enterprises to deploy R1 securely in U.S.- or European-based data centers, keeping sensitive information out of reach of Chinese regulations. An excellent summary of this aspect of the model is also available.
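As a concrete illustration of that isolation discipline, here is a minimal pre-flight sketch a benchmarking team might run before loading the model. It only demonstrates the idea: real air-gapping is enforced at the network layer (VLANs, firewall egress rules) rather than in application code, and 8.8.8.8:53 is simply a well-known public endpoint used as a probe target.

```python
# Pre-flight check for an isolated benchmark host: refuse to run if the
# machine can reach the public internet. A sketch only; real air-gapping
# is enforced at the network layer, not in application code.
import socket

def has_internet_egress(host: str = "8.8.8.8", port: int = 53,
                        timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if has_internet_egress():
    raise SystemExit("Host has internet egress; refusing to start benchmarks.")
print("No egress detected; safe to load the model and run local benchmarks.")
```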

Itamar Golan, CEO of startup Prompt Security and a core member of OWASP’s Top 10 for large language models (LLMs), argues that data privacy risks extend well beyond DeepSeek. “Organizations should not have their sensitive data fed into OpenAI or other U.S.-based model providers either,” he noted. “If the flow of data to China is a significant national security concern, the U.S. government may want to intervene through strategic initiatives such as subsidizing domestic AI providers to maintain competitive pricing and market balance.”

Recognizing R1’s security flaws, Prompt Security added support for inspecting traffic generated by DeepSeek-R1 queries within days of the model’s introduction.

While probing DeepSeek’s public infrastructure, the research team at cloud security provider Wiz discovered a ClickHouse database open to the internet with more than a million lines of logs containing chat histories, secret keys and backend details. No authentication was enabled on the database, leaving it open to rapid potential privilege escalation.
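To see why that matters: ClickHouse’s HTTP interface (port 8123 by default) will execute SQL for any caller when authentication is disabled. The sketch below illustrates the exposure; db.example.com is a placeholder, and you should only probe systems you are authorized to test.

```python
# Sketch of why an unauthenticated ClickHouse instance is dangerous: its
# HTTP interface (default port 8123) executes SQL for any caller when no
# credentials are configured. "db.example.com" is a placeholder; probe
# only systems you are authorized to test.
import urllib.parse
import urllib.request

host = "http://db.example.com:8123"  # hypothetical exposed instance
query = "SHOW TABLES"                # with open access, SELECT on log tables works too

url = f"{host}/?query={urllib.parse.quote(query)}"
with urllib.request.urlopen(url, timeout=5) as resp:
    print(resp.read().decode())      # table list returned with zero authentication
```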

Wiz’s finding highlights the danger of rapidly adopting AI services that aren’t built on hardened, at-scale security frameworks. Wiz responsibly disclosed the breach, prompting DeepSeek to lock down the database immediately. DeepSeek’s initial oversight underscores three core lessons for any AI provider to keep in mind when introducing a new model.

First, red team and thoroughly test the security of your AI infrastructure before ever launching a model. Second, enforce least-privileged access and adopt a zero-trust mindset: assume your infrastructure has already been breached and trust no multi-domain connections across systems or cloud platforms. Third, have security teams and AI engineers collaborate and own how the models safeguard sensitive data.
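Applying the second lesson to the kind of ClickHouse deployment Wiz found exposed might look like the sketch below, using ClickHouse’s standard RBAC statements. The user and database names are illustrative, and a production password would come from a secrets manager rather than source code.

```python
# Sketch of least-privileged access for a ClickHouse deployment like the
# one Wiz found exposed, using ClickHouse's standard RBAC SQL. Names are
# illustrative; the password placeholder would be filled from a secrets
# manager, never hardcoded.
LEAST_PRIVILEGE_DDL = [
    # Require credentials for every connection.
    "CREATE USER log_reader IDENTIFIED WITH sha256_password BY '<from-secrets-manager>'",
    # Read-only access to exactly one database, and nothing else.
    "GRANT SELECT ON app_logs.* TO log_reader",
]

for statement in LEAST_PRIVILEGE_DDL:
    print(statement)  # in practice, execute over an authenticated admin connection
```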

DeepSeek creates a security paradox

Krebs warned that the model’s real danger isn’t just where it was made but how it was made. DeepSeek-R1 is a byproduct of the Chinese tech industry, where private-sector goals and national intelligence objectives are inseparable. The notion of firewalling the model, or running it locally as a safeguard, is an illusion because, as Krebs explains, the bias and filtering mechanisms are already “baked in” at a foundational level.

Cybersecurity and national security leaders agree that DeepSeek-R1 is the first of many models with exceptional performance and low cost that we will see from China and other nation-states that enforce control over all data collected.

Bottom line: Where open source has long been viewed as a democratizing force in software, the paradox this model creates shows how easily a nation-state can weaponize open source if it chooses to.
