Not so long ago, humans wrote nearly all application code. That is no longer the case: the use of AI tools to write code has expanded dramatically. Some experts, such as Anthropic CEO Dario Amodei, expect AI to write 90% of all code within the next six months.
In that context, what is the impact for enterprises? Code development practices have traditionally involved various levels of control, oversight and governance to ensure quality, compliance and security. With AI-developed code, do organizations have the same assurances? Perhaps more importantly, organizations must know which models generated their AI code.
Understanding where code comes from is not a new challenge for enterprises. That's where source code analysis (SCA) tools fit in. Historically, SCA tools have not provided insight into AI, but that is now changing. Several vendors, including Sonar, Endor Labs and Sonatype, now provide different types of insights that can help enterprises with AI-developed code.
"Every customer we talk to now is interested in how they should responsibly be using AI code generators," Tariq Shaukat, CEO of Sonar, told VentureBeat.
Financial firm suffering one outage a week due to AI-developed code
AI tools are not infallible. Many organizations learned that lesson early on, when content development tools delivered inaccurate results known as hallucinations.
The same basic lesson applies to AI-developed code. As organizations move from experimental mode into production mode, they are increasingly coming to the realization that the code is very buggy. Shaukat noted that AI-developed code can also lead to security and reliability issues. The impact is real, and it's not trivial, either.
"I had a CTO, for example, from a financial services company about six months ago tell me that they were suffering an outage a week because of AI-generated code," said Shaukat.
When he asked his customer whether they were doing code reviews, the answer was yes. That said, the developers didn't feel anywhere near as accountable for the code, and were not spending as much time and rigor on it as they had previously.
The reasons why code ends up being buggy, especially for large enterprises, can vary. One common issue, though, is that enterprises often have large code bases with complex architectures that an AI tool may not know about. In Shaukat's view, AI code generators generally don't deal well with the complexity of larger and more sophisticated code bases.
"Our largest customer analyzes over 2 billion lines of code," said Shaukat. "You start dealing with those code bases, and they're much more complex, they have a lot more tech debt and they have a lot of dependencies."
The challenges of AI-developed code
For Mitchell Johnson, chief product development officer at Sonatype, it is also very clear that AI-developed code is here to stay.
Software developers must follow what he calls the engineering Hippocratic oath: do no harm to the code base. That means rigorously reviewing, understanding and validating every line of AI-generated code before committing it, just as developers would do with manually written or open-source code.
"AI is a powerful tool, but it does not replace human judgment when it comes to security, governance and quality," Johnson told VentureBeat.
According to Johnson, the biggest risks of AI-generated code are:
- Security risks: AI is trained on massive open-source datasets, which include vulnerable or malicious code. If left unchecked, it can introduce security flaws into the software supply chain.
- Blind trust: Developers, especially less experienced ones, may assume that AI-generated code is correct and secure without proper validation, leading to unchecked vulnerabilities.
- Compliance and context gaps: AI lacks awareness of business logic, security policies and legal requirements, making compliance and performance trade-offs risky.
- Governance challenges: AI-generated code can sprawl without oversight. Organizations need automated guardrails to track, audit and secure AI-created code at scale (see the sketch below).
"Despite these risks, speed and security don't have to be a trade-off," said Johnson. "With the right tools, automation and data-driven governance, organizations can harness AI safely, accelerating innovation while ensuring security and compliance."
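Neither Johnson nor Sonatype prescribes a particular implementation, but a minimal version of the kind of automated guardrail he describes could be a CI gate that refuses to merge AI-assisted changes that lack a human sign-off. The commit trailers used here (`AI-Assisted`, `Reviewed-by`) are illustrative assumptions, not an established standard.

```python
# guardrail_check.py -- illustrative CI gate for AI-assisted commits.
# Assumes (hypothetically) that developers mark AI-assisted commits with an
# "AI-Assisted: true" trailer and that human review adds a "Reviewed-by:" line.
import subprocess
import sys


def commits_in_range(rev_range: str) -> list[str]:
    """Return the commit hashes in the given range (e.g. 'origin/main..HEAD')."""
    out = subprocess.run(
        ["git", "rev-list", rev_range],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()


def commit_message(sha: str) -> str:
    """Return the full commit message for a single commit."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%B", sha],
        capture_output=True, text=True, check=True,
    )
    return out.stdout


def main() -> int:
    rev_range = sys.argv[1] if len(sys.argv) > 1 else "origin/main..HEAD"
    unreviewed = []
    for sha in commits_in_range(rev_range):
        msg = commit_message(sha)
        if "AI-Assisted: true" in msg and "Reviewed-by:" not in msg:
            unreviewed.append(sha[:10])
    if unreviewed:
        print(f"Blocking merge: AI-assisted commits without human review: {unreviewed}")
        return 1
    print("All AI-assisted commits carry a human sign-off.")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```

A script like this only enforces accountability; it says nothing about code quality, which is why the review itself still matters.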
The models matter: Identifying the open-source models used for code development
There are a variety of models that organizations use to generate code. Anthropic's Claude 3.7, for example, is a particularly powerful option. Google Code Assist, OpenAI's o3 and GPT-4o models are also viable choices.
Then there is open source. Vendors such as Meta and Qodo offer open-source models, and there is a seemingly endless range of options available on Hugging Face. Karl Mattson, CISO at Endor Labs, warned that these models pose security challenges that many enterprises aren't prepared for.
"The systematic risk is the use of open-source LLMs," Mattson told VentureBeat. "Developers using open-source models are creating a whole new suite of problems. They're introducing code into their code base using unvetted or unevaluated, unproven models."
Unlike commercial offerings from companies like Anthropic or OpenAI, which Mattson describes as having "high-quality security and governance programs," open-source models from repositories like Hugging Face can vary dramatically in quality and security posture. Mattson emphasized that rather than trying to ban the use of open-source models for code generation, organizations should understand the potential risks and choose appropriately.
Endor Labs can help organizations detect when open-source AI models, particularly from Hugging Face, are being used in code repositories. The company's technology also evaluates these models across 10 attributes of risk, including operational security, ownership, utilization and update frequency, to establish a risk baseline.
Specialized detection technologies emerge
To deal with the emerging challenges, SCA vendors have released a number of different capabilities.
For example, Sonar has developed an AI Code Assurance capability that can identify code patterns unique to machine generation. The system can detect when code was likely AI-generated, even without direct integration with the coding assistant. Sonar then applies specialized scrutiny to those sections, looking for hallucinated dependencies and architectural issues that would not appear in human-written code.
Endor Labs and Sonatype take a different technical approach, focusing on model provenance. Sonatype's platform can be used to identify, track and govern AI models alongside their software components. Endor Labs can also identify when open-source AI models are being used in code repositories and assess the potential risk.
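Neither vendor publishes its detection internals, but the general idea of inventorying model provenance can be illustrated with a short script: scan a repository for Hugging Face-style `from_pretrained(...)` calls and list the model identifiers they reference. The regex and report format below are assumptions made for illustration, not how Endor Labs or Sonatype actually work.

```python
# model_inventory.py -- illustrative scan for Hugging Face model references.
# A deliberately simplified sketch; real SCA tooling uses far richer analysis.
import pathlib
import re

# Matches calls like AutoModel.from_pretrained("org/model-name"), one common
# (but not the only) way open-source models enter a Python code base.
PRETRAINED_CALL = re.compile(r"""from_pretrained\(\s*["']([\w.\-]+/[\w.\-]+)["']""")


def scan_repo(root: str) -> dict[str, list[str]]:
    """Map each referenced model ID to the files that reference it."""
    found: dict[str, list[str]] = {}
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        for model_id in PRETRAINED_CALL.findall(text):
            found.setdefault(model_id, []).append(str(path))
    return found


if __name__ == "__main__":
    for model_id, files in scan_repo(".").items():
        print(f"{model_id}: referenced in {len(files)} file(s)")
```

An inventory like this would then feed a risk review of the kind the article describes, scoring each model against attributes such as ownership and update frequency.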
When implementing AI-generated code in enterprise environments, organizations need structured approaches to mitigate risks while maximizing the benefits.
There are several key best practices that enterprises should consider, including:
- Implement rigorous verification processes: Shaukat recommends that organizations have a rigorous process for understanding where code generators are being used in specific parts of the code base. This is necessary to ensure the right level of accountability and scrutiny of generated code.
- Recognize AI's limitations with complex code bases: While AI-generated code can easily handle simple scripts, it can sometimes be somewhat limited when it comes to complex code bases with many dependencies.
- Understand the unique issues in AI-generated code: Shaukat noted that while AI avoids common syntax errors, it tends to create more serious architectural problems through hallucinations. Code hallucinations can include making up a variable name or a library that doesn't actually exist (a basic check for this is sketched after this list).
- Require developer accountability: Johnson emphasizes that AI-generated code is not inherently secure. Developers must review, understand and validate every line before committing it.
- Streamline AI approval: Johnson also warns of the risk of shadow AI, or uncontrolled use of AI tools. Many organizations either ban AI outright (which employees ignore) or create approval processes so complex that employees bypass them. Instead, he suggests that businesses create a clear, efficient framework to evaluate and greenlight AI tools, ensuring safe adoption without unnecessary roadblocks.
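As a minimal illustration of the hallucinated-dependency check mentioned above: parse a source file's import statements and flag any module that is neither part of the standard library nor installed in the current environment. This is a simplified sketch, not Sonar's or any other vendor's actual analysis; production tools go much further (cross-checking package registries, typosquat detection and so on).

```python
# import_check.py -- illustrative check for hallucinated Python dependencies.
# An imported module that is neither in the standard library nor installed
# locally is a candidate hallucination and deserves review before shipping.
import ast
import importlib.util
import sys


def imported_modules(source: str) -> set[str]:
    """Collect top-level module names from import statements in the source."""
    tree = ast.parse(source)
    modules: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            modules.add(node.module.split(".")[0])
    return modules


def suspicious_imports(path: str) -> list[str]:
    """Return imported modules that can't be resolved in this environment."""
    with open(path, encoding="utf-8") as handle:
        modules = imported_modules(handle.read())
    stdlib = sys.stdlib_module_names  # available in Python 3.10+
    return sorted(
        name for name in modules
        if name not in stdlib and importlib.util.find_spec(name) is None
    )


if __name__ == "__main__":
    for target in sys.argv[1:]:
        flagged = suspicious_imports(target)
        if flagged:
            print(f"{target}: unresolved imports (possible hallucinations): {flagged}")
```

Run against a file with a made-up import (for example, `import fastparserlib`), a check like this flags it immediately, which is exactly the class of error human reviewers tend to miss.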
What it means for businesses
The risk of shadow AI code development is real.
The volume of code that organizations can produce with AI assistance is increasing dramatically and could soon comprise the majority of all code.
The stakes are particularly high for complex enterprise applications, where a single hallucinated dependency can cause catastrophic failures. For organizations looking to adopt AI coding tools while maintaining reliability, implementing specialized code analysis tools is shifting from optional to essential.
"If you're allowing AI-generated code into production without specialized detection and validation, you're essentially flying blind," Mattson warned. "The types of failures we're seeing aren't just bugs, they're architectural failures that can bring down entire systems."