Invisible, autonomous and hackable: The AI agent dilemma no one saw coming

MT HANNACH

This article is part of VentureBeat’s special issue, “The Cyber Resilience Playbook: Navigating the new threat era.” Read more from this special issue here.

Generative AI poses interesting security questions, and as enterprises move into the agentic world, those safety issues multiply.

When AI agents enter workflows, they must be able to access sensitive data and documents to do their jobs, making them a significant risk for many security-minded enterprises.

“The growing use of multi-agent systems will introduce new attack vectors and vulnerabilities that could be exploited if they aren’t secured properly from the start,” said Nicole Carignan, VP of strategic cyber AI at Darktrace. “But the impacts and harms of those vulnerabilities could be even greater because of the increasing volume of connection points and interfaces that multi-agent systems have.”

Why AI agents present such a high security risk

AI agents, or autonomous AI that performs actions on behalf of users, have become extremely popular in recent months. Ideally, they can be plugged into tedious workflows and can handle any task, from something as simple as finding information in internal documents to making recommendations for human employees to act on.

But they present an interesting problem for enterprise security professionals: They must access the data that makes them effective, without accidentally opening up or sending private information to others. With agents performing more of the tasks human employees once did, questions of accuracy and accountability come into play, potentially becoming a headache for security and compliance teams.

Chris Betz, CISO of AWS, told VentureBeat that retrieval-augmented generation (RAG) and agentic use cases “are a fascinating and interesting angle” in security.

“Organizations will need to think about what default sharing looks like in their organization, because an agent will find through search anything that will support its mission,” said Betz. “And if you overshare documents, you need to think about the default sharing policy in your organization.”
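
To make Betz’s point concrete, here is a minimal sketch of permission-aware retrieval, in which an organization’s sharing policy is enforced at the moment an agent searches. Every name here (Document, permission_filtered_retrieve, the agent principals) is hypothetical, and a trivial keyword match stands in for a real vector search; this is not AWS’s or any RAG library’s API.

```python
# Hypothetical sketch: filter RAG retrieval results against an access
# control list before they ever reach the agent.
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_principals: set[str] = field(default_factory=set)


def permission_filtered_retrieve(query: str,
                                 agent_principal: str,
                                 index: list[Document],
                                 top_k: int = 5) -> list[Document]:
    """Return only the documents this agent's principal may read.

    A real system would run a vector search first; a keyword match
    stands in here so the sketch stays self-contained.
    """
    hits = [d for d in index if query.lower() in d.text.lower()]
    # The crucial step: enforce the sharing policy at retrieval time,
    # so an agent cannot "find through search" documents that were
    # overshared by default.
    readable = [d for d in hits if agent_principal in d.allowed_principals]
    return readable[:top_k]


if __name__ == "__main__":
    index = [
        Document("hr-001", "Quarterly salary bands", {"hr-agent"}),
        Document("kb-042", "Quarterly product roadmap", {"hr-agent", "sales-agent"}),
    ]
    print([d.doc_id for d in permission_filtered_retrieve(
        "quarterly", "sales-agent", index)])  # -> ['kb-042']
```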

Security professionals must then ask whether agents should be treated as digital employees or as software. How much access should agents have? How should they be identified?

Vulnerabilities of AI agents

Gen AI has made many companies more aware of potential vulnerabilities, but agents could open them up to even more problems.

“Attacks we see today impacting single-agent systems, such as data poisoning, prompt injection or social engineering to influence agent behavior, could all be vulnerabilities within a multi-agent system,” said Carignan.

Companies must pay attention to what agents can access to ensure that data security remains strong.

Betz pointed out that many of the security issues surrounding human employee access can extend to agents. Therefore, it “comes down to making sure that people have access to the right things and only the right things.” He added that when it comes to agentic workflows with multiple steps, “each one of those stages is an opportunity” for hackers.

Give agents an identity

One answer could be to issue agents their own specific access identities.

A world where models reason over problems for days at a time is “a world in which we need to be thinking more about recording the identity of the agent, as well as the identity of the human responsible for that agent request, everywhere in our organization,” said Jason Clinton, CISO of model provider Anthropic.

Identifying human employees is something companies have been doing for a very long time. Employees have specific jobs; they have email addresses they use to sign into accounts and that IT administrators can track; they have physical laptops with accounts that can be locked. They get individual permission to access certain data.

A variation on this kind of employee access and identification could be deployed to agents.
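
As an illustration of what such agent identification might look like, the sketch below pairs an agent’s own identity with the human principal responsible for it and mints a short-lived, scoped credential; all of the names (AgentIdentity, issue_scoped_token) are made up for this example and are not drawn from Anthropic or any identity product.

```python
# Illustrative sketch: record both the agent's identity and the
# accountable human behind every agent request, as Clinton describes.
import secrets
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str           # the agent's own identity, like an employee ID
    human_principal: str    # the accountable human behind the request
    scopes: frozenset[str]  # data the agent is individually authorized to use


def issue_scoped_token(identity: AgentIdentity, ttl_seconds: int = 900) -> dict:
    """Mint a short-lived credential that carries both identities.

    Short TTLs mimic the "lockable account" property of an employee
    laptop: access can be revoked quickly if the agent misbehaves.
    """
    return {
        "token": secrets.token_urlsafe(32),
        "sub": identity.agent_id,
        "on_behalf_of": identity.human_principal,
        "scopes": sorted(identity.scopes),
        "expires_at": time.time() + ttl_seconds,
    }


agent = AgentIdentity("invoice-agent-7", "jane.doe@example.com",
                      frozenset({"invoices:read"}))
print(issue_scoped_token(agent)["on_behalf_of"])  # jane.doe@example.com
```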

Betz and Clinton believe this process can prompt enterprise leaders to rethink how they provide information access to users. It could even lead organizations to overhaul their workflows.

“Using an agentic workflow actually offers you the opportunity to bind the use cases for each step along the way to the data it needs as part of the RAG, but only the data it needs,” said Betz.

He added that agentic workflows “can help address some of those concerns around oversharing,” because companies must consider what data is being accessed to complete actions. Clinton added that in a workflow designed around a specific set of operations, “there’s no reason why step one needs to have access to the same data that step seven needs.”
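
A minimal sketch of that per-step scoping follows, under the assumption of made-up step names and scopes: each step declares only the data it needs, and access is revoked before the next step runs, so step one never inherits step seven’s permissions.

```python
# Hypothetical least-privilege workflow: scopes are granted per step
# and dropped immediately afterwards, rather than giving the whole
# workflow the union of every step's access.
from dataclasses import dataclass
from typing import Callable


@dataclass
class WorkflowStep:
    name: str
    required_scopes: set[str]
    run: Callable[[], str]


def execute(steps: list[WorkflowStep],
            grant: Callable[[set[str]], None]) -> None:
    for step in steps:
        grant(step.required_scopes)   # grant only this step's scopes
        try:
            print(f"{step.name}: {step.run()}")
        finally:
            grant(set())              # revoke before the next step


steps = [
    WorkflowStep("fetch-invoice", {"invoices:read"}, lambda: "loaded invoice"),
    WorkflowStep("notify-approver", {"email:send"}, lambda: "sent email"),
]
execute(steps, grant=lambda scopes: print(f"  scopes now: {scopes or '{}'}"))
```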

Old-fashioned auditing isn’t enough

Companies can also look for agentic platforms that let them peek inside how their agents operate. For example, Don Schuerman, CTO of workflow automation provider Pega, said his company helps provide agentic security by telling the user what the agent is doing.

“Our platform is already being used to audit the work humans are doing, so we can also audit every step an agent is doing,” Schuerman told VentureBeat.

Pega’s new product, Agent X, allows human users to toggle to a screen outlining the steps an agent is undertaking. Users can see where along the workflow timeline the agent is and get a readout of its specific actions.
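
This pattern is straightforward to sketch. The following is a generic, hypothetical example of step-level audit logging, not Pega’s actual API: each agent action is appended to a trail that a human reviewer can read back mid-workflow.

```python
# Hedged sketch of step-level agent auditing in the spirit of what
# Schuerman describes; class and method names are illustrative only.
import json
import time


class AgentAuditLog:
    """Append-only record of every step an agent takes."""

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, agent_id: str, step: str, detail: str) -> None:
        self._entries.append({
            "ts": time.time(),
            "agent_id": agent_id,
            "step": step,
            "detail": detail,
        })

    def readout(self) -> str:
        # The same trail a human reviewer would toggle to mid-workflow.
        return "\n".join(json.dumps(e) for e in self._entries)


log = AgentAuditLog()
log.record("claims-agent-3", "classify", "routed claim to auto queue")
log.record("claims-agent-3", "draft-reply", "generated customer email")
print(log.readout())
```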

Audits, timelines and identification are not perfect solutions to the security issues AI agents present. But as enterprises explore agents’ potential and begin to deploy them, more targeted answers may emerge as AI experimentation continues.
