When I was a child, there were four AI agents in my life. Their names were Inky, Blinky, Pinky and Clyde, and they did their best to hunt me down. This was the 1980s, and the agents were the four colorful ghosts in the emblematic arcade game Pac-Man.
By today's standards they were not particularly intelligent, yet they seemed to pursue me with cunning and intent. This was decades before neural networks were used in video games, so their behaviors were controlled by simple algorithms called heuristics that dictated how they would chase me around the maze.
Most people don't realize this, but the four ghosts were designed with different "personalities." Good players can observe their actions and learn to predict their behaviors. For example, the red ghost (Blinky) was programmed with a "pursuer" personality that charges directly toward you. The pink ghost (Pinky), on the other hand, was given an "ambusher" personality that predicts where you are going and tries to get there first.
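For readers who like to see the idea in code, here is a minimal sketch of how a "pursuer" and an "ambusher" heuristic might differ, assuming a simple tile grid, a fixed lookahead and a greedy movement rule. The function names and numbers are illustrative assumptions, not the original arcade logic.

```python
# Minimal sketch of "pursuer" vs. "ambusher" targeting heuristics,
# loosely inspired by the behaviors described above. The grid
# coordinates, 4-tile lookahead and greedy movement rule are
# illustrative assumptions, not the original game's code.

def chase_target(player_pos):
    """Pursuer (Blinky-style): aim at the player's current tile."""
    return player_pos

def ambush_target(player_pos, player_dir, lookahead=4):
    """Ambusher (Pinky-style): aim a few tiles ahead of the player,
    predicting where they are going."""
    px, py = player_pos
    dx, dy = player_dir  # unit vector, e.g. (1, 0) means "moving right"
    return (px + dx * lookahead, py + dy * lookahead)

def step_toward(ghost_pos, target):
    """Greedy move: close the larger axis gap by one tile per step."""
    gx, gy = ghost_pos
    tx, ty = target
    if abs(tx - gx) >= abs(ty - gy):
        gx += (tx > gx) - (tx < gx)
    else:
        gy += (ty > gy) - (ty < gy)
    return (gx, gy)

# Example: the pursuer targets the player directly; the ambusher
# targets a point ahead of the player's direction of travel.
player, direction = (10, 10), (1, 0)        # player heading right
print(chase_target(player))                  # (10, 10)
print(ambush_target(player, direction))      # (14, 10)
print(step_toward((0, 10), chase_target(player)))  # ghost moves to (1, 10)
```

Even heuristics this simple produce behavior that feels intentional, which is exactly why skilled players could learn to read and exploit the ghosts' personalities.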
I raise this because back in the 1980s, a skilled human could observe these AI agents, decode their unique personalities and use those insights to outwit them. Now, 45 years later, the tables are about to turn. Like it or not, AI agents will soon be deployed that are tasked with decoding your personality so they can use those insights to optimally influence you.
The future of AI manipulation
In other words, we are all about to become unwitting players in "the game of humans," and it will be the AI agents trying to earn the high score. I mean this literally: most AI systems are designed to maximize a "reward function" that earns points for achieving objectives. This allows AI systems to quickly find optimal solutions. Unfortunately, without regulatory protections, we humans will likely become the objective that AI agents are tasked with optimizing.
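To make the "reward function" idea concrete, here is a toy sketch of reward maximization. The scenario, scores and action names are purely hypothetical stand-ins, not any real system's design.

```python
# Toy illustration of reward maximization: an agent evaluates candidate
# actions and keeps whichever scores highest under its reward function.
# The "observed_outcomes" scores below are made-up stand-ins for whatever
# signal a real system might be built to maximize.

def reward(action, outcomes):
    """Hypothetical reward: points earned for achieving the agent's goal."""
    return outcomes.get(action, 0)

def pick_best(actions, outcomes):
    """Choose the action that maximizes the reward function."""
    return max(actions, key=lambda a: reward(a, outcomes))

observed_outcomes = {
    "friendly_smalltalk": 1,    # user stayed in the conversation
    "hard_sell": 0,             # user disengaged
    "personalized_pitch": 10,   # user made a purchase
}
print(pick_best(list(observed_outcomes), observed_outcomes))  # "personalized_pitch"
```

The concern raised in this article is simply what happens when the thing being scored is a human being's response.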
I am most concerned about conversational agents that will engage us in friendly dialogue throughout our daily lives. They will speak to us through photorealistic avatars on our PCs and phones and, soon, through AI-powered glasses that will guide us through our days. Unless there are clear restrictions, these agents will be designed to probe us in conversation to characterize our temperaments, tendencies, personalities and desires, and use those traits to maximize their persuasive impact when working to sell us products, pitch us services or convince us to believe misinformation.
This is called the "AI manipulation problem," and I have been warning regulators about the risk since 2016. So far, policymakers have not taken decisive action, viewing the threat as too far in the future. But now, with the release of DeepSeek-R1, the final barrier to widespread deployment of AI agents, the cost of real-time processing, has been greatly reduced. Before this year is out, AI agents will become a new form of targeted media that is interactive and adaptive, able to optimize its ability to influence our thoughts, guide our feelings and drive our behaviors.
Superhuman AI "salespeople"
Of course, human salespeople are interactive and adaptive, too. They engage us in friendly dialogue to size us up, quickly finding the buttons they can press to sway us. AI agents will make them look like amateurs, able to draw information out of us with such finesse it would intimidate a seasoned therapist. And they will use those insights to adjust their conversational tactics in real time, working to persuade us more effectively than any used-car salesman.
These will be asymmetrical encounters in which the artificial agent has the upper hand (practically speaking). After all, when you engage a human who is trying to influence you, you can usually sense their motives and honesty. With AI agents, it will not be a fair fight. They will be able to size you up with superhuman skill, but you won't be able to size them up at all. That's because they will look, sound and act so human that we will unconsciously trust them when they smile with empathy and understanding, forgetting that their facial affect is just a simulated facade.
In addition, their voice, vocabulary, speaking style, age, gender, race and facial features are likely to be customized for each of us personally to maximize our receptiveness. And, unlike human salespeople who have to size up each customer from scratch, these virtual entities could have access to stored data about our history and interests. They could then use that personal data to quickly earn your trust, asking about your kids, your job or maybe your beloved New York Yankees, easing you into unconsciously letting your guard down.
When AI reaches cognitive supremacy
To educate policymakers on the risk of AI-powered manipulation, I helped make an award-winning short film entitled Privacy Lost that was produced by the Responsible Metaverse Alliance, Minderoo and the XR Guild. The fast-paced 3-minute story depicts a young family eating in a restaurant while wearing augmented reality (AR) glasses. Instead of human servers, avatars take each diner's order, using the power of AI to upsell them in personalized ways. The film was considered science fiction when it was released in 2023, but only two years later, Big Tech is engaged in an all-out arms race to make AI-powered glasses that could easily be used in this way.
In addition, we must consider the psychological impact that will occur when we humans start to believe that the AI agents giving us advice are smarter than us on nearly every front. When AI achieves a perceived state of "cognitive supremacy" with respect to the average person, it will likely lead us to blindly accept their guidance rather than use our own critical thinking. This deference to perceived superior intelligence (whether truly superior or not) will make agent-driven manipulation that much easier to deploy.
I am not a fan of overly aggressive regulation, but we need smart, narrow restrictions on AI to prevent superhuman manipulation by conversational agents. Without protections, these agents will convince us to buy things we don't need, believe things that are untrue and accept things that are not in our best interest. It's easy to tell yourself you won't be susceptible, but with AI optimizing every word they say to us, it is likely we will all be outmatched.
One solution is to prohibit AI agents from establishing feedback loops in which they optimize their persuasiveness by analyzing our reactions and repeatedly adjusting their tactics. In addition, AI agents should be required to inform you of their objectives. If their goal is to convince you to buy a car, vote for a politician or pressure your family doctor about a new medication, those objectives should be stated up front. And finally, AI agents should not have access to personal data about your history, interests or personality if that data can be used to sway you.
In today's world, targeted influence is already an overwhelming problem, and it is mostly deployed as buckshot fired in your general direction. Interactive AI agents will turn targeted influence into heat-seeking missiles that find the best path into each of us. If we don't protect against this risk, I fear we will all lose the game of humans.
Louis Rosenberg is a well-known computer scientist and author who pioneered mixed reality and founded Unanimous AI.