2025 will be the year that big tech moves from selling us more and more powerful tools to selling us more and more powerful abilities. The difference between a tool and an ability is subtle but profound. We use tools as external artifacts that help us overcome our organic limitations. From cars and planes to phones and computers, tools dramatically expand what we can accomplish as individuals, in large teams, and as vast civilizations.
Abilities are different. We experience abilities in the first person, as self-embodied capabilities that feel internal and instantly accessible to our conscious minds. For example, language and mathematics are human-made technologies that we load into our brains and carry with us throughout our lives, expanding our abilities to think, create, and collaborate. They are superpowers that feel so inherent to our existence that we rarely think of them as technologies at all. Fortunately, we don’t need to purchase a service plan to use them.
The next wave of superpowers, however, won’t come for free. But just like our abilities to think verbally and numerically, we will experience these powers as embodied abilities that we carry with us throughout our lives. I call this new technological discipline augmented mentality, and it will emerge from the convergence of AI, conversational computing and augmented reality. In 2025, it will trigger an arms race among the world’s biggest companies to sell us superhuman abilities.
These new superpowers will be unleashed by context-aware AI agents loaded into body-worn devices (such as AI glasses) that travel with us throughout our lives, seeing what we see, hearing what we hear, experiencing what we experience, and providing us with enhanced abilities to perceive and interpret our world. In fact, I predict that by 2030 a majority of us will live our lives with the help of context-aware AI agents that bring digital superpowers into our normal daily experiences.
How will our superhuman future unfold?
First, we will whisper to these intelligent agents, and they will whisper back, acting as an omniscient alter ego that provides context-aware recommendations, knowledge, guidance, advice, spatial reminders, directional cues, haptic nudges, and other verbal and perceptual content that will guide us through our days and educate us about our world.
Consider this simple scenario: You’re walking downtown and spot a store across the street. You wonder what time it opens, so you pull out your phone and type (or say) the name of the store. You quickly find the opening hours on a website, and perhaps you also check out other information about the store. That is the basic tool-use computing model that prevails today.
Now let’s look at how big tech will transition to an ability computing model.
Step 1: You’re wearing AI-powered glasses that can see what you see, hear what you hear, and process your surroundings through a multimodal large language model (LLM). Now, when you spot that store across the street, you simply whisper to yourself, “I wonder when it opens,” and a voice will instantly ring in your ears: “10:30.”
I know this is a subtle shift from asking your phone to look up the name of a store, but it will feel profound. The reason is that the contextual AI agent will share your reality. It’s not just tracking your location like GPS; it is seeing, hearing and paying attention to what you are paying attention to. It will feel far less like a tool and far more like an internal ability tied to your first-person reality.
And when the AI-powered alter ego asks us a question in our ears, we will often respond with a simple nod to affirm (detected by motion sensors in the glasses) or a shake of the head to reject. It will feel so natural and seamless that we may not even consciously realize we have responded.
Step 2: By 2030, we will no longer need to whisper to the AI agents that accompany us through our lives. Instead, we will be able to simply mouth the words, and the AI will know what we are saying by reading our lips and detecting the activation signals from our muscles. I am confident that mouthing will be deployed because it is more private, more resilient in noisy spaces and, most importantly, it will feel more personal, internal and embodied.
Step 3: By 2035, you may not even need to mouth the words. That’s because the AI will learn to interpret the signals in our muscles with such subtlety and precision that we will simply need to think about mouthing words to convey our intent. We will be able to focus our attention on any object or activity in our world, think about what we want to know, and useful information will come back from our AI glasses like an omniscient voice in our heads.
Of course, these abilities will go far beyond simply wondering about things around you. Indeed, the onboard AI that shares your first-person reality will learn to anticipate the information you want before you even ask for it. For example, when a coworker approaches you in the hallway and you can’t remember their name, the AI will sense your hesitation and a voice will chime in: “Gregg from engineering.”
Or when you pick up a can of soup in a store and wonder about the carbs, or whether it’s cheaper at Walmart, the answers will simply ring in your ears or appear visually. It will even give you superhuman abilities to read the emotions on other people’s faces, predict their moods, goals or intentions, and coach you during real-time conversations to make you seem more compelling, appealing or persuasive (see this fun video example).
I know some people will be skeptical about the level of adoption I predict above and the rapid time frame, but I don’t make these claims lightly. I have spent much of my career working on technologies that augment and expand human abilities, and I can say without question that the mobile computing market is about to run in this direction in a very big way.
Over the past 12 months, two of the world’s most influential and innovative companies, Meta and Google, revealed their intentions to give us embodied superpowers. Meta made the first big move by adding contextual AI to its Ray-Ban glasses and by unveiling its Orion mixed-reality prototype, which adds impressive visual capabilities. Meta is now very well positioned to leverage its big investments in AI and extended reality (XR) and become a major player in the mobile computing market, and it will likely do so by selling us superpowers we can’t resist.
Not to be outdone, Google recently announced Android XR, a new AI-powered operating system for augmenting our world with seamless contextual content. It also announced a partnership with Samsung to bring new glasses and headsets to market. With more than 70% market share in mobile operating systems and an increasingly strong AI presence with Gemini, I believe Google is well positioned to become the leading provider of technology-enabled human superpowers over the next few years.
Of course, we must consider the risks
To quote the famous 1962 Spider-Man comic, “With great power comes great responsibility.” This wisdom is literally about superpowers. The difference is that the great responsibility will not fall on the consumers who buy these technologies, but on the companies that provide them and the regulators who oversee them.
After all, by wearing AI-powered augmented reality (AR) glasses, any of us could find ourselves in a new reality in which technologies controlled by third parties can selectively alter what we see and hear, while AI-powered voices whisper advice, information and guidance in our ears. While the intentions could be positive, even magical, the potential for abuse is just as profound.
To avoid dystopian outcomes, my primary recommendation to consumers and manufacturers is to adopt a subscription business model. If the arms race to sell superpowers is driven by which company can offer the most amazing new abilities for a reasonable monthly fee, we will all benefit. If, instead, the business model becomes a competition to monetize superpowers by delivering the most effective targeted influence into our eyes and ears throughout our daily lives, consumers could easily be manipulated with a precision and pervasiveness we have never faced before.
Ultimately, these superpowers won’t feel optional. After all, not having them could put us at a cognitive disadvantage. It is now up to industry and regulators to ensure that we roll out these new abilities in a way that is not intrusive, manipulative or dangerous. I believe this could be a magical new direction for computing, but it requires careful planning and oversight.
Louis Rosenberg founded Immersion Corp, Outland Research and Unanimous AI, and is the author of Our Next Reality.