Enhancing AI agents with long-term memory: Insights into LangMem SDK, Memobase and the A-MEM Framework

MT HANNACH
7 Min Read



AI agents can automate many of the tasks that enterprises want performed. One drawback, however, is that they tend to be forgetful. Without long-term memory, agents must either finish a task in a single session or be constantly re-prompted.

So, as enterprises continue to explore use cases for AI agents and how to deploy them safely, the companies enabling agent development must consider how to make agents less forgetful. Long-term memory will make agents far more valuable in a workflow, able to remember instructions even for complex tasks that take several turns to complete.

Manvinder Singh, vice president of AI product management at Redis, told VentureBeat that memory makes agents more robust.

“Agentic memory is crucial for improving [agents’] efficiency and capabilities, because LLMs are inherently stateless: they don’t remember things like chats, responses or chat histories,” Singh said in an email. “Memory allows AI agents to recall past interactions, retain information and maintain context to provide more consistent, personalized responses and more impactful autonomy.”

Companies like LangChain have begun to offer options for extending agent memory. LangChain’s LangMem SDK helps developers build agents with tools “to extract information from conversations, optimize agent behavior through prompt updates and maintain long-term memory about behaviors, facts and events.”
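
To make that concrete, here is a minimal sketch modeled on LangMem’s published quickstart pattern: an agent is given tools to write and search its own memories, backed by a store. Exact function names, defaults and the model/embedding identifiers shown are assumptions that may differ across versions, and API keys for the underlying model providers are presumed to be configured.

```python
# pip install langmem langgraph
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore
from langmem import create_manage_memory_tool, create_search_memory_tool

# Storage backend for memories; the embedding model shown is an assumption.
store = InMemoryStore(
    index={"dims": 1536, "embed": "openai:text-embedding-3-small"}
)

# Agent that can write and search its own long-term memories via tools.
agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest",
    tools=[
        create_manage_memory_tool(namespace=("memories",)),
        create_search_memory_tool(namespace=("memories",)),
    ],
    store=store,
)

# First session: the agent stores a fact; later sessions can search for it.
agent.invoke(
    {"messages": [{"role": "user", "content": "Remember that I prefer weekly summary reports."}]}
)
```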

Other options include Memobase, an open-source tool launched in January that gives agents “user-centric memory” so applications can remember and adapt. CrewAI also has tools for long-term agent memory, while OpenAI’s Swarm requires users to bring their own memory model.

Mike Mason, chief AI officer at tech consultancy Thoughtworks, told VentureBeat in an email that better agent memory changes how enterprises use agents.

“Memory transforms AI agents from simple, reactive tools into dynamic, adaptive assistants,” Mason said. “Without it, agents must rely entirely on what is provided within a single session, limiting their ability to improve interactions over time.”

Better memory

Durable memory for agents can come in different flavors.

LangChain works with the two most common types of memory: semantic and procedural. Semantic memory refers to facts, while procedural memory refers to processes, or how to carry out tasks. The company said agents already have decent short-term memory and can respond within the current conversation thread. LangMem stores procedural memory as instructions that are updated in the prompt. Building on its work on prompt optimization, LangMem identifies interaction patterns and updates “the system prompt to reinforce effective behavior. This creates a feedback loop where the agent’s core instructions evolve based on observed performance.”
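
The feedback loop described above can be pictured with a generic, simplified sketch (not LangMem’s actual optimizer API): a model call folds lessons from each interaction back into the agent’s standing instructions, and the revised prompt becomes the procedural memory used next time. The `stub_llm` below is a placeholder for a real chat-model client.

```python
from typing import Callable

def refine_system_prompt(
    llm: Callable[[str], str],
    system_prompt: str,
    transcript: str,
    feedback: str,
) -> str:
    """Fold lessons from one interaction back into the agent's standing instructions."""
    request = (
        "You maintain an agent's standing instructions.\n\n"
        f"Current instructions:\n{system_prompt}\n\n"
        f"Recent conversation:\n{transcript}\n\n"
        f"Feedback on the outcome:\n{feedback}\n\n"
        "Return a revised, concise set of instructions that reinforces what "
        "worked and corrects what did not."
    )
    return llm(request)  # the revised prompt becomes the agent's procedural memory

# Usage with a stub model; in practice `llm` would call a real chat model.
stub_llm: Callable[[str], str] = lambda _prompt: (
    "You are a support agent. Confirm the user's plan tier before answering."
)
prompt = "You are a support agent. Answer briefly."
prompt = refine_system_prompt(
    stub_llm, prompt, "user: which plan am I on?", "Agent guessed the plan tier."
)
print(prompt)  # the updated instructions are loaded in the next session
```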

Researchers working on ways to extend the memory of AI models, and therefore of AI agents, have found that agents with long-term memory can learn from mistakes and improve. A paper from October 2024 introduced the concept of AI self-evolution through long-term memory, showing that models and agents improve the more they remember. Models and agents also begin to adapt to more individual needs as they retain personalized instructions for longer.

In another paper, researchers from Rutgers University, Ant Group and Salesforce introduced a new memory system called A-MEM, based on the Zettelkasten note-taking method. In this system, agents create knowledge networks that allow for more adaptive and comprehensive memory management.
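
As a rough illustration of the Zettelkasten idea (not the paper’s actual implementation), the sketch below links each new memory note to related existing notes so that a knowledge network emerges over time. Keyword overlap stands in for the embedding-based similarity a real system would use, and all class and parameter names are hypothetical.

```python
from dataclasses import dataclass, field
from itertools import count

_ids = count()

@dataclass
class Note:
    """A single memory note; links between notes form the knowledge network."""
    content: str
    note_id: int = field(default_factory=lambda: next(_ids))
    links: set[int] = field(default_factory=set)

class ZettelMemory:
    def __init__(self, min_overlap: int = 2):
        self.notes: dict[int, Note] = {}
        self.min_overlap = min_overlap  # keyword overlap stands in for embedding similarity

    def add(self, content: str) -> Note:
        note = Note(content)
        new_words = set(content.lower().split())
        # Link the new note to related existing notes, Zettelkasten-style.
        for other in self.notes.values():
            if len(new_words & set(other.content.lower().split())) >= self.min_overlap:
                note.links.add(other.note_id)
                other.links.add(note.note_id)
        self.notes[note.note_id] = note
        return note

    def neighbors(self, note: Note) -> list[Note]:
        """Follow links to retrieve related memories."""
        return [self.notes[i] for i in note.links]

memory = ZettelMemory()
a = memory.add("Customer u42 prefers email follow-ups for billing issues")
b = memory.add("Billing issues for u42 were resolved fastest via email")
print([n.content for n in memory.neighbors(b)])
```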

Singh of Redis said that agents with long-term memory function like hard drives, “holding lots of information that persists across multiple tasks or conversations, allowing agents to draw on feedback and adapt to user preferences.” When agents are embedded in workflows, this kind of adaptation and self-learning lets organizations keep the same set of agents working on a task long enough to finish it, without needing to recreate them.
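
To make the hard-drive analogy concrete, here is a minimal sketch of session-spanning memory persisted in Redis using the standard redis-py client. The key layout, agent ID and fields are illustrative assumptions, not a prescribed schema; a local Redis instance is assumed to be running.

```python
import json
import redis  # pip install redis

# Assumes a local Redis instance; in production this would be a managed deployment.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def remember(agent_id: str, fact: dict) -> None:
    """Append a memory that persists across sessions and tasks."""
    r.rpush(f"agent:{agent_id}:memories", json.dumps(fact))

def recall(agent_id: str, limit: int = 20) -> list[dict]:
    """Load the most recent memories when a new session starts."""
    raw = r.lrange(f"agent:{agent_id}:memories", -limit, -1)
    return [json.loads(item) for item in raw]

remember("support-bot", {"type": "preference", "user": "u42", "value": "prefers email follow-ups"})
print(recall("support-bot"))  # available to any later session of the same agent
```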

Memory considerations

But it is not enough to ensure that agents remember more; Singh said organizations also have to make decisions about what agents need to forget.

“There are four high-level decisions you need to make when designing a memory management architecture: What types of memories do you store? How do you store and update memories? How do you retrieve relevant memories? How do you decay memories?” Singh said.

He stressed that companies must answer these questions, because ensuring that an “agentic system maintains speed, scalability and flexibility is key to creating a fast, efficient and accurate user experience.”
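
A toy memory manager makes those four decisions explicit in code. Everything below, including the class names, the keyword-overlap retrieval and the age-based decay rule, is a hypothetical illustration rather than Singh’s or Redis’s design.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    text: str
    kind: str  # decision 1: what types of memories to store ("fact", "preference", "episode")
    created_at: float = field(default_factory=time.time)
    last_used: float = field(default_factory=time.time)

class MemoryManager:
    def __init__(self, max_age_seconds: float = 30 * 24 * 3600):
        self.records: list[MemoryRecord] = []
        self.max_age_seconds = max_age_seconds

    def store(self, text: str, kind: str) -> None:
        # Decision 2: how memories are stored and updated (here, appended in-process).
        self.records.append(MemoryRecord(text, kind))

    def retrieve(self, query: str, k: int = 5) -> list[MemoryRecord]:
        # Decision 3: how relevant memories are retrieved (keyword overlap as a stand-in
        # for vector search in a real system).
        words = set(query.lower().split())
        scored = sorted(
            self.records,
            key=lambda m: len(words & set(m.text.lower().split())),
            reverse=True,
        )
        hits = scored[:k]
        for m in hits:
            m.last_used = time.time()  # recently used memories survive decay longer
        return hits

    def decay(self) -> None:
        # Decision 4: how memories decay, so the agent also forgets.
        cutoff = time.time() - self.max_age_seconds
        self.records = [m for m in self.records if m.last_used >= cutoff]
```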

LangChain also said organizations should be clear about which behaviors should be set by humans and which should be learned through memory; what types of knowledge agents should continuously track; and what should trigger memory recall.

“At LangChain, we’ve found it useful to first identify the capabilities your agent needs to be able to learn, map these to specific memory types or approaches, and only then implement them in your agent,” the company said in a blog post.
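
One lightweight way to apply that advice is to write the mapping down before building anything. The table below is a hypothetical example, mapping each capability to a memory type and a recall trigger; the entries are illustrative, not drawn from LangChain’s blog.

```python
# Hypothetical planning table: which capabilities the agent must learn, which
# memory type serves each one, and what triggers recall.
CAPABILITY_MEMORY_MAP = {
    "remember user preferences": {
        "memory_type": "semantic",
        "recall_trigger": "every user message",
    },
    "follow the escalation procedure": {
        "memory_type": "procedural",
        "recall_trigger": "loaded into the system prompt at session start",
    },
    "recall similar past tickets": {
        "memory_type": "episodic",
        "recall_trigger": "when a new ticket matches an earlier one",
    },
}
```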

Recent research and these new offerings represent only the beginning of the tool sets being built to give agents more durable memory. And as enterprises look to deploy agents at greater scale, memory gives companies an opportunity to differentiate their products.
