EVERY LLM MEMORY SOLUTION IN ONE PLACE

I spent a few weeks cataloging every LLM memory system I could find. Open-source libraries, commercial platforms, built-in features from the major LLM providers. This page is the result.

The space has exploded since mid-2025. A year ago, Mem0 and MemGPT were the only names most people knew. Now there are 20+ active projects, each with a different bet on how memory should work. Some use vector stores. Some use knowledge graphs. Some let the LLM manage its own memory. Some use SQL. One uses the Zettelkasten method, which I did not see coming.

I have tried to be fair. I built widemem, so I have opinions, but this page is meant to be useful regardless of which system you pick. Every project listed here is solving a real problem. Some are solving it differently than I would. That is fine. The field is better for having multiple approaches.

Hybrid (vector + graph): 6 · Vector-first: 5 · Graph-first: 4 · Self-editing (LLM): 2 · SQL-native: 1

Distribution of architectural approaches across active open-source memory projects.


OPEN SOURCE: THE MAJOR PROJECTS

These are the projects with significant traction (10k+ GitHub stars), active development, and real production usage. If you are evaluating memory systems for a serious project, start here.

MEM0

~50k stars · Apache 2.0
Vector + Graph (hybrid) · Python

The most popular option, and for good reason. Mem0 combines vector search with graph memory for entity relationships. Multi-level memory (user, session, agent) and intelligent compression. Their research reports a 26% improvement over OpenAI's memory with 90%+ token savings. 186M+ monthly API calls. $24M raised. If you want the most battle-tested option with the largest community, this is it.

GRAPHITI (ZEP)

~24k stars · Custom CLA
Temporal knowledge graph · Python

Built around temporal validity. Every fact has a time window. The graph tracks how relationships change over time, not just what they are right now. 94.8% on Deep Memory Retrieval benchmark. Enterprise-focused with SOC 2 Type II and HIPAA compliance. Heavier infrastructure than a flat vector store, but the temporal modeling is the most sophisticated in the space.

LETTA (MEMGPT)

~22k stars · Apache 2.0
Self-editing memory (LLM-as-OS) · Python

Fundamentally different approach: the LLM manages its own memory, reading and writing it explicitly through tool calls. Think of it as giving the model a notepad and trusting it to take good notes. Powerful and flexible, but every memory operation costs tokens, and the model can drift over time as it edits its own state. $10M raised.
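The notepad metaphor can be sketched as a pair of memory-editing tools the model invokes. The tool names below are illustrative, in the spirit of the MemGPT paper, not Letta's real API, and the dispatcher stands in for an actual LLM tool-call loop.

```python
# Sketch of self-editing memory: the model manages a small "core memory"
# block through explicit tool calls. Tool names are illustrative only.

core_memory = {"human": "", "persona": "You are a helpful assistant."}

def core_memory_append(section: str, text: str) -> str:
    core_memory[section] = (core_memory[section] + " " + text).strip()
    return f"Appended to {section}."

def core_memory_replace(section: str, old: str, new: str) -> str:
    core_memory[section] = core_memory[section].replace(old, new)
    return f"Edited {section}."

TOOLS = {"core_memory_append": core_memory_append,
         "core_memory_replace": core_memory_replace}

def handle_tool_call(name: str, args: dict) -> str:
    # In a real agent this call comes back from the LLM. Every edit costs
    # tokens, because the tool schemas and results ride along in context.
    return TOOLS[name](**args)

handle_tool_call("core_memory_append",
                 {"section": "human", "text": "User's name is Alice."})
handle_tool_call("core_memory_replace",
                 {"section": "human", "old": "Alice", "new": "Alice (she/her)"})
```

The drift risk mentioned above falls out of this design: every `core_memory_replace` compounds on the model's own earlier edits, with no ground truth to reconcile against.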

SUPERMEMORY

~17k stars · MIT
Vector + semantic · TypeScript

Currently #1 on LongMemEval, LoCoMo, and ConvoMem benchmarks. Automatic fact extraction with contradiction handling. Ships with a browser extension and MCP server. TypeScript-first, which is unusual in this space. If benchmark scores are your primary selection criterion, Supermemory leads.

COGNEE

~13k stars · Apache 2.0
Graph + Vector + Cognitive Science · Python

Six-stage pipeline: classify, chunk, extract entities, summarize, embed, commit to graph. Handles 38+ data sources. Adopted by 70+ companies including Bayer. Pipeline volume grew from ~2k to 1M+ runs in 2025. $7.5M raised from backers including founders of OpenAI and FAIR. The most enterprise-ready open-source option.
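The pipeline shape is easy to picture as a chain of stage functions, each enriching a shared state. Every stage body below is a toy stand-in, not Cognee's implementation; only the six stage names come from the description above.

```python
# Toy sketch of a six-stage ingestion pipeline
# (classify -> chunk -> extract -> summarize -> embed -> commit).
# All stage bodies are stand-ins for illustration.

def classify(doc):
    return {"doc": doc, "kind": "text"}

def chunk(state):
    return {**state, "chunks": state["doc"].split(". ")}

def extract(state):
    words = [w.strip(".,") for c in state["chunks"] for w in c.split()]
    return {**state, "entities": [w for w in words if w.istitle()]}

def summarize(state):
    return {**state, "summary": state["chunks"][0]}

def embed(state):
    # Stand-in embedding: one number per chunk.
    return {**state, "vectors": [[float(len(c))] for c in state["chunks"]]}

def commit(state):
    return {**state, "graph_nodes": sorted(set(state["entities"]))}

PIPELINE = [classify, chunk, extract, summarize, embed, commit]

def run(doc):
    state = doc
    for stage in PIPELINE:
        state = stage(state)
    return state

result = run("Alice joined Bayer. She works on data pipelines.")
```

The staged design is what makes the 1M+ pipeline runs plausible: each stage is independently swappable and retryable, which matters more at enterprise scale than any single stage's cleverness.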

MEMORI (GIBSON AI)

~12k stars · Apache 2.0
SQL-native · Python

The contrarian bet: store memories in standard SQL databases, not vector stores. Framework and LLM agnostic. Bring your own database. If your infrastructure is SQL-first and you do not want to add a vector database, Memori meets you where you are.
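The whole idea fits in standard-library SQL. The schema and helpers below are illustrative of the SQL-native pattern, not Memori's actual interface.

```python
import sqlite3

# Sketch of the SQL-native idea: memories live in an ordinary table and
# retrieval is plain SQL, no embeddings. Schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memories (
        id INTEGER PRIMARY KEY,
        user_id TEXT NOT NULL,
        category TEXT,
        content TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )""")

def remember(user_id, content, category=None):
    conn.execute(
        "INSERT INTO memories (user_id, category, content) VALUES (?, ?, ?)",
        (user_id, category, content))

def recall(user_id, keyword):
    rows = conn.execute(
        "SELECT content FROM memories "
        "WHERE user_id = ? AND content LIKE ? ORDER BY created_at DESC",
        (user_id, f"%{keyword}%"))
    return [r[0] for r in rows]

remember("alice", "Prefers dark mode in every editor", category="preference")
remember("alice", "Allergic to peanuts", category="health")
```

You give up semantic search, but you gain everything SQL already does: transactions, backups, joins against your application tables, and zero new infrastructure.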


OPEN SOURCE: GROWING AND RESEARCH PROJECTS

Smaller but interesting. Some are research projects with NeurIPS papers behind them. Others are newer projects trying a different angle. Worth watching even if you would not put them in production today.

MEMOS

~7k stars
Hybrid (multi-modal) · Python

Memory Operating System concept with composable 'memory cubes.' Async ingestion with millisecond-level latency. MCP support. Interesting architecture but newer and less proven.

OPENMEMORY

~4k stars
Multi-sector cognitive · Python/Node

Five distinct memory types: episodic, semantic, procedural, emotional, reflective. The most neuroscience-inspired architecture in the space. Explainable recall traces. Self-hosted, local-first.

HIPPORAG

~3k stars · MIT
Knowledge graph + PageRank · Python

NeurIPS 2024 paper. Named after hippocampal indexing theory from neurobiology. Combines LLMs, knowledge graphs, and Personalized PageRank for retrieval. Academic but well-regarded research.

A-MEM

~900 stars · MIT
Zettelkasten-inspired · Python

NeurIPS 2025 paper. Uses the Zettelkasten method (interconnected notes with dynamic indexing) for memory organization. Memories evolve and link to each other. Outperforms baselines across six foundation models. Small but academically strong.

LANGMEM

~1.3k stars · MIT
Vector + prompt refinement · Python

Official LangChain memory SDK. Native LangGraph integration. Background memory manager for auto-extraction. If you are already in the LangChain ecosystem, this is the path of least resistance. Not a standalone system.

WIDEMEM

Apache 2.0
Vector + importance scoring · Python

Full disclosure: I built this one. Local-first with SQLite + FAISS. Importance scoring (1-10) with configurable decay. Batch conflict resolution in a single LLM call. YMYL safety net for health/financial/legal facts. 140 tests. Small project, opinionated about forgetting being a feature. No graph memory yet.

GOOGLE ALWAYS-ON MEMORY AGENT

MIT
LLM-driven (no vector DB) · Python

Google's reference implementation for persistent memory. Runs as a background process with 30-minute consolidation cycles. SQLite storage. No vector database. Interesting for its simplicity and the fact that Google chose to skip vector search entirely.


COMMERCIAL PLATFORMS

Some of the open-source projects above also have commercial offerings. These are the managed/hosted versions plus dedicated commercial products.

MEM0 CLOUD

Free / $19/mo / $249/mo / Enterprise

Managed version of the open-source project. Free tier: 10k memories, 1k retrievals/month. Analytics dashboard, 80% token reduction via compression, on-prem deployment for enterprise. 80k+ developers. The most popular commercial option by usage.

ZEP CLOUD

Free tier + credits / Enterprise

Managed temporal knowledge graph. SOC 2 Type II certified, HIPAA BAA available. Bring your own keys/models/cloud (BYOK/BYOM/BYOC). Enterprise-first pricing and compliance posture.

LETTA CLOUD

Free / $49/mo / Enterprise

Stateful agent platform with SSO (SAML/OIDC), RBAC, private model deployment. More of an agent platform than a pure memory service. $10M raised.

SUPERMEMORY API

Free (1M tokens) / $19/mo / $399/mo / Enterprise

Generous free tier. Connectors for GitHub, S3, web crawlers. Startup program with $1k credits. Strongest benchmark numbers in the space.

GRAPHLIT

Free (1GB) / $49/mo + credits

RAG-as-a-Service with 30+ connectors. Multimodal: handles audio transcription, web scraping, knowledge graphs. SDKs in Python, JavaScript, and C#. More of a content infrastructure platform than a pure memory layer.

GOOGLE VERTEX AI MEMORY BANK

GCP pricing (GA)

Topic-based memory with async background extraction via Gemini. Session + long-term memory. Integrated with Google's Agent Engine. If you are on GCP, this is the native option. Generally available since mid-2025.


BUILT-IN MEMORY FROM LLM PROVIDERS

Every major LLM provider now ships some form of memory. These are consumer-facing features baked into the chat interfaces. Most are not available via API, which means you cannot build on them.

OPENAI (CHATGPT MEMORY)

Free (basic) / Plus/Pro (full) · API: No

Two mechanisms: explicit 'saved memories' (user asks ChatGPT to remember something) and automatic chat history referencing. The automatic referencing became more comprehensive in April 2025. Good for high-level preferences, not for exact templates or verbatim content. The most mature consumer memory feature. No API access, so developers must build their own.

ANTHROPIC (CLAUDE MEMORY)

All tiers · API: No (CLAUDE.md for Claude Code)

Editable memory summary per user. Projects feature adds separate memory per workspace. Can import memories from ChatGPT. Memory is summarized, not verbatim. Claude Code gets persistence through CLAUDE.md files and the auto-memory system. No API for memory, though.

GOOGLE GEMINI

Gemini Advanced / Workspace · API: Yes (Vertex AI Memory Bank)

Persistent memory across Workspace apps (Gmail, Docs, Sheets). The only major provider that offers memory via API (through Vertex AI Memory Bank). Enterprise admins can disable for compliance. Agent Development Kit (ADK) has built-in memory primitives.

MICROSOFT COPILOT

M365 Copilot / GitHub Copilot Pro+ · API: No

Cross-app memory spanning Word, Teams, Outlook. Learns working style and preferences. GitHub Copilot has repository-level codebase memory (on by default since March 2026). Enterprise admin controls. No standalone API.

XAI (GROK)

Beta (not in EU/UK) · API: No

Vector embedding-based memory. Similar approach to ChatGPT. Transparent: users can see exactly what Grok remembers. Not available in EU/UK due to GDPR compliance gaps. Beta quality.


WHAT PATTERNS ARE EMERGING

Graph is winning the architecture debate

A year ago, most memory systems were vector-store-plus-similarity-search. Now the top projects (Mem0, Graphiti, Cognee) are all graph-based or hybrid. The reason: entity relationships matter. "Alice works at Google" and "Alice lives in San Francisco" are related through Alice, and a graph captures that relationship in a way a flat vector store does not.

The tradeoff is complexity. A graph database is more infrastructure to manage. For single-user agents on a laptop, a flat vector store with good scoring is often enough. For multi-tenant platforms with complex entity relationships, graph memory is worth the overhead.
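The Alice example can be sketched as a tiny adjacency structure. This is a toy illustration of the graph advantage, not any project's actual graph model:

```python
from collections import defaultdict

# Toy illustration of why a graph helps: two independent facts connect
# through the shared entity "alice", so one adjacency hop recovers both.
edges = defaultdict(list)

def add_fact(subject, predicate, obj):
    edges[subject].append((predicate, obj))
    edges[obj].append((f"inverse:{predicate}", subject))  # traverse both ways

add_fact("alice", "works_at", "google")
add_fact("alice", "lives_in", "san_francisco")

def neighbors(entity):
    return edges[entity]

# Everything known about alice, via adjacency rather than similarity:
about_alice = neighbors("alice")
```

A vector store would rank these two facts against a query independently; the graph retrieves them together because they share a node, no embedding similarity required.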

LLM providers are not solving the developer problem

OpenAI, Anthropic, and Microsoft all have memory features now, but none of them exposes memory through an API; only Google does, via Vertex AI Memory Bank. If you are building an AI application and need memory, you cannot rely on the provider's built-in feature. You need a memory layer.

This is probably intentional. Memory is a differentiator for the consumer chat products. But it creates a gap that open-source projects are filling.

Benchmarks exist but nobody agrees on them

Supermemory leads on LongMemEval, LoCoMo, and ConvoMem. Zep leads on Deep Memory Retrieval (94.8%). Mem0 reports 26% improvement over OpenAI's memory. These numbers are not directly comparable because they measure different things.

There is no ImageNet moment for AI memory yet. No single benchmark that the community agrees on as the standard. This makes comparison shopping difficult. Pick the benchmark that matches your use case and evaluate against that, not the headline number.

Local-first is a real use case, not just a philosophy

Several projects (widemem, Memori, OpenMemory, Google's Always-On agent) run fully local with SQLite and FAISS or no vector database at all. This is not just privacy theater. For personal AI assistants, coding agents, and edge deployments, sending every memory operation to a cloud API is impractical. The local-first projects tend to be simpler but cover a real need that the enterprise-focused solutions do not.
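The retrieval half of a local-first system is small enough to sketch inline. The bag-of-words embedding below is a stand-in for a real embedding model, and the brute-force scan is what a FAISS index replaces at scale; none of this is any listed project's actual code.

```python
import math
from collections import Counter

# Minimal local-first retrieval sketch: sparse bag-of-words vectors and
# a brute-force cosine scan, all in memory. A real system would persist
# to SQLite and swap the linear scan for a FAISS index.

def embed(text: str) -> dict[str, float]:
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {t: v / norm for t, v in counts.items()}

def cosine(a: dict, b: dict) -> float:
    return sum(v * b.get(t, 0.0) for t, v in a.items())

memories: list[tuple[str, dict]] = []

def remember(text: str) -> None:
    memories.append((text, embed(text)))

def recall(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    ranked = sorted(memories, key=lambda m: -cosine(q, m[1]))
    return [text for text, _ in ranked[:k]]

remember("User prefers espresso over filter coffee")
remember("User's cat is named Miso")
remember("User deploys on a Raspberry Pi at home")
```

Every operation here runs on the device, which is the whole point: no memory leaves the machine, and there is no network round-trip per recall.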

Nobody has solved forgetting

Most systems are still primarily accumulation engines. They add information well. Removing or superseding outdated information is an afterthought. Temporal graphs (Zep), conflict resolution (widemem, Mem0), and self-editing memory (Letta) all take different approaches, but none of them have a satisfying answer to the question: "how do you know when a fact is no longer true?"

This is the hardest open problem in the space. Whoever solves intelligent forgetting will have a significant advantage. The brain does it through synaptic pruning and retrieval-induced suppression. We are still using exponential decay functions and hoping the LLM catches contradictions. The gap is large.


HOW TO CHOOSE

Picking a memory system depends on what you are building. Here is a rough decision tree:

Need the largest community and most integrations? Mem0. 50k stars, $24M raised, 80k+ developers. Hard to go wrong.

Need temporal versioning and enterprise compliance? Zep/Graphiti. SOC 2, HIPAA, bi-temporal facts.

Want the agent to manage its own memory? Letta. Most flexible, but token-expensive.

Optimizing for benchmark scores? Supermemory. #1 on three major benchmarks.

Already on LangChain? LangMem. Path of least resistance.

Want local-first with YMYL safety? widemem. SQLite + FAISS, zero cloud dependencies, opinionated about what to forget.

SQL shop that does not want a vector database? Memori. Stores memories in Postgres/MySQL/SQLite. No embeddings required.

Or just try two or three. Most of these install with pip and take five minutes to set up. The best way to evaluate a memory system is to throw your actual data at it and see what comes back.


This page will be updated as the space evolves. If I missed a project or got something wrong, open an issue and I will fix it.