
Your agents forget everything.
Ours don't.

d33pmemory is a cognitive memory engine for AI agents. It doesn't just store what users say — it understands it. Every conversation becomes structured knowledge with confidence tracking, evidence chains, and automatic contradiction resolution.

142 tokens vs 15,000
99.1% context compression
2 calls: ingest + recall
POST /v1/ingest
{
  "user_message": "Book dinner for me and Emma",
  "source": "slack-bot"
}
→ Extracted memories
fact:    gluten-free diet   0.95  stated
rel:     companion: Emma    0.88  inferred
event:   dinner booking     0.92  stated
pref:    Italian cuisine    0.74  inferred

Quick start

Two integration paths

OpenClaw agents (Recommended)

Install the plugin. Everything is automatic — no code changes needed. Hooks handle ingest and recall.

openclaw plugins install d33pmemory

Or via npm: npm install @d33pmemory/openclaw-plugin

Set agentId: "" in your config and the plugin automatically uses your gateway's agent name.

View on npm →
Other agent frameworks

Add skill.md to your agent. Use heartbeat batching + on-demand recall.

View skill.md →
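Heartbeat batching can be as simple as buffering turns and flushing them as a batch on a size threshold or a timer tick. A minimal sketch (the HeartbeatBuffer class, the thresholds, and the flush callback are illustrative, not part of the d33pmemory API — in practice the callback would POST the batch to /v1/ingest):

```python
import time
from typing import Callable, Optional

class HeartbeatBuffer:
    """Buffer conversation turns and flush them in batches.

    `flush_fn` is whatever sends a batch to the ingest endpoint; it is
    a plain callable here so the batching logic stays framework-agnostic.
    """

    def __init__(self, flush_fn: Callable[[list], None],
                 max_items: int = 5, max_age_s: float = 30.0):
        self.flush_fn = flush_fn
        self.max_items = max_items
        self.max_age_s = max_age_s
        self._items: list = []
        self._first_at: Optional[float] = None

    def add(self, user_message: str, source: str) -> None:
        """Queue one turn; flush early when the batch is full."""
        if self._first_at is None:
            self._first_at = time.monotonic()
        self._items.append({"user_message": user_message, "source": source})
        if len(self._items) >= self.max_items:
            self.flush()

    def tick(self) -> None:
        """Call from your agent's heartbeat loop; flushes aged batches."""
        if self._items and time.monotonic() - self._first_at >= self.max_age_s:
            self.flush()

    def flush(self) -> None:
        if not self._items:
            return
        batch, self._items, self._first_at = self._items, [], None
        self.flush_fn(batch)

# Demo: collect batches locally instead of hitting the network.
sent = []
buf = HeartbeatBuffer(sent.append, max_items=2)
buf.add("Book dinner for me and Emma", "slack-bot")
buf.add("Make it Friday at 7pm", "slack-bot")  # second add fills the batch
print(len(sent), len(sent[0]))  # → 1 2
```

On-demand recall then stays a separate, synchronous call whenever the agent needs context.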
1. Sign up and create an API key for your account.
2. Create a named agent via POST /v1/agents (e.g. 'personal-assistant').
3. Ingest: call POST /v1/ingest with agent_id after your agent responds.
4. Recall: call POST /v1/recall with agent_id and a natural language query.
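Steps 2–4 are plain JSON-over-HTTP calls. A sketch of the request shapes (the base URL, auth header, and field names other than agent_id are assumptions — check the API reference for the real schema):

```python
import json

# Placeholder host and auth; substitute the real base URL and your API key.
BASE_URL = "https://api.example.invalid"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY",
           "Content-Type": "application/json"}

def create_agent_request(name: str):
    """Step 2: create a named agent."""
    return ("POST", "/v1/agents", {"name": name})

def ingest_request(agent_id: str, user_message: str, source: str):
    """Step 3: send the user turn after your agent responds."""
    return ("POST", "/v1/ingest",
            {"agent_id": agent_id, "user_message": user_message,
             "source": source})

def recall_request(agent_id: str, query: str):
    """Step 4: recall with a natural-language query."""
    return ("POST", "/v1/recall", {"agent_id": agent_id, "query": query})

method, path, body = ingest_request("personal-assistant",
                                    "Book dinner for me and Emma",
                                    "slack-bot")
print(method, path)  # → POST /v1/ingest
print(json.dumps(body, indent=2))
```

Send each tuple with any HTTP client; the only state you carry is the agent_id.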

The problem

Generic memory tools weren't built for agents.

Vector stores, conversation dumps, and DIY parsers all fall apart at scale. Here's why.

Vector stores

Just similarity search — no semantic structure
No confidence scoring or provenance
No contradiction detection
Grows forever — no compression

Conversation history

Token explosion at scale
No compression whatsoever
Model still forgets with long contexts
Expensive and slow to process

DIY solutions

Brittle custom parsers
Engineering overhead never ends
Breaks at scale
No shared memory across agents

d33pmemory was built from scratch for one thing: making AI agents actually know their users.

How it works

Every conversation becomes structured knowledge

Multi-layer cognitive model

Episodic, semantic, and procedural memory layers — mirroring how humans organize knowledge.

Confidence tracking

Every memory scored 0.0–1.0. Stated vs inferred is always clear. Evolves as evidence accumulates.

Auto consolidation

Corroborated memories strengthen. Stale ones decay. Contradictions resolve automatically.
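One way to picture consolidation (the engine's actual weights and decay schedule are internal; this sketch only illustrates the three behaviors):

```python
def consolidate(memory: dict, *, corroborations: int = 0,
                days_stale: int = 0, contradicted: bool = False) -> dict:
    """Illustrative update rule: corroboration strengthens confidence,
    staleness decays it, and newer contradicting evidence supersedes
    the memory. The 0.05/0.01 rates are made up for the demo."""
    c = memory["confidence"]
    c = min(0.99, c + 0.05 * corroborations)  # strengthen
    c = max(0.05, c - 0.01 * days_stale)      # decay
    out = dict(memory, confidence=round(c, 2))
    if contradicted:
        out["status"] = "superseded"          # resolved toward newer evidence
    return out

m = {"type": "pref", "content": "Italian cuisine", "confidence": 0.74}
print(consolidate(m, corroborations=2)["confidence"])  # → 0.84
print(consolidate(m, days_stale=30)["confidence"])     # → 0.44
```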

Semantic recall

Search by meaning, not keywords. Describe what you need in natural language.

Context compilation

142 tokens of distilled intelligence replacing 15,000 tokens of conversation history.
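Instead of replaying transcripts, the agent receives a distilled briefing. A rough sketch of what compiling memories into a prompt-ready block could look like (the format and the 0.5 cutoff are illustrative, not the engine's actual output):

```python
def compile_context(memories: list) -> str:
    """Render high-confidence memories as one compact, prompt-ready block."""
    lines = [
        f"- [{m['type']}] {m['content']} ({m['source']}, conf {m['confidence']:.2f})"
        for m in sorted(memories, key=lambda m: -m["confidence"])
        if m["confidence"] >= 0.5  # drop low-confidence noise
    ]
    return "Known about this user:\n" + "\n".join(lines)

memories = [
    {"type": "fact", "content": "gluten-free diet",
     "confidence": 0.95, "source": "stated"},
    {"type": "pref", "content": "Italian cuisine",
     "confidence": 0.74, "source": "inferred"},
    {"type": "pref", "content": "maybe likes jazz",
     "confidence": 0.31, "source": "inferred"},
]
print(compile_context(memories))
```

The compiled block is what goes into your prompt — a few lines, not a transcript.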

Evidence chains

Every fact has provenance — which interaction, when, and whether stated or inferred.

Why d33pmemory

Beyond vector search

Standard RAG

Linear token growth
Ephemeral retention
No contradiction handling
Context = message dump
No confidence or provenance

d33pmemory

99% context compression
Persistent evolving memory
Auto conflict resolution
Compiled context payloads
Confidence + evidence chains

Memory structure

Five types of knowledge

Our extraction engine categorizes every piece of knowledge, each with confidence and provenance.

Fact: User follows a gluten-free diet
Relationship: Emma is a frequent companion
Event: Dinner booked for Feb 15
Preference: Prefers Italian restaurants
Pattern: Books restaurants on weekends

See full memory structure →
Memory object
{
  "type": "fact",
  "content": "User is gluten-free",
  "confidence": 0.95,
  "source": "stated",
  "scope": "shared",
  "contributed_by": "slack-bot"
}
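If you want memory objects typed on your side, a light validation layer is easy to write. A sketch based on the example object above (d33pmemory does not ship this class, and the exact type strings the API returns are an assumption beyond the "fact" shown):

```python
from dataclasses import dataclass

MEMORY_TYPES = {"fact", "relationship", "event", "preference", "pattern"}

@dataclass(frozen=True)
class Memory:
    type: str            # one of the five knowledge types
    content: str
    confidence: float    # 0.0–1.0
    source: str          # "stated" or "inferred"
    scope: str = "private"       # or "shared" for team pools
    contributed_by: str = ""

    def __post_init__(self):
        if self.type not in MEMORY_TYPES:
            raise ValueError(f"unknown memory type: {self.type}")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be in [0.0, 1.0]")

m = Memory(type="fact", content="User is gluten-free", confidence=0.95,
           source="stated", scope="shared", contributed_by="slack-bot")
print(m.type, m.confidence)  # → fact 0.95
```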

Use cases

Agents that actually know people.

Personal AI assistant

Remembers preferences, habits, relationships. Gets smarter every conversation. Never asks twice what it already knows.

Customer support bots

10 agents, 1 shared brain. New agent joins and already knows your top issues, common questions, and product quirks.

Code agents

Tracks your stack, preferences, past decisions. Stops asking the same architectural questions every session.

Agents & Teams

One API key. As many agents as you need.

Create named agents and organize them into teams. Each agent builds its own private memory. Agents in the same team share a collective pool — new agents inherit everything instantly.

Named agents: pass agent_id in every call — each agent gets its own isolated memory
Teams = shared pools: assign agents to a team to share collective knowledge
Private by default: no cross-contamination between agents
Instant inheritance: a new team member starts with everything the team knows
Learn more about agents & teams →
Architecture

Your account (1 API key)
├── agent: personal-assistant → private
├── agent: slack-bot → private
└── team: customer-support
    ├── agent: support-1 → private + shared
    ├── agent: support-2 → private + shared
    └── shared pool → all team agents
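The tree implies a simple recall rule: an agent sees its own private memories plus its team's shared pool, and nothing else. A sketch of that visibility rule (the data layout is illustrative — the service enforces this server-side):

```python
def visible_memories(agent_id, agents, memories):
    """Return memories an agent can recall: its own private ones
    plus anything in its team's shared pool."""
    team = agents[agent_id].get("team")
    return [
        m for m in memories
        if (m["scope"] == "private" and m["agent_id"] == agent_id)
        or (m["scope"] == "shared" and team is not None and m.get("team") == team)
    ]

agents = {
    "personal-assistant": {"team": None},
    "support-1": {"team": "customer-support"},
    "support-2": {"team": "customer-support"},
}
memories = [
    {"scope": "private", "agent_id": "support-1", "content": "ticket #8812 context"},
    {"scope": "shared", "team": "customer-support", "content": "refund policy quirk"},
    {"scope": "private", "agent_id": "personal-assistant", "content": "likes Italian"},
]
print([m["content"] for m in visible_memories("support-2", agents, memories)])
# → ['refund policy quirk']
```

Adding a "support-3" agent to the team would make the shared pool visible to it immediately — that is the instant inheritance.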

Pricing

Simple, transparent pricing

Start free. Scale when your agents need more memory.

Starter

$9/mo
  • 1 agent · 1 team
  • 1,000 memories
  • 500 ingests/mo
Get started

Basic

$19/mo
  • 3 agents · 3 teams
  • 5,000 memories
  • 2,000 ingests/mo
Get started

Pro

Popular
$39/mo
  • Unlimited agents & teams
  • 100,000 memories
  • 20,000 ingests/mo
  • Shared team memory
Get started

See full plan comparison →

5 min integration time
2 endpoints to learn
0 infrastructure to manage

"We replaced 200 lines of RAG pipeline code with two API calls. Our agent went from forgetting users mid-conversation to remembering preferences from weeks ago."

AK · AI Engineer · building with d33pmemory since beta

AI agents that remember.

Give your agents persistent memory in under 5 minutes. Free to start.

Status: Operational