Now in Public Beta

Persistent Memory
for AI Applications

Your LLMs forget everything between sessions. CtxVault gives them long-term memory — decisions, conventions, patterns — searchable and always available.

your-app.py
import requests

api_key = "YOUR_API_KEY"  # from your CtxVault dashboard

# 1. Store a decision your AI made
response = requests.post(
  "https://api.ctxvault.dev/api/v1/memory/candidates",
  headers={"Authorization": f"Bearer {api_key}"},
  json={
    "project_slug": "my-app",
    "items": [{
      "type": "decision",
      "title": "Use PostgreSQL for persistence",
      "content": "Chose PostgreSQL with pgvector for vector search support."
    }]
  }
)

# 2. Later, retrieve relevant context for your LLM
context = requests.post(
  "https://api.ctxvault.dev/api/v1/context-pack",
  headers={"Authorization": f"Bearer {api_key}"},
  json={"project_slug": "my-app", "query": user_question}
).json()

# 3. Inject into your LLM prompt
messages = [
  {"role": "system", "content": context["pack"]},
  {"role": "user", "content": user_question}
]

Everything Your AI Needs to Remember

A complete memory infrastructure for AI-powered applications.

🧠

Semantic Memory

Store decisions, conventions, patterns, and lessons. Search by meaning, not just keywords, using vector embeddings.

📦

Context Packs

Auto-generate optimized context bundles for your LLM prompts. Token-budgeted, relevance-scored, ready to inject.
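To make "token-budgeted, relevance-scored" concrete, here is an illustrative sketch of how such a pack could be assembled. The field names (`score`, `content`), the greedy selection, and the whitespace token estimate are assumptions for illustration, not CtxVault's actual implementation.

```python
def build_pack(memories, token_budget):
    """Greedily pack the highest-scored memories under a token budget."""
    pack_lines = []
    used = 0
    for mem in sorted(memories, key=lambda m: m["score"], reverse=True):
        cost = len(mem["content"].split())  # crude token estimate
        if used + cost > token_budget:
            continue  # skip memories that would blow the budget
        pack_lines.append(mem["content"])
        used += cost
    return "\n".join(pack_lines)

memories = [
    {"score": 0.91, "content": "Use PostgreSQL with pgvector for search."},
    {"score": 0.40, "content": "Team prefers tabs over spaces."},
    {"score": 0.75, "content": "All API routes are versioned under /api/v1."},
]

# Highest-scoring memories fit; the low-scoring one is dropped for budget.
print(build_pack(memories, token_budget=15))
```

In practice you just receive the finished pack string from the API; the point is that what arrives is already trimmed to fit your prompt window.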

🔑

API-First

Simple REST API with Bearer token auth. Python and Node.js SDKs, an MCP Server for Claude, and GPT Actions for ChatGPT.

🔍

Vector Search

Powered by pgvector. Find the most relevant memories instantly using cosine similarity on OpenAI embeddings.
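The ranking math is plain cosine similarity. A minimal sketch on toy 3-dimensional vectors (real embeddings have on the order of 1,536 dimensions, and the comparison runs inside pgvector, not in Python):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

query = [0.9, 0.1, 0.0]
memories = {
    "Use PostgreSQL for persistence": [0.8, 0.2, 0.1],
    "Team prefers tabs over spaces": [0.0, 0.9, 0.4],
}

# Rank memories by similarity to the query embedding.
ranked = sorted(memories, key=lambda k: cosine_similarity(query, memories[k]),
                reverse=True)
print(ranked[0])  # → Use PostgreSQL for persistence
```

Because similarity is computed on embeddings rather than raw text, a query like "which database are we using?" matches the PostgreSQL memory even though they share no keywords.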

🔄

Memory Lifecycle

Propose → Verify → Pin → Deprecate. Full audit trail. Memories decay naturally unless verified by reviewers.
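The lifecycle above behaves like a small state machine. A hypothetical sketch of those transitions, with state names taken from the description but a class and API that are illustrative only, not CtxVault's SDK:

```python
# Allowed transitions: Propose → Verify → Pin → Deprecate (illustrative).
ALLOWED = {
    "proposed": {"verified", "deprecated"},
    "verified": {"pinned", "deprecated"},
    "pinned": {"deprecated"},
    "deprecated": set(),  # terminal state
}

class MemoryItem:
    def __init__(self, title):
        self.title = title
        self.state = "proposed"
        self.audit = [("proposed", "created")]  # full audit trail

    def transition(self, new_state, reason):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self.audit.append((new_state, reason))

item = MemoryItem("Use PostgreSQL for persistence")
item.transition("verified", "confirmed by reviewer")
item.transition("pinned", "core architectural decision")
print(item.audit)  # every state change is recorded with a reason
```

Pinned memories are always included in context packs, while unverified ones lose ranking weight over time.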

🏗️

Multi-Project

Isolate memory per project. Manage multiple AI applications from one account. Role-based access control.

Three Steps to Persistent AI Memory

01

Sign Up & Get API Key

Create an account in seconds. Get your API key and project instantly. No credit card required.

02

Push Memory Items

Send decisions, conventions, and patterns from your AI workflows. They're embedded and stored automatically.

03

Query & Inject Context

Before each LLM call, fetch a context pack. Relevant memories are scored, ranked, and token-budgeted for you.

Ready to Give Your AI a Memory?

Free to start. No credit card needed. Get your API key in 30 seconds.

Get Started Free