Open Source

MAST

The SQLite of agent memory. An embeddable storage engine with vector search, full-text search, entity graph, and time-tiered compaction — in a single file.

The Problem

Agent memory is fragmented. Vector databases for embeddings. Relational stores for metadata. Graph databases for entity relationships. Separate compaction logic bolted on top. Developers end up cobbling together three or four systems just to give an agent persistent context.

MAST unifies all of this in one embeddable library. No servers. No network hops. One file on disk. Open the file, store memories, recall them — with the full power of vector search, keyword matching, graph traversal, and temporal compaction available in every query.

How It Works

Four core operations. Each one atomic.

STORE

Write a memory, generate its embedding, and update all indexes in one atomic transaction. Content, metadata, timestamps, and entity links all land together.

RECALL

Hybrid query combining vector similarity, BM25 keyword matching, metadata filters, and tier blending. One call, one ranked result set.

COMPACT

Promote memories up the tier hierarchy: raw events become summaries, summaries become patterns, patterns become core identity. Synthesis, not summarization.

RELATE

First-class entity graph. Link memories to entities, entities to each other. Traverse relationships during recall to surface contextually relevant memories.
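The ladder COMPACT climbs can be modeled as a plain enum. This is an illustrative sketch of the tier hierarchy described above, not mast-core's actual Tier type:

```rust
// Hypothetical model of the four-tier ladder that COMPACT climbs.
// Names mirror the description above, not mast-core's real API.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Tier {
    Raw,     // individual events as stored
    Summary, // synthesized from groups of raw events
    Pattern, // recurring structure extracted from summaries
    Core,    // stable identity distilled from patterns
}

impl Tier {
    /// The tier a memory is promoted to on compaction, if any.
    fn promote(self) -> Option<Tier> {
        match self {
            Tier::Raw => Some(Tier::Summary),
            Tier::Summary => Some(Tier::Pattern),
            Tier::Pattern => Some(Tier::Core),
            Tier::Core => None, // top of the hierarchy
        }
    }
}

fn main() {
    assert_eq!(Tier::Raw.promote(), Some(Tier::Summary));
    assert_eq!(Tier::Core.promote(), None);
}
```

Each compaction pass moves memories exactly one rung up, so nothing jumps straight from raw events to core identity.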

Install
rust
# Cargo.toml
[dependencies]
mast-core = { git = "https://github.com/ericc59/mast" }
mast-embed-openai = { git = "https://github.com/ericc59/mast" }  # optional
mast-embed-local = { git = "https://github.com/ericc59/mast" }   # optional, on-device
python
pip install maturin
git clone https://github.com/ericc59/mast && cd mast/crates/mast-py
maturin develop --release
node.js
git clone https://github.com/ericc59/mast && cd mast/crates/mast-node
npm install && npm run build
cli
cargo install --git https://github.com/ericc59/mast mast-cli
Usage
rust
use mast_core::{Mast, config::MastConfig, types::*};
use mast_embed_openai::OpenAIEmbedder;
use std::collections::HashMap;

let mut mast = Mast::open(MastConfig::default().with_db_path("agent.mast"))?;
let embedder = OpenAIEmbedder::new("text-embedding-3-small");

// store a memory
let mem = mast.store(StoreRequest {
    collection: "user:alice".into(),
    content: "Alice prefers dark mode and monospace fonts".into(),
    metadata: HashMap::from([("source".into(), "onboarding".into())]),
    tier: Tier::Active,
    ..Default::default()
}, &embedder).await?;

// vector recall
let results = mast.recall(RecallRequest {
    collection: "user:alice".into(),
    query: Some("ui preferences".into()),
    limit: 5,
    ..Default::default()
}, &embedder).await?;

// hybrid recall — vector + BM25 full-text
let results = mast.recall(RecallRequest {
    collection: "user:alice".into(),
    query: Some("ui preferences".into()),
    text_query: Some("dark mode".into()),
    search_mode: SearchMode::Hybrid { vector_weight: 0.6, text_weight: 0.4 },
    filter: Some(MetadataFilter::Eq("source".into(), "onboarding".into())),
    limit: 10,
    ..Default::default()
}, &embedder).await?;

// entity graph
mast.relate(RelateRequest {
    collection: "user:alice".into(),
    source: "alice".into(),
    target: "dark_mode".into(),
    relation: "prefers".into(),
    weight: 0.9,
})?;

let edges = mast.traverse(TraverseRequest {
    collection: "user:alice".into(),
    start: "alice".into(),
    max_depth: 3,
    ..Default::default()
})?;

mast.close()?;
python
from mast import Mast

db = Mast("agent.mast")

# store
memory = db.store("user:alice", "Alice prefers dark mode", embedder,
                   metadata={"source": "onboarding"}, tier="active")

# vector recall
results = db.recall("user:alice", embedder, query="ui preferences", limit=5)

# hybrid recall
results = db.recall("user:alice", embedder,
                    query="ui preferences", text_query="dark mode",
                    search_mode="hybrid")

# graph
db.relate("user:alice", "alice", "dark_mode", "prefers", weight=0.9)
edges = db.traverse("user:alice", "alice", max_depth=3)

# snapshots
db.snapshot("user:alice", "backup.jsonl")
db.restore("backup.jsonl", merge=True)

db.close()
node.js
const { MastDb } = require('mast-memory');

const db = new MastDb('agent.mast');

// store
const mem = db.store('user:alice', 'Alice prefers dark mode', embed, 384,
                     { source: 'onboarding' }, 'active');

// recall
const results = db.recall('user:alice', embed, 384, 'ui preferences',
                          null, 'vector', 5);

// hybrid recall
const hybrid = db.recall('user:alice', embed, 384, 'ui preferences',
                         'dark mode', 'hybrid', 10);

// graph
db.relate('user:alice', 'alice', 'dark_mode', 'prefers', 0.9);
const edges = db.traverse('user:alice', 'alice', 3);

db.close();
CLI
bash
# store a memory
mast store --db agent.mast -c "user:alice" \
  --content "Alice prefers dark mode" \
  --metadata source=onboarding --tier active

# vector recall
mast recall --db agent.mast -c "user:alice" \
  --query-vec 0.1,0.2,0.3 --limit 5

# full-text search
mast recall --db agent.mast -c "user:alice" \
  --text-query "dark mode" --search-mode fulltext

# hybrid search
mast recall --db agent.mast -c "user:alice" \
  --query "preferences" --text-query "dark mode" \
  --search-mode hybrid --limit 10

# entity graph
mast relate --db agent.mast -c "user:alice" \
  --source alice --target dark_mode --relation prefers --weight 0.9
mast traverse --db agent.mast -c "user:alice" \
  --start alice --max-depth 3

# collection management
mast info --db agent.mast
mast list --db agent.mast -c "user:alice"
mast vacuum --db agent.mast -c "user:alice"

# snapshot & restore
mast snapshot --db agent.mast -c "user:alice" --output backup.jsonl
mast restore --db agent.mast --input backup.jsonl --merge
Architecture

Three layers, bottom to top. Everything lives in a single file.

Storage

Single-file ACID storage via redb. Copy-on-write B-trees for crash safety. No WAL, no journal, no compaction pauses. The file is the database.

Indexing

HNSW vector index (usearch) for approximate nearest neighbors. BM25 full-text index with Porter2 stemming. Inverted metadata index for fast filtered queries.
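The BM25 half of that index scores terms with the standard saturation formula. A minimal single-term scorer, assuming the conventional k1 = 1.2 and b = 0.75 defaults (mast-core's own tuning may differ, and this sketch skips the Porter2 stemming step):

```rust
// Minimal BM25 term score: idf * tf*(k1+1) / (tf + k1*(1 - b + b*dl/avgdl)).
// k1 and b use the conventional defaults; stemming is omitted for brevity.
fn bm25_term_score(tf: f64, doc_len: f64, avg_doc_len: f64, idf: f64) -> f64 {
    let (k1, b) = (1.2, 0.75);
    idf * (tf * (k1 + 1.0)) / (tf + k1 * (1.0 - b + b * doc_len / avg_doc_len))
}

fn main() {
    // Same term frequency, shorter document => higher score.
    let short = bm25_term_score(2.0, 50.0, 100.0, 1.5);
    let long = bm25_term_score(2.0, 200.0, 100.0, 1.5);
    assert!(short > long);
}
```

The length normalization in the denominator is what keeps short, dense memories from being drowned out by long ones.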

Application

Pluggable Embedder and Compactor traits. Four memory tiers: raw, summary, pattern, core. Tier-aware recall blends across levels with configurable weighting.
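That tier blending can be pictured as a per-tier weight applied to each hit's raw similarity before ranking. The weights and names below are hypothetical, purely to illustrate the idea:

```rust
// Illustrative tier-aware blending: scale each hit's similarity by a
// configurable per-tier weight, then rank. Not mast-core's real code.
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum Tier { Raw, Summary, Pattern, Core }

fn tier_weight(t: Tier) -> f32 {
    // Example weights: favor distilled tiers over raw events.
    match t {
        Tier::Raw => 0.7,
        Tier::Summary => 0.9,
        Tier::Pattern => 1.0,
        Tier::Core => 1.2,
    }
}

fn rank(mut hits: Vec<(&'static str, f32, Tier)>) -> Vec<&'static str> {
    hits.sort_by(|a, b| {
        let sa = a.1 * tier_weight(a.2);
        let sb = b.1 * tier_weight(b.2);
        sb.partial_cmp(&sa).unwrap()
    });
    hits.into_iter().map(|(id, _, _)| id).collect()
}

fn main() {
    let ranked = rank(vec![
        ("raw-event", 0.80, Tier::Raw),      // 0.56 after weighting
        ("core-identity", 0.60, Tier::Core), // 0.72 after weighting
    ]);
    assert_eq!(ranked, vec!["core-identity", "raw-event"]);
}
```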

Key Decisions
Single file, not client-server

Like SQLite, MAST is a library you link into your process. No daemon, no port, no connection string. Open a file path, get a database. Deploy anywhere you can write to disk.

Compaction as synthesis, not summarization

Summarization loses signal. Compaction in MAST promotes memories through tiers by extracting patterns and relationships, preserving the important structure while reducing volume.

Zero vendor deps in core

The core library depends only on redb and usearch. No cloud SDKs, no API keys, no runtime services. Embedders and compactors are pluggable traits — bring your own model or use the defaults.

Graph as first-class citizen

Entity relationships aren't an afterthought. Every memory can link to entities, and entity-entity edges carry typed relations. Graph traversal is integrated into the recall query planner.
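Conceptually, the traversal half of that integration is a depth-capped breadth-first walk over the entity edges, in the spirit of the traverse call shown in the Usage section. A self-contained sketch (not mast-core's actual implementation):

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Illustrative breadth-first traversal with a depth cap, mirroring what a
// traverse(start, max_depth) call does conceptually.
fn traverse(
    edges: &HashMap<&str, Vec<&str>>,
    start: &str,
    max_depth: usize,
) -> HashSet<String> {
    let mut seen = HashSet::new();
    let mut queue = VecDeque::from([(start.to_string(), 0usize)]);
    while let Some((node, depth)) = queue.pop_front() {
        if !seen.insert(node.clone()) || depth == max_depth {
            continue; // already visited, or depth budget exhausted
        }
        for next in edges.get(node.as_str()).into_iter().flatten() {
            queue.push_back((next.to_string(), depth + 1));
        }
    }
    seen
}

fn main() {
    let mut edges = HashMap::new();
    edges.insert("alice", vec!["dark_mode"]);
    edges.insert("dark_mode", vec!["monospace"]);
    let reached = traverse(&edges, "alice", 1);
    assert!(reached.contains("alice") && reached.contains("dark_mode"));
    assert!(!reached.contains("monospace")); // beyond max_depth
}
```

Memories linked to any entity reached by the walk become candidates for the recall ranking.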

Hybrid search in one query

Vector similarity and BM25 keyword matching run in the same query, with min-max normalization and configurable weighting. No separate index calls, no manual result merging.
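The fusion step can be illustrated in a few lines: min-max normalize each score list onto [0, 1], then take the weighted sum. A standalone sketch using the 0.6/0.4 split from the Usage example:

```rust
// Illustrative min-max normalization and weighted fusion of two score lists.
fn min_max(scores: &[f32]) -> Vec<f32> {
    let (lo, hi) = scores
        .iter()
        .fold((f32::MAX, f32::MIN), |(l, h), &s| (l.min(s), h.max(s)));
    let range = (hi - lo).max(f32::EPSILON); // guard against identical scores
    scores.iter().map(|s| (s - lo) / range).collect()
}

fn hybrid(vector: &[f32], text: &[f32], vw: f32, tw: f32) -> Vec<f32> {
    let (v, t) = (min_max(vector), min_max(text));
    v.iter().zip(&t).map(|(a, b)| vw * a + tw * b).collect()
}

fn main() {
    // Doc 0 wins on vector similarity, doc 1 wins on keywords;
    // the fused score decides the final order.
    let fused = hybrid(&[0.9, 0.2, 0.1], &[1.0, 7.5, 0.0], 0.6, 0.4);
    assert!(fused[0] > fused[1] && fused[1] > fused[2]);
}
```

Normalizing first matters: raw BM25 scores are unbounded while cosine similarity is not, so without it one signal would silently dominate.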

Versioned serialization

On-disk format is versioned from day one. Schema migrations are handled transparently on open. Old files always work with new code.
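A minimal model of that open-time check, purely illustrative (the real format version and migration steps are internal to mast-core):

```rust
// Illustrative version gate on open: read the stored format version and
// apply pending migrations one step at a time.
const CURRENT_VERSION: u32 = 3; // hypothetical current format version

fn migrate(mut on_disk: u32) -> Result<u32, String> {
    if on_disk > CURRENT_VERSION {
        return Err(format!("file written by newer format version {on_disk}"));
    }
    while on_disk < CURRENT_VERSION {
        // each step upgrades exactly one version, so old files always open
        on_disk += 1;
    }
    Ok(on_disk)
}

fn main() {
    assert_eq!(migrate(1), Ok(CURRENT_VERSION));
    assert!(migrate(CURRENT_VERSION + 1).is_err());
}
```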

Pluggable Embedders

Core has zero vendor dependencies. Bring any embedding provider via the Embedder trait.

rust
#[async_trait]
pub trait Embedder: Send + Sync {
    async fn embed(&self, text: &str) -> Result<Vec<f32>, MastError>;
    async fn embed_batch(&self, texts: &[String]) -> Result<Vec<Vec<f32>>, MastError>;
    fn dimensions(&self) -> usize;
}
OpenAI: text-embedding-3-small
Voyage AI: voyage-3
Local: AllMiniLM-L6-v2, 384d, ~23MB
Mock: deterministic, for tests
Benchmarks

13 Criterion benchmarks across store, recall, delete, vacuum, and compaction. Run with cargo bench -p mast-core.

store_single: 128d, 768d, 1536d

Single memory store with embedding generation and index update

store_batch: 100×128d, 1000×128d, 100×768d, 1000×768d

Batch store — single embed_batch call, one transaction

recall_vector: 100@128d, 1000@128d, 100@768d, 1000@768d

Vector similarity search over HNSW index

recall_filtered: 1000@128d

Vector search with metadata filter predicate

recall_bm25: 1000 memories

Full-text BM25 keyword search with Porter2 stemming

recall_hybrid: 1000@128d

Combined vector + BM25 with min-max normalization

delete_single: 1000 baseline

Single memory deletion with index cleanup

vacuum: 500 expired

Bulk TTL expiration cleanup

compact: 100 memories

Tier promotion with synthesis and re-embedding

Test Coverage
Core unit: 109 · Integration: 64 · Embedder: 16 · CLI: 24 · FFI: 7 · Total: 220+
Bindings
Rust (native) · Python (PyO3) · Node.js (napi-rs) · C FFI · CLI
Stack
Rust · redb · usearch · fastembed · PyO3 · napi-rs · cbindgen