Memory Categories
The agent extracts up to 3 memories per category (max 200 characters each):

| Category | What It Captures | Example |
|---|---|---|
| Patterns | Reusable code or workflow patterns | "Used recon, extraction, curate workflow for auth module" |
| Preferences | User style, naming, and structure decisions | "Prefers functional components over class components" |
| Entities | Key files, modules, APIs, and dependencies | "src/auth is an actively curated module" |
| Decisions | Architectural choices (immutable log) | "Chose RS256 over HS256 for JWT signing" |
| Skills | Tool invocation recipes that worked | "Start curate with recon tool, then map-extract" |
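The category caps above (3 memories per category, 200 characters each) can be sketched as a small validation step. This is a minimal illustration, not the actual implementation; the `Memory` class and `validate` helper are hypothetical names.

```python
from dataclasses import dataclass

MAX_MEMORY_CHARS = 200
MAX_PER_CATEGORY = 3
CATEGORIES = ("patterns", "preferences", "entities", "decisions", "skills")

@dataclass
class Memory:
    category: str
    text: str

def validate(drafts):
    """Enforce the per-category cap and length limit described above."""
    counts = {}
    kept = []
    for m in drafts:
        if m.category not in CATEGORIES:
            continue  # unknown category: drop the draft
        if counts.get(m.category, 0) >= MAX_PER_CATEGORY:
            continue  # already have 3 memories in this category
        counts[m.category] = counts.get(m.category, 0) + 1
        kept.append(Memory(m.category, m.text[:MAX_MEMORY_CHARS]))
    return kept
```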
How Extraction Works
- Threshold — Extraction triggers after a session with at least 4 messages (or just 1 for curate sessions). Short or trivial interactions are skipped.
- Serialization — The conversation is serialized into a text digest, truncated to 12,000 characters at a natural message boundary.
- LLM extraction — An LLM call identifies 0–3 memories per category from the digest.
- Fallback — For curate sessions, deterministic fallback drafts are generated from the curated file paths and module labels instead of using the LLM. This is faster, cheaper, and more consistent for routine curations.
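The threshold and serialization steps above can be sketched as follows. This is an illustrative sketch under assumed names (`should_extract`, `serialize_digest`); the key behavior from the text is that truncation stops at a whole-message boundary rather than mid-message.

```python
MIN_MESSAGES = 4
MIN_MESSAGES_CURATE = 1
DIGEST_LIMIT = 12_000  # characters

def should_extract(messages, is_curate_session=False):
    """Skip short or trivial sessions per the threshold rule."""
    threshold = MIN_MESSAGES_CURATE if is_curate_session else MIN_MESSAGES
    return len(messages) >= threshold

def serialize_digest(messages):
    """Serialize (role, text) pairs, truncating at a message boundary."""
    parts = []
    total = 0
    for role, text in messages:
        entry = f"{role}: {text}\n"
        if total + len(entry) > DIGEST_LIMIT:
            break  # keep only whole messages that fit under the limit
        parts.append(entry)
        total += len(entry)
    return "".join(parts)
```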
Deduplication
Before storing, draft memories are compared against the 60 most recent agent memories. For each draft, the system decides:

| Action | When | Effect |
|---|---|---|
| Create | Memory is genuinely new | Stored as-is |
| Merge | Overlaps with an existing memory | Combined with the existing entry |
| Skip | Already covered by existing memory | Discarded |
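The create/merge/skip decision can be sketched with a toy similarity check. The real system's decision logic is not specified here (it may well be LLM-based); the thresholds and `difflib` comparison below are purely illustrative stand-ins.

```python
import difflib

RECENT_WINDOW = 60      # compare against the 60 most recent memories
MERGE_THRESHOLD = 0.6   # illustrative cutoff: overlapping enough to merge
SKIP_THRESHOLD = 0.9    # illustrative cutoff: effectively a duplicate

def dedupe_action(draft, recent_memories):
    """Return 'create', 'merge', or 'skip' for a draft memory."""
    best = 0.0
    for existing in recent_memories[-RECENT_WINDOW:]:
        ratio = difflib.SequenceMatcher(
            None, draft.lower(), existing.lower()
        ).ratio()
        best = max(best, ratio)
    if best >= SKIP_THRESHOLD:
        return "skip"    # already covered by an existing memory
    if best >= MERGE_THRESHOLD:
        return "merge"   # overlaps: combine with the existing entry
    return "create"      # genuinely new: store as-is
```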
Storage
Extracted memories are stored as JSON blobs in the project's .brv/ directory, tagged with source "agent". They persist across sessions and are available for future agent interactions — the agent can reference past patterns, honor stated preferences, and build on prior decisions without being told again.
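A storage record along these lines could look like the sketch below. The exact file naming and JSON schema are assumptions; only the .brv/ location and the source "agent" tag come from the text above.

```python
import json
import pathlib
import time

def store_memories(project_root, memories):
    """Write extracted memories as a JSON blob under the project's .brv/ directory."""
    brv_dir = pathlib.Path(project_root) / ".brv"
    brv_dir.mkdir(parents=True, exist_ok=True)
    record = {
        "source": "agent",          # tag described in the text
        "created_at": time.time(),
        "memories": memories,       # e.g. [{"category": "patterns", "text": "..."}]
    }
    # Hypothetical file naming; the real layout may differ.
    path = brv_dir / f"memories-{int(time.time())}.json"
    path.write_text(json.dumps(record, indent=2))
    return path
```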