Prerequisites
Ensure the following are installed and available in your PATH:
| Requirement | How to install |
|---|---|
| ByteRover CLI (brv) | ByteRover installation guide |
| Hermes Agent (hermes) | Hermes installation guide |
ByteRover CLI initialization
ByteRover CLI needs an LLM provider to power its curation and query features. Supported providers: ByteRover, OpenRouter, Anthropic, OpenAI, Google Gemini, xAI, Groq, Mistral, DeepInfra, Cohere, Together AI, Perplexity, Cerebras, Vercel, MiniMax, GLM, Moonshot AI, and OpenAI Compatible. To see all available providers, see the Hermes integration guide.
Option 1 — Interactive setup (recommended): select byterover from the list of available memory providers.
Option 2 — Manual configuration:
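As a rough sketch of what manual configuration could look like: the path $HERMES_HOME/byterover/.brv is the profile-scoped location noted on this page, but the key names and file format below are illustrative assumptions, not a documented schema.

```python
import os
import pathlib

def write_brv_config(provider: str, api_key: str) -> pathlib.Path:
    """Write a minimal ByteRover provider config (sketch only).

    $HERMES_HOME/byterover/.brv is the profile-scoped location per the
    docs; the keys written here are illustrative assumptions.
    """
    home = pathlib.Path(
        os.environ.get("HERMES_HOME", os.path.expanduser("~/.hermes"))
    )
    cfg = home / "byterover" / ".brv"
    cfg.parent.mkdir(parents=True, exist_ok=True)
    cfg.write_text(f'provider = "{provider}"\napi_key = "{api_key}"\n')
    return cfg
```

Interactive setup writes this file for you, so prefer Option 1 unless you are scripting the install.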
Features
Context Retrieval
Before each LLM call, the provider runs brv query with the user’s message and injects the results as additional context. Your agent automatically has relevant memories, past decisions, and patterns available without any manual prompting.
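The retrieval step can be sketched as follows. brv query is the real command named above, but the wrapper, function names, and prompt layout are illustrative assumptions.

```python
import subprocess

def retrieve_memories(user_message: str) -> str:
    # Run `brv query` (the command described above) and capture its
    # output; any failure simply yields no extra context.
    result = subprocess.run(
        ["brv", "query", user_message],
        capture_output=True, text=True, check=False,
    )
    return result.stdout if result.returncode == 0 else ""

def build_prompt(user_message: str, memories: str) -> str:
    # Inject retrieved memories ahead of the user's message.
    if not memories.strip():
        return user_message
    return f"Relevant memories:\n{memories}\n\nUser: {user_message}"
```

When no memories match, the user's message passes through unchanged, so retrieval never blocks a response.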
Automatic Curation
After each conversation turn, the provider runs brv curate in the background to extract and store valuable knowledge — architectural decisions, bug fixes, recurring patterns, and more. This happens asynchronously and does not add latency to responses.
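A fire-and-forget sketch of this background step. brv curate is the documented command; passing the transcript on stdin, and the injectable cmd parameter (included so the sketch can be exercised without the CLI installed), are assumptions.

```python
import subprocess

def curate_in_background(
    transcript: str,
    cmd: tuple[str, ...] = ("brv", "curate"),
) -> subprocess.Popen:
    # Start curation without waiting for it, so it adds no latency to
    # the response path. Stdin delivery of the transcript is an assumption.
    proc = subprocess.Popen(
        list(cmd),
        stdin=subprocess.PIPE,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        text=True,
    )
    proc.stdin.write(transcript)
    proc.stdin.close()
    return proc
```

Because the process is never awaited on the response path, curation cost is invisible to the user.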
Pre-Compression Flush
When the agent’s context window approaches compression, the provider extracts and stores insights before the conversation history is summarized. Nothing valuable is lost when the context is compacted.

Hermes Agent Tools
ByteRover integration exposes three tools that your agent can call directly:

| Tool | Description |
|---|---|
| brv_query | Search the persistent knowledge tree for memories, decisions, and patterns from previous sessions |
| brv_curate | Store a fact, decision, or pattern into the knowledge tree — automatically categorized and organized |
| brv_status | Check the ByteRover CLI version, tree stats, and cloud sync state |
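One way an agent runtime might dispatch these tools onto the CLI. The tool and command names come from the table above, but the argument shapes and this dispatch helper are assumptions.

```python
# Map each agent tool onto a ByteRover CLI invocation. Passing the
# query text or note as a positional argument is an assumption.
TOOL_COMMANDS = {
    "brv_query":  lambda payload: ["brv", "query", payload],
    "brv_curate": lambda payload: ["brv", "curate", payload],
    "brv_status": lambda payload: ["brv", "status"],
}

def tool_argv(tool: str, payload: str = "") -> list[str]:
    # Translate a tool call from the agent into the argv to execute.
    return TOOL_COMMANDS[tool](payload)
```

The returned argv can be handed to any process runner; brv_status ignores its payload since it takes no arguments in this sketch.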
Configuration is stored at $HERMES_HOME/byterover/.brv (profile-scoped, created automatically on first use).
LLM Providers
Connect an external provider or use the built-in LLM
Onboard Context
Learn how to seed your context tree with existing knowledge
Reference
Troubleshooting and errors
Local & Cloud
Explore local & cloud options