Query context from the context tree
This guide walks you through querying context from your context tree using agentic search. There are two ways to query your context tree:

- `/query` — run directly in the ByteRover REPL for quick, manual lookups
- `brv query` — executed by your coding agent for integration into coding workflows; can also be run manually
How Query Works
ByteRover routes every query through a 5-tier strategy, starting with the fastest path and escalating only when needed:

| Tier | Name | Speed | When It Fires |
|---|---|---|---|
| 0 | Exact cache | ~0ms | You repeat a recent query (MD5 match, 60s TTL) |
| 1 | Fuzzy cache | ~50ms | A cached query shares ≥60% token similarity |
| 2 | BM25 direct | ~100-200ms | Top search result scores high with a clear gap — no LLM needed |
| 3 | LLM pre-fetch | <5s | Top results are injected as context for a single LLM call |
| 4 | Agentic loop | 8-15s | Full multi-step reasoning: reads files, follows relations, iterates |
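The cache tiers above can be sketched in a few lines. This is a minimal illustration, not the actual implementation: the MD5 key, 60 s TTL, and ≥60% token-similarity threshold come from the table, while the Jaccard similarity metric and the cache layout are assumptions.

```python
import hashlib
import time

def token_similarity(a: str, b: str) -> float:
    """Jaccard overlap of lowercase tokens (one plausible >=60% metric)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def route(query: str, cache: dict, now=None) -> int:
    """Return the tier (0-4) that would handle the query."""
    now = time.time() if now is None else now
    key = hashlib.md5(query.encode()).hexdigest()
    # Tier 0: exact cache hit (MD5 match) within the 60 s TTL.
    if key in cache and now - cache[key]["ts"] < 60:
        return 0
    # Tier 1: any cached query with >=60% token similarity.
    for entry in cache.values():
        if token_similarity(query, entry["query"]) >= 0.6:
            return 1
    # Tiers 2-4 depend on BM25 scores and LLM calls, elided here.
    return 2

cache = {hashlib.md5(b"how does auth work").hexdigest():
         {"query": "how does auth work", "ts": 1000.0}}
assert route("how does auth work", cache, now=1030.0) == 0  # exact, fresh
assert route("how does auth work", cache, now=2000.0) == 1  # expired but similar
```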
Compound Scoring
Search results are ranked using a formula that balances text relevance, accumulated importance, and freshness. Mature topics (those in the core tier) receive a 1.15× boost, meaning well-established context surfaces above newer drafts even with slightly lower text relevance.
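One plausible shape for such a compound score is sketched below. The 1.15× core-tier boost comes from this guide; the specific weights and the exponential-decay freshness term are assumptions for illustration.

```python
def compound_score(bm25: float, importance: float, age_days: float,
                   is_core: bool,
                   w_text: float = 0.6, w_imp: float = 0.25, w_fresh: float = 0.15,
                   half_life_days: float = 30.0) -> float:
    """Blend text relevance, importance, and freshness; boost core-tier topics."""
    freshness = 0.5 ** (age_days / half_life_days)   # assumed exponential decay
    score = w_text * bm25 + w_imp * importance + w_fresh * freshness
    return score * (1.15 if is_core else 1.0)        # core-tier boost from this guide

# A mature core topic can outrank a fresher draft with higher text relevance:
core = compound_score(bm25=0.80, importance=0.9, age_days=60, is_core=True)
draft = compound_score(bm25=0.85, importance=0.2, age_days=1, is_core=False)
assert core > draft
```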
Path-Scoped Queries
You can scope queries to a specific domain or topic by including a path in the query.

Manual Query via /query

You can also query the context tree directly in the ByteRover REPL using the /query command:
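For illustration, a REPL session might look like the following. The second line shows a path-scoped query; the `@auth/jwt` path is hypothetical, following the `@domain/topic` notation this guide uses.

```
/query How do we refresh access tokens?
/query @auth/jwt What is the token expiry policy?
```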
When to Use /query vs Agent Prompts
| Approach | Best For |
|---|---|
| `/query` in REPL | Quick lookups, ad-hoc questions, exploring what’s stored |
| Agent prompts (`brv query`) | Complex workflows, multi-step tasks, when context feeds into code generation |
Use /query when you want to quickly check what knowledge is available before starting a task. Use agent prompts when the retrieved context should flow directly into your coding workflow.
What Makes This Intelligent?
Multi-tier strategy:
ByteRover doesn’t use a single retrieval method. It combines exact caching, BM25 full-text search with compound scoring, LLM-assisted pre-fetch, and full agentic reasoning, routing each query to the fastest tier that can produce a quality answer.
Follows explicit relations:
ByteRover follows the @domain/topic relations between topics to gather comprehensive, connected context.
Synthesizes information:
Instead of returning ranked documents, ByteRover reads relevant context files and synthesizes a coherent answer with citations.
Context-aware answers:
You get understanding, not just matches. ByteRover comprehends your query semantically and provides relevant, actionable information.
Out-of-domain detection:
When your query falls outside the knowledge stored in the context tree, ByteRover tells you rather than returning a low-quality guess, and suggests curating relevant knowledge first.
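The relation-following behavior described above can be pictured as a small graph traversal. This is only a sketch: the topic names are hypothetical, and how ByteRover actually stores and walks relations is an assumption.

```python
from collections import deque

# Hypothetical context tree: each topic lists the @domain/topic relations it cites.
relations = {
    "@auth/jwt": ["@auth/sessions", "@infra/redis"],
    "@auth/sessions": ["@infra/redis"],
    "@infra/redis": [],
}

def gather_context(start: str, max_hops: int = 2) -> list:
    """Breadth-first walk over explicit relations, collecting connected topics."""
    seen, order = {start}, [start]
    queue = deque([(start, 0)])
    while queue:
        topic, depth = queue.popleft()
        if depth == max_hops:
            continue
        for neighbor in relations.get(topic, []):
            if neighbor not in seen:
                seen.add(neighbor)
                order.append(neighbor)
                queue.append((neighbor, depth + 1))
    return order

assert gather_context("@auth/jwt") == ["@auth/jwt", "@auth/sessions", "@infra/redis"]
```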
For details on query tiers, compound scoring, and path-scoped queries, see How Query Works.
Multi-Step Queries
For complex tasks requiring different types of context, you can run multiple queries. For example, a prompt might ask the agent to run brv query once for architecture context and again for testing conventions before writing any code. Copy this prompt and paste it into your coding agent’s chat:

Crafting Effective Queries
The quality of your results depends on your query. Here are some tips. Specific queries work better: name the domain, the component, and the exact question rather than asking broadly.

You’re in Control of Your Queries
The brv query command is flexible and adapts to how you want to work:
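For example, both of these are valid ways to invoke it. The path-scoped form is an assumption based on the `@domain/topic` notation used earlier in this guide, and `@auth/jwt` is a hypothetical path.

```
brv query "How do we rotate API keys?"
brv query "@auth/jwt Which claims do we include in tokens?"
```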