ByteRover includes a built-in LLM with limited free credits. Connect your own provider to remove limits and choose any supported model.

Connect a provider

1. Open the provider selector

   Run /providers. ByteRover lists all providers with status indicators: (Current) for the active provider, [Connected] for previously connected ones.

2. Select and connect

   Select an unconnected provider. If it requires an API key, you'll be prompted to enter one; ByteRover validates it before saving. After connecting, a model selector opens automatically.

Select a model

1. Open the model browser

   Run /model. Models are grouped by provider, with pricing and context window size shown when available.

2. Select a model

   Use arrow keys to browse and press Enter to select. You can also type to filter models by name or provider.
Model lists are cached for 1 hour. ByteRover refreshes automatically when the cache expires.
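The 1-hour caching behavior can be sketched as a simple TTL cache. This is an illustrative sketch, not ByteRover's internals; ModelListCache and the fetch callable are hypothetical names.

```python
import time

CACHE_TTL_SECONDS = 3600  # model lists are cached for 1 hour


class ModelListCache:
    """Caches per-provider model lists and refetches once the TTL expires."""

    def __init__(self, fetch):
        self._fetch = fetch    # callable: provider_id -> list of model names
        self._entries = {}     # provider_id -> (fetched_at, models)

    def get(self, provider_id):
        entry = self._entries.get(provider_id)
        if entry is not None:
            fetched_at, models = entry
            if time.time() - fetched_at < CACHE_TTL_SECONDS:
                return models  # cache hit: entry is still fresh
        # Cache miss or expired entry: refresh from the provider.
        models = self._fetch(provider_id)
        self._entries[provider_id] = (time.time(), models)
        return models
```

Repeated lookups within the hour return the cached list without hitting the provider API again.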

Switch providers

1. Open the provider selector

   Run /providers.

2. Select a connected provider

   Select any provider marked [Connected]. ByteRover switches without re-entering your API key.

Disconnect a provider

Disconnecting removes the stored API key and reverts to ByteRover’s built-in LLM if the disconnected provider was active.
1. Open the provider selector

   Run /providers.

2. Select a connected provider

   Select a connected provider to open the actions menu.

3. Select Disconnect

   Choose Disconnect to remove the stored API key and disconnect the provider.

Local LLM setup

The openai-compatible provider connects to any OpenAI-compatible local server — Ollama, LM Studio, vLLM, or any custom endpoint. You provide the base URL when connecting.
1. Serve your model

   Make sure your local LLM server is running before connecting.

2. Connect the provider

   Run /providers and select OpenAI Compatible. ByteRover prompts you to enter the base URL of your endpoint (e.g., http://localhost:11434/v1 for Ollama). An API key prompt follows; leave it blank if your server doesn't require one.

3. Select your model

   Run /model to choose your local model.
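Before connecting, you can confirm the server is reachable at your base URL. This is a small standalone sketch (not part of ByteRover) that queries the standard OpenAI-compatible /v1/models endpoint; the function names are illustrative.

```python
import json
import urllib.request


def parse_model_ids(payload):
    """Extract model IDs from an OpenAI-compatible /v1/models response."""
    # Compatible servers return {"object": "list", "data": [{"id": "..."}, ...]}
    return [model["id"] for model in payload.get("data", [])]


def list_local_models(base_url, api_key=None):
    """Query an OpenAI-compatible endpoint and return the model IDs it serves."""
    request = urllib.request.Request(base_url.rstrip("/") + "/models")
    if api_key:  # many local servers (e.g. Ollama) accept requests without a key
        request.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(request) as response:
        return parse_model_ids(json.load(response))
```

For example, `list_local_models("http://localhost:11434/v1")` should return the models your Ollama instance has pulled.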

Supported providers

ByteRover supports 18 providers:

| Provider | ID | Default model | Get API key |
| --- | --- | --- | --- |
| ByteRover (built-in) | byterover | – | No key required |
| OpenRouter | openrouter | anthropic/claude-sonnet-4.5 | openrouter.ai/keys |
| Anthropic | anthropic | claude-sonnet-4-5-20250929 | console.anthropic.com |
| OpenAI | openai | gpt-4.1 | platform.openai.com |
| Google Gemini | google | gemini-2.5-flash | aistudio.google.com |
| xAI (Grok) | xai | grok-3 | console.x.ai |
| Groq | groq | openai/gpt-oss-120b | console.groq.com |
| Mistral | mistral | mistral-large-latest | console.mistral.ai |
| DeepInfra | deepinfra | meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | deepinfra.com |
| Cohere | cohere | command-a-03-2025 | dashboard.cohere.com |
| Together AI | togetherai | meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | api.together.ai |
| Perplexity | perplexity | sonar-pro | perplexity.ai |
| Cerebras | cerebras | gpt-oss-120b | cloud.cerebras.ai |
| Vercel | vercel | v0-1.5-md | v0.dev |
| MiniMax | minimax | MiniMax-M2 | platform.minimax.io |
| GLM (Z.AI) | glm | glm-4.7 | chat.z.ai |
| Moonshot AI (Kimi) | moonshot | kimi-k2.5 | platform.moonshot.ai |
| OpenAI Compatible | openai-compatible | llama3 | Optional (depends on endpoint) |
The following models have been verified and are recommended for optimal performance with ByteRover:
| Provider | Model |
| --- | --- |
| Anthropic | claude-opus-4.6 |
| Anthropic | claude-sonnet-4.6 |
| Anthropic | claude-3.7-sonnet |
| Anthropic | claude-haiku-4.5 |
| Anthropic | claude-3-haiku |
| Google (Gemini) | gemini-3.1-pro |
| Google (Gemini) | gemini-3.1-flash |
| Google (Gemini) | gemini-3.1-flash-lite |
| Google (Gemini) | gemini-3-pro |
| Google (Gemini) | gemini-3-flash |
| Google (Gemini) | gemini-2.5-pro |
| Google (Gemini) | gemini-2.5-flash |
| OpenAI | gpt-5.4 |
| OpenAI | gpt-5.2 |
| OpenAI | gpt-5.1 |
| OpenAI | gpt-5-mini |
| OpenAI | gpt-5 |
| OpenAI | gpt-4.5 |
| OpenAI | gpt-4.1 |
| OpenAI | gpt-4.1-nano |
| OpenAI | gpt-4o |
| OpenAI | gpt-4o-mini |
| OpenAI | o3 |
| OpenAI | o3-mini |
| ZAI | glm-5 |
| ZAI | glm-4.7 |
| ZAI | glm-4.6 |
| ZAI | glm-4.5 |
| ZAI | glm-4.5-flash |

Environment variable auto-detection

If an API key is already set in your environment, ByteRover detects it automatically — no manual entry needed when connecting.
| Provider | Environment variable(s) |
| --- | --- |
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI | OPENAI_API_KEY |
| OpenRouter | OPENROUTER_API_KEY |
| Google Gemini | GOOGLE_API_KEY, GEMINI_API_KEY |
| xAI | XAI_API_KEY |
| Groq | GROQ_API_KEY |
| Mistral | MISTRAL_API_KEY |
| DeepInfra | DEEPINFRA_API_KEY |
| Cohere | COHERE_API_KEY |
| Together AI | TOGETHER_API_KEY, TOGETHERAI_API_KEY |
| Perplexity | PERPLEXITY_API_KEY |
| Cerebras | CEREBRAS_API_KEY |
| Vercel | VERCEL_API_KEY |
| MiniMax | MINIMAX_API_KEY |
| GLM | ZHIPU_API_KEY |
| Moonshot AI | MOONSHOT_API_KEY |
| OpenAI Compatible | OPENAI_COMPATIBLE_API_KEY |
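Detection logic of this kind typically walks a provider-to-variable map and takes the first non-empty match. A minimal sketch under that assumption (PROVIDER_ENV_VARS and detect_api_key are illustrative names, and the map below is truncated):

```python
import os

# Provider ID -> environment variable(s), checked in order; first match wins.
PROVIDER_ENV_VARS = {
    "anthropic": ["ANTHROPIC_API_KEY"],
    "openai": ["OPENAI_API_KEY"],
    "google": ["GOOGLE_API_KEY", "GEMINI_API_KEY"],
    "togetherai": ["TOGETHER_API_KEY", "TOGETHERAI_API_KEY"],
    # ...remaining providers follow the same pattern
}


def detect_api_key(provider_id, env=os.environ):
    """Return the first non-empty API key found in the environment, or None."""
    for var in PROVIDER_ENV_VARS.get(provider_id, []):
        value = env.get(var, "").strip()
        if value:
            return value
    return None
```

For providers with two variables (Google Gemini, Together AI), the first listed variable takes precedence when both are set.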

Hot-swap

No restart is required when switching providers or models. When you switch, ByteRover broadcasts a provider:updated event, and agent processes pick up the new configuration at the start of their next task. If tasks are already running, the swap is deferred until all in-flight tasks complete. There are two behaviors, depending on what changed:
  • Provider changed — a new session is created. In-memory conversation history is cleared (history formats are incompatible across providers).
  • Model only changed — the session ID is reused, but in-memory conversation history is still lost on the new session manager.
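The deferred-swap behavior described above can be sketched as follows. This is an assumption-laden illustration, not ByteRover's actual API; ProviderManager, start_task, and finish_task are hypothetical names.

```python
class ProviderManager:
    """Applies provider/model changes only once no tasks are in flight."""

    def __init__(self, provider, model):
        self.provider = provider
        self.model = model
        self.history = []      # in-memory conversation history
        self._in_flight = 0    # number of currently running tasks
        self._pending = None   # deferred (provider, model) swap, if any

    def start_task(self):
        self._in_flight += 1

    def finish_task(self):
        self._in_flight -= 1
        if self._in_flight == 0 and self._pending is not None:
            self._apply(*self._pending)   # drain complete: apply deferred swap
            self._pending = None

    def on_provider_updated(self, provider, model):
        """Handler for the broadcast provider:updated event."""
        if self._in_flight > 0:
            self._pending = (provider, model)  # defer until tasks drain
        else:
            self._apply(provider, model)

    def _apply(self, provider, model):
        self.provider, self.model = provider, model
        self.history = []      # history does not survive a swap
```

Note that in either case the in-memory history is cleared, matching the two bullets above.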

Credential storage

API keys are stored in an AES-256-GCM encrypted local file — not your system keychain. Both files use 0600 permissions (owner read/write only).
| File | Purpose |
| --- | --- |
| <data-dir>/.provider-keys | Random 32-byte encryption key, rotated on each save |
| <data-dir>/provider-credentials | AES-256-GCM encrypted JSON map of provider → API key |
Non-sensitive preferences (active provider, active model, favorites, recent models) are stored in plaintext at <config-dir>/providers.json. Platform-specific paths:
| Platform | <config-dir> | <data-dir> |
| --- | --- | --- |
| macOS | ~/Library/Application Support/brv | ~/Library/Application Support/brv |
| Linux | ~/.config/brv (or $XDG_CONFIG_HOME/brv) | ~/.local/share/brv (or $XDG_DATA_HOME/brv) |
| Windows | %APPDATA%/brv | %LOCALAPPDATA%/brv |
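The key-rotation and 0600-permission handling described above can be sketched as follows. The AES-256-GCM encryption step itself is omitted here since it would rely on a third-party crypto library; write_secret_file and rotate_key are hypothetical names, not ByteRover's implementation.

```python
import os
import secrets


def write_secret_file(path, data):
    """Create or overwrite a file readable/writable only by the owner (0600)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)


def rotate_key(key_path):
    """Generate a fresh random 32-byte AES-256 key and persist it with 0600 perms."""
    key = secrets.token_bytes(32)  # 32 bytes = 256-bit key for AES-256-GCM
    write_secret_file(key_path, key)
    return key
```

Because the key is regenerated on every save, the credentials file must be re-encrypted with the new key in the same operation.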

Next steps

Quickstart

Full setup guide including provider configuration

CLI Reference

Complete reference for all brv providers and brv model commands