Cipher supports four major LLM providers through a common llm configuration schema. You can switch between providers while keeping the same functionality and memory capabilities.

Supported Providers

  1. OpenAI - GPT Models
    Supported Models: All OpenAI models including gpt-4.1, gpt-4.1-mini, etc.
  2. Anthropic - Claude Models
    Supported Models: All Anthropic Claude models including claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022, claude-3-opus-20240229, claude-3-sonnet-20240229, claude-3-haiku-20240307, claude-4-sonnet-20250514, claude-4-opus, etc.
  3. OpenRouter - 200+ Models
    Supported Models: Some candidate models include openai/gpt-4o, anthropic/claude-3.5-sonnet, google/gemini-pro-1.5, meta-llama/llama-3.1-8b-instruct, mistralai/mixtral-8x7b-instruct, and many more.
  4. Ollama - Local Models
    Supported Models: Some candidate models include llama3.1:8b, llama3.1:70b, qwen3:8b, mistral:latest, phi4-mini:3.8b, and others (example configurations follow this list).
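As a rough sketch, here is what an llm block might look like for OpenRouter and Ollama. The field names (provider, model, apiKey, maxIterations, baseURL) are the ones Cipher validates; the environment variable name for OpenRouter and the local baseURL value for Ollama are illustrative assumptions:
# OpenRouter (cloud, many models)
llm:
  provider: openrouter
  model: anthropic/claude-3.5-sonnet
  apiKey: $OPENROUTER_API_KEY      # assumed environment variable name
  maxIterations: 50

# Ollama (local, no API key required)
llm:
  provider: ollama
  model: llama3.1:8b
  baseURL: http://localhost:11434  # assumed default local Ollama endpoint
  maxIterations: 50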

Advanced Features

Dual LLM Configuration

Cipher supports configuring a separate evaluation LLM alongside the main conversation LLM:
# Main LLM for conversation
llm:
  provider: anthropic
  model: claude-3-5-haiku-20241022
  apiKey: $ANTHROPIC_API_KEY
  maxIterations: 50

# Evaluation LLM for reflection/analysis
evalLlm:
  provider: anthropic
  model: claude-3-7-sonnet-20250219
  apiKey: $ANTHROPIC_API_KEY
The evalLlm is used for reflection memory evaluation; see the Reflection Memory documentation for details.

Provider-Specific Features

Tool Calling Support
All providers support Cipher's advanced tool calling:
  • OpenAI: Native function calling with JSON schema
  • Anthropic: Tool use with structured inputs
  • OpenRouter: Provider-dependent tool support
  • Ollama: OpenAI-compatible tool calling
Message Formatting
Each provider has optimized message formatting:
  • Context Management: Intelligent conversation history
  • Image Support: Vision capabilities where available
  • Error Handling: Robust retry logic and error recovery

Configuration Validation

Cipher validates all LLM configurations at startup. Schema validation ensures (see the example after this list):
  • provider: Must be ‘openai’, ‘anthropic’, ‘openrouter’, or ‘ollama’
  • model: Must be a non-empty string
  • apiKey: Required for cloud providers (not Ollama)
  • maxIterations: Must be a positive integer (default: 50)
  • baseURL: Must be a valid URL if provided
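For example, a minimal sketch of a cloud-provider config that passes this validation (the OPENAI_API_KEY variable name is an illustrative assumption; the field names and defaults come from the schema above):
llm:
  provider: openai            # must be one of: openai, anthropic, openrouter, ollama
  model: gpt-4.1-mini         # must be a non-empty string
  apiKey: $OPENAI_API_KEY     # required for cloud providers (assumed env var name)
  maxIterations: 50           # must be a positive integer (default: 50)
  # baseURL is optional; if provided it must be a valid URL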
Error Handling:
  • Startup Validation: Catch configuration errors early
  • Runtime Retry: Automatic retry with exponential backoff
  • Graceful Fallback: Continue operation when possible

Best Practices

Provider Selection

When to choose each provider:
  • OpenAI: Latest GPT models, reliability, speed, Azure OpenAI support
  • Anthropic: Advanced reasoning, safety, long context windows
  • OpenRouter: Access to many models, cost optimization, model diversity, flexibility
  • Ollama: Privacy, no API costs, offline use, local hardware available

Performance Optimization

maxIterations Configuration:
# Conservative (faster responses)
maxIterations: 25

# Balanced (default)
maxIterations: 50

# Aggressive (complex tasks)
maxIterations: 100
Model Selection Tips (an example configuration follows this list):
  • Development: Use faster, cheaper models (gpt-4.1-mini, claude-3-haiku)
  • Production: Use more capable models (gpt-4.1, claude-4-sonnet)
  • Local Development: Use Ollama for cost-free iteration
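As a sketch of the development/production split, using models from the lists above (the OPENAI_API_KEY variable name is an illustrative assumption):
# Development: faster, cheaper model
llm:
  provider: openai
  model: gpt-4.1-mini
  apiKey: $OPENAI_API_KEY
  maxIterations: 25

# Production: more capable model
llm:
  provider: anthropic
  model: claude-4-sonnet-20250514
  apiKey: $ANTHROPIC_API_KEY
  maxIterations: 50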