Cipher uses a declarative YAML configuration system (cipher.yml) combined with environment variables for secure and flexible agent setup. All configurations are validated using strict Zod schemas at startup to ensure reliability.

Configuration Architecture

File Structure

  • Primary Config: memAgent/cipher.yml (or custom path with --agent)
  • Environment Variables: .env file for sensitive data
  • Validation: Zod schemas with strict type checking
  • Loading: Intelligent path resolution with environment variable expansion

Configuration Hierarchy

  1. Environment Variables (.env) - Highest priority
  2. YAML Configuration (cipher.yml) - Medium priority
  3. Schema Defaults - Lowest priority
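
For example, with the illustrative files below, the API key is resolved from the environment (highest priority), maxIterations is taken from the YAML file, and any field left unset falls back to its schema default:

# .env (highest priority, keep out of version control)
ANTHROPIC_API_KEY=sk-ant-...

# memAgent/cipher.yml (medium priority)
llm:
  provider: anthropic
  model: claude-3-5-haiku-20241022
  apiKey: $ANTHROPIC_API_KEY           # expanded from the environment at load time
  # maxIterations omitted -> schema default (50) applies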

Core Configuration File (cipher.yml)

Complete Structure

# ===========================================
# LLM Configuration (Required)
# ===========================================
llm:
  provider: anthropic                    # Required: 'openai', 'anthropic', 'openrouter', 'ollama'
  model: claude-3-5-haiku-20241022      # Required: Model name for the provider
  apiKey: $ANTHROPIC_API_KEY            # Required: API key via environment variable
  maxIterations: 50                     # Optional: Max iterations for agentic loops (default: 50)
  baseURL: https://api.anthropic.com/v1 # Optional: Custom API base URL

# ===========================================
# Evaluation LLM (Optional)
# ===========================================
evalLlm:
  provider: anthropic                   # Optional: Separate LLM for evaluation/reflection
  model: claude-3-7-sonnet-20250219    # Optional: Non-thinking model for evaluation
  apiKey: $ANTHROPIC_API_KEY           # Optional: Can use same or different API key

# ===========================================
# System Prompt (Required)
# ===========================================
systemPrompt: |
  You are an AI programming assistant focused on coding and reasoning tasks. You excel at:
  - Writing clean, efficient code
  - Debugging and problem-solving
  - Code review and optimization
  - Explaining complex technical concepts
  - Reasoning through programming challenges

# ===========================================
# MCP Servers Configuration (Optional)
# ===========================================
mcpServers:
  filesystem:                          # Server name (any identifier)
    type: stdio                        # Connection type: 'stdio', 'sse', 'http'
    command: npx                       # Command to launch server
    args:                              # Arguments array
      - -y
      - '@modelcontextprotocol/server-filesystem'
      - .
    env:                               # Environment variables for server
      HOME: /Users/username
    timeout: 30000                     # Connection timeout (default: 30000ms)
    connectionMode: lenient            # 'strict' or 'lenient' (default: lenient)

  web_browser:                         # Additional MCP server example
    type: stdio
    command: uvx
    args:
      - '@truffle-ai/puppeteer-server'
    timeout: 60000
    connectionMode: strict

# ===========================================
# Session Management (Optional)
# ===========================================
sessions:
  maxSessions: 100                     # Maximum concurrent sessions (default: 100)
  sessionTTL: 3600000                  # Session TTL in milliseconds (default: 1 hour)

LLM Provider Configurations

OpenAI Configuration

llm:
  provider: openai
  model: gpt-4.1-mini                  # Models: gpt-4.1, gpt-4.1-mini, o4-mini
  apiKey: $OPENAI_API_KEY
  baseURL: https://api.openai.com/v1   # Optional: for Azure OpenAI or custom endpoints
  maxIterations: 50

Anthropic Claude Configuration

llm:
  provider: anthropic
  model: claude-3-5-haiku-20241022     # Models: claude-4-sonnet, claude-3-7-sonnet, etc.
  apiKey: $ANTHROPIC_API_KEY
  maxIterations: 50

OpenRouter Configuration

llm:
  provider: openrouter
  model: openai/gpt-4.1                # Any OpenRouter model
  apiKey: $OPENROUTER_API_KEY
  maxIterations: 50
OpenRouter Model Examples:
  • openai/gpt-4.1, openai/gpt-4.1-mini
  • anthropic/claude-4-sonnet, anthropic/claude-3.5-haiku
  • google/gemini-pro-2.5
  • meta-llama/llama-3.1-8b-instruct
  • mistralai/mixtral-8x7b-instruct

Ollama Configuration (Self-Hosted)

llm:
  provider: ollama
  model: qwen3:32b                     # Local model (no API key needed)
  baseURL: $OLLAMA_BASE_URL           # Default: http://localhost:11434/v1
  maxIterations: 50
Recommended Ollama Models:
  • High Performance: qwen3:32b, llama3.1:70b, deepseek-r1:32b
  • Balanced: qwen3:8b, llama3.1:8b, hermes3:8b
  • Coding: qwen2.5-coder:32b, deepseek-coder:33b
  • Lightweight: phi4-mini:3.8b, granite3.3:2b

MCP Server Configurations

Stdio Servers (Local Processes)

mcpServers:
  filesystem:
    type: stdio
    command: npx                       # or node, python, uvx, etc.
    args:
      - -y
      - '@modelcontextprotocol/server-filesystem'
      - .
    env:
      API_KEY: $MY_API_KEY
    timeout: 30000
    connectionMode: lenient            # or strict

SSE Servers (Server-Sent Events)

mcpServers:
  sse_server:
    type: sse
    url: https://api.example.com/sse
    headers:
      Authorization: 'Bearer $TOKEN'
      User-Agent: 'Cipher/1.0'
    timeout: 30000
    connectionMode: strict

HTTP Servers (REST APIs)

mcpServers:
  http_server:
    type: http
    url: https://api.example.com
    headers:
      Authorization: 'Bearer $TOKEN'
      Content-Type: 'application/json'
    timeout: 30000
    connectionMode: lenient

Environment Variable Expansion

Syntax Support

llm:
  apiKey: $OPENAI_API_KEY              # Simple expansion
  baseURL: ${API_BASE_URL}             # Brace syntax
  model: ${MODEL_NAME:-gpt-4.1}        # With default value

Automatic Type Conversion

  • Numbers: Environment variable strings are converted to numbers where the schema expects a numeric value
  • Booleans: 'true'/'false' strings are converted to booleans
  • URLs: Values are validated as properly formatted URLs
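
For instance, assuming a variable such as MCP_TIMEOUT=60000 is defined in .env (the variable name here is only illustrative), the expanded string is converted to a number during validation:

# .env
MCP_TIMEOUT=60000                      # stored as a string in the environment

# cipher.yml
mcpServers:
  filesystem:
    type: stdio
    command: npx
    args: ['-y', '@modelcontextprotocol/server-filesystem', '.']
    timeout: $MCP_TIMEOUT              # expands to "60000", then converted to the number 60000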

Configuration Loading & Path Resolution

Default Behavior

# Uses memAgent/cipher.yml relative to package installation
cipher

Custom Configuration Paths

# Absolute path
cipher --agent /home/user/my-config.yml

# Relative to current directory  
cipher --agent ./configs/research-agent.yml

# In current directory
cipher -a my-cipher.yml

Configuration Loading Process

  1. Environment Loading: Read .env file if present
  2. Path Resolution: Resolve configuration file path
  3. YAML Parsing: Parse and validate YAML structure
  4. Environment Expansion: Replace environment variables
  5. Schema Validation: Validate against Zod schemas
  6. Type Conversion: Convert strings to appropriate types

Connection Modes

Lenient Mode (Default)

  • Failed MCP connections log warnings but don’t stop startup
  • Suitable for development and optional tools
  • Agent continues with available connections

mcpServers:
  optional_tool:
    type: stdio
    command: some-tool
    connectionMode: lenient

Strict Mode

  • Failed connections cause the application to exit
  • Ensures all required tools are available
  • Can be set per-server or globally

mcpServers:
  required_tool:
    type: stdio
    command: critical-tool
    connectionMode: strict
Global Override:
# Force all servers to use strict mode
cipher --strict

Configuration Validation

Zod Schema Validation

  • LLM Provider: Must be valid provider type
  • API Keys: Required for cloud providers (not Ollama)
  • URLs: Must be properly formatted URLs
  • Numbers: Must be positive integers where specified
  • MCP Types: Must be 'stdio', 'sse', or 'http'
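
As a hypothetical illustration, a configuration like the following would be rejected at startup, with each violation reported against its field path:

llm:
  provider: claude                     # invalid: not a supported provider type
  model: claude-3-5-haiku-20241022
  apiKey: $ANTHROPIC_API_KEY
  maxIterations: -5                    # invalid: must be a positive integer
  baseURL: not-a-url                   # invalid: must be a properly formatted URL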

Error Handling

  • Startup Validation: All configs validated before agent starts
  • Detailed Errors: Clear error messages with field paths
  • Type Safety: Runtime guarantees via strict schemas

Advanced Configuration Examples

Multi-Tool Research Agent

systemPrompt: |
  You are an advanced research assistant with access to multiple tools.
  Use the filesystem to read documents, web browser for research,
  and remember everything in your persistent memory.

llm:
  provider: anthropic
  model: claude-3-5-haiku-20241022
  apiKey: $ANTHROPIC_API_KEY
  maxIterations: 50

evalLlm:
  provider: anthropic
  model: claude-3-7-sonnet-20250219
  apiKey: $ANTHROPIC_API_KEY

mcpServers:
  filesystem:
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
    connectionMode: strict

  web_browser:
    type: stdio
    command: uvx
    args: ["@truffle-ai/puppeteer-server"]
    timeout: 60000
    connectionMode: lenient

sessions:
  maxSessions: 50
  sessionTTL: 7200000  # 2 hours

High-Performance Local Agent

systemPrompt: |
  You are a high-performance local AI assistant running on Ollama.
  Focus on privacy, speed, and local computation.

llm:
  provider: ollama
  model: qwen3:32b
  baseURL: $OLLAMA_BASE_URL
  maxIterations: 100

mcpServers:
  filesystem:
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
    connectionMode: strict

sessions:
  maxSessions: 25
  sessionTTL: 10800000  # 3 hours

Production API Agent

systemPrompt: |
  You are a production API assistant designed for high availability
  and enterprise integration.

llm:
  provider: openrouter
  model: anthropic/claude-4-sonnet
  apiKey: $OPENROUTER_API_KEY
  maxIterations: 75

mcpServers:
  database:
    type: http
    url: https://api.internal.com/database
    headers:
      Authorization: 'Bearer $DB_TOKEN'
    timeout: 15000
    connectionMode: strict

  external_api:
    type: sse
    url: https://events.external.com/stream
    headers:
      Authorization: 'Bearer $EXTERNAL_TOKEN'
    timeout: 30000
    connectionMode: lenient

sessions:
  maxSessions: 200
  sessionTTL: 1800000  # 30 minutes

Best Practices

Security

  • Never hardcode API keys in YAML files
  • Use environment variables for all secrets
  • Enable secret redaction with REDACT_SECRETS=true
  • Validate configurations before deployment
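
A minimal .env that follows these guidelines might look like this (key values are placeholders):

# .env (keep out of version control)
OPENAI_API_KEY=sk-xxxxxxxxxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxx
REDACT_SECRETS=true                    # enable secret redaction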

Performance

  • Choose appropriate models for your hardware/budget
  • Set reasonable timeouts for MCP servers
  • Use connection modes appropriately (strict for critical, lenient for optional)
  • Configure session limits based on expected usage

Reliability

  • Test configurations in development before production
  • Use schema validation to catch errors early
  • Monitor MCP connections and handle failures gracefully
  • Implement proper logging levels for debugging