Rohit Kushwaha 869407bf69 refactor: provider adapter pattern + LiteLLM support (#595)
* feat: add LiteLLM provider support and unified env resolution

Add LiteLLM as a provider option for all backends (Claude SDK, OpenAI
Agents, Google ADK, Copilot SDK), enabling access to 100+ LLM providers
including HuggingFace, Ollama, vLLM, Together AI, Groq, Mistral, and
more through a single proxy or direct SDK integration.

Key changes:
- New config fields: litellm_api_base, litellm_api_key, litellm_model
- OpenAI Agents: native LitellmModel extension with proxy fallback
- Google ADK: LiteLlm model wrapper for cross-provider support
- Claude SDK: routes through LiteLLM proxy via ANTHROPIC_BASE_URL
- Copilot SDK: LiteLLM via OpenAI-compatible BYOK config
- resolve_backend_env() pushes unified POCKETPAW_* keys to env vars
  each SDK expects, fixing the issue where switching backends required
  manually reconfiguring environment variables
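The unified env resolution above can be sketched roughly as follows. This is a hypothetical illustration, not pocketpaw's actual `resolve_backend_env()` implementation; the mapping contents and variable names are assumptions.

```python
import os

# Illustrative mapping from unified POCKETPAW_* keys to the env var
# names each SDK actually reads (entries here are examples only).
ENV_MAP = {
    "claude_sdk": {"POCKETPAW_LITELLM_API_BASE": "ANTHROPIC_BASE_URL"},
    "openai_agents": {"POCKETPAW_LITELLM_API_KEY": "OPENAI_API_KEY"},
}

def resolve_backend_env(backend: str) -> None:
    """Push unified POCKETPAW_* keys into the names the SDK expects."""
    for src, dst in ENV_MAP.get(backend, {}).items():
        value = os.environ.get(src)
        if value and dst not in os.environ:
            os.environ[dst] = value
```

With a single set of `POCKETPAW_*` variables exported, switching backends no longer requires re-exporting each SDK's own key names by hand.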

* fix: complete LiteLLM integration across dashboard, health checks, and WS

- Fix health check false warning: add 'litellm' to provider skip list
  in check_api_key_primary() for Claude SDK, OpenAI Agents, and
  Google ADK backends
- Add google_adk_provider to WS settings handler and broadcast
- Add litellm_api_base, litellm_api_key, litellm_model to WS handler
- Add 'LiteLLM (100+ Providers)' option to provider dropdowns in
  settings UI for Claude SDK and OpenAI Agents backends
- Add LiteLLM config fields (proxy URL, API key, model) shown when
  litellm provider is selected in the settings modal

* fix: add LiteLLM provider to Copilot SDK and Google ADK settings UI

- Add LiteLLM option to Copilot SDK provider dropdown
- Add provider dropdown to Google ADK settings (was missing entirely)
- Add LiteLLM config fields (proxy URL, API key, model) for Google ADK
  when litellm provider is selected

* fix: sync env vars at runtime when API keys change via dashboard

resolve_backend_env() now accepts force=True to overwrite existing env
vars. Called after every settings/API-key save so backends immediately
see updated keys without a restart. Codex CLI subprocess gets an
explicit env snapshot via env=os.environ.copy().
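The two pieces of this fix can be sketched together. Function and variable names below are illustrative, not pocketpaw's actual API: overwrite already-exported vars after a dashboard save, then hand the subprocess an explicit snapshot of the updated environment.

```python
import os
import subprocess
import sys

def sync_env(updates: dict[str, str], force: bool = True) -> None:
    """With force=True, overwrite env vars that were already exported."""
    for key, value in updates.items():
        if force or key not in os.environ:
            os.environ[key] = value

os.environ["DEMO_API_KEY"] = "sk-stale"
sync_env({"DEMO_API_KEY": "sk-new"})          # overwrites the stale value

# Explicit env snapshot so the child process sees the updated key.
proc = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_API_KEY'])"],
    env=os.environ.copy(),
    capture_output=True, text=True,
)
print(proc.stdout.strip())  # → sk-new
```

Without `force=True`, a key exported before startup would shadow the value saved in the dashboard until the process restarted.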

* refactor: provider adapter pattern for LLM providers

Extract provider-specific logic (config resolution, client creation,
env var setup, error formatting) into adapter classes under
llm/providers/. Six adapters: Anthropic, Ollama, OpenAICompatible,
OpenRouter, Gemini, LiteLLM.

LLMClient delegates to adapters internally while keeping its public
API stable. Backends (OpenAI Agents, Google ADK, Copilot SDK) now use
adapters directly, replacing 70+-line if/elif chains with ~5-line
adapter calls. Adding a new provider means adding one adapter file
and registering it; no backend changes are needed.

30 new tests for adapters, registry, model resolution, and LLMClient
delegation.

* fix: _stderr_lines UnboundLocalError and test_fast_path failures

Move _stderr_lines initialization before the try block so the except
handler can always access it. Add missing _HookMatcher and is_litellm
to test mocks.
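The UnboundLocalError fix follows the standard pattern: bind the variable before the `try` block so every path through the `except` handler can reference it. The function below is an illustrative reconstruction, not pocketpaw's actual code.

```python
def run_capturing_stderr(fn):
    _stderr_lines: list[str] = []   # initialized before the try block
    try:
        return fn(), _stderr_lines
    except Exception as exc:
        # Previously, if the failure happened before _stderr_lines was
        # assigned inside the try, this line raised UnboundLocalError.
        _stderr_lines.append(str(exc))
        return None, _stderr_lines

result, errs = run_capturing_stderr(lambda: 1 / 0)
```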

* style: format test files and fix UP038 lint

* docs: add Discord deployment config with multi-provider support

Add Docker Compose, Dockerfile, env example, and identity files for
headless Discord bot deployment. Supports direct Anthropic, LiteLLM
proxy, OpenAI-compatible, and OpenAI Agents backends out of the box.

* chore: remove Discord deploy files (moved to separate PR #597)

Deploy config now lives under deploy/discord/ in the
deploy/discord-docker branch to keep this PR focused on
the provider adapter pattern and LiteLLM integration.

* fix: LiteLLM max_tokens setting, config persistence, and health recheck

- Add litellm_max_tokens config option so users can cap output tokens
  for models like DeepSeek (8192 limit)
- Fix LiteLLM settings not persisting on reload: litellmApiBase,
  litellmModel, litellmMaxTokens were missing from SETTINGS_MAP in
  app.js, so the frontend never loaded them back from the server
- Also add 13 other missing backend settings to SETTINGS_MAP
  (openaiAgentsModel, googleAdkProvider, codexCliModel, etc.)
- Re-run health checks after settings save so switching to LiteLLM
  immediately clears the "no Anthropic API key" degraded warning
- Claude SDK fast-path respects provider max_tokens instead of
  hardcoding 1024

* fix: LiteLLM proxy routing, adapter bugs, and remove plan docs

- Fix LiteLLM + OpenAI Agents: use the OpenAI-compatible proxy path when
  base_url is set instead of the native LitellmModel, which tried to
  route directly (causing Vertex AI credential errors)
- Fix operator precedence in _to_provider_config() base_url resolution
- Fix dashboard unable to clear LiteLLM API key (falsy vs None check)
- Fix fragile hasattr duck-typing in openai_agents, add explicit
  provider == "litellm" guard
- Replace unnecessary getattr() with direct attribute access in
  google_adk now that the field exists in Settings
- Pass api_key and base_url to ADK LiteLlm wrapper explicitly
- Remove plan design doc

* docs: add OpenRouter and LiteLLM provider documentation

- Add dedicated OpenRouter and LiteLLM sections to LLM providers guide
  with configuration examples for proxy vs direct SDK mode
- Update backend compatibility matrix to include both new providers
- Update per-backend provider tables (Claude SDK, OpenAI Agents,
  Google ADK, Copilot SDK) with LiteLLM and OpenRouter support
- Add OpenRouter and LiteLLM config fields to configuration reference
- Update getting-started configuration with OpenRouter and LiteLLM
  environment variable examples
- Replace OpenRouter-only section in backends index with combined
  OpenRouter + LiteLLM section
2026-03-14 03:46:02 +05:30