Provider Setup Guide¶
Orion supports both local (self-hosted) and cloud AI providers. You can switch between them at any time through the Dashboard Settings page or the CLI.
Provider Categories¶
Orion uses four provider categories, each configurable independently:
| Category | Local Options | Cloud Options |
|---|---|---|
| LLM (Text) | Ollama (Llama 3.2, Mistral) | OpenAI (GPT-4o, GPT-4o Mini) |
| Image Generation | ComfyUI (SDXL, Flux) | DALL-E 3 |
| Video Generation | ComfyUI (AnimateDiff, SVD) | Runway Gen-3 |
| TTS (Voice) | Piper TTS, Coqui TTS | ElevenLabs, OpenAI TTS |
Switching Providers via Dashboard¶
- Open http://localhost:3001 and log in
- Navigate to Settings from the sidebar
- You will see four provider cards: LLM, Image, Video, and TTS

Each Card Shows¶
- Provider mode dropdown: Local (Ollama / ComfyUI) or Cloud (OpenAI / Replicate)
- Model dropdown: available models for the selected mode
- Connection status: green check (connected), red alert (disconnected), or spinner (checking)
- Test Connection button: click to verify the selected provider is reachable before saving
- Model Parameters accordion: expand to adjust generation settings (e.g., temperature, max tokens)
To Switch a Provider¶
- Select the desired mode from the Provider dropdown
- The Model dropdown updates to show only models available for that mode
- Select a model
- Click Save -- the configuration is applied immediately
- Optionally click Test Connection; the status indicator will update to show whether the new provider is reachable
Example: Switch LLM from Local to Cloud¶
- On the LLM (Text Generation) card, change Provider to Cloud (OpenAI / Replicate)
- Select GPT-4o from the Model dropdown
- Click Save
- The status indicator should turn green if the OPENAI_API_KEY environment variable is configured
Switching Providers via CLI¶
The CLI reports the current mode, model, and connection status for each service:

```
┌──────────────────┬──────────┬──────────────┬───────────┐
│ Service          │ Mode     │ Model        │ Status    │
├──────────────────┼──────────┼──────────────┼───────────┤
│ LLM              │ LOCAL    │ llama3.2     │ connected │
│ Image            │ LOCAL    │ sdxl         │ connected │
│ Video            │ LOCAL    │ animatediff  │ connected │
│ TTS              │ LOCAL    │ piper        │ connected │
└──────────────────┴──────────┴──────────────┴───────────┘
```
Switch to cloud providers:

```bash
# Switch LLM to OpenAI GPT-4o
orion provider switch llm --mode CLOUD --provider openai --model gpt-4o

# Switch image generation to DALL-E 3
orion provider switch image --mode CLOUD --provider openai --model dall-e-3

# Switch TTS to ElevenLabs
orion provider switch tts --mode CLOUD --provider elevenlabs --model elevenlabs

# Switch back to local
orion provider switch llm --mode LOCAL --provider ollama --model llama3.2
```
Environment Variables for Cloud Providers¶
Cloud providers require API keys. Add these to your .env file:
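As a sketch, a minimal `.env` entry set for the cloud providers above might look like the following. `OPENAI_API_KEY` is the name used elsewhere in this guide; the ElevenLabs and Runway variable names are assumptions to adapt to your deployment:

```bash
# OpenAI: used for GPT-4o (LLM), DALL-E 3 (image), and OpenAI TTS
OPENAI_API_KEY=sk-...

# ElevenLabs TTS (variable name assumed)
ELEVENLABS_API_KEY=...

# Runway Gen-3 video (variable name assumed)
RUNWAY_API_KEY=...
```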
Do not commit .env files
API keys should never be committed to version control. Use .env.example as a template and keep your .env file local. Never share API keys in chat, issues, or pull requests. Rotate keys immediately if they are accidentally exposed.
After updating .env, restart the affected services:
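For a Docker Compose deployment (Ollama runs in the Compose stack, per the section below), the restart might look like the following sketch, assuming the affected service is named `api`:

```bash
# Recreate the container so it picks up the new environment variables.
# Note: `docker compose restart` alone does NOT reload values from .env;
# the container must be recreated for the new environment to apply.
docker compose up -d --force-recreate api
```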
Local Provider Setup¶
Ollama runs as a container in the Docker Compose stack. To pull additional models:
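A sketch of pulling a model into the running container, assuming the Compose service is named `ollama` (Mistral is one of the local LLM options listed above):

```bash
# Pull a model inside the running Ollama container
docker compose exec ollama ollama pull mistral

# Verify the model is available
docker compose exec ollama ollama list
```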
When to Use Local vs Cloud¶
| Scenario | Recommended |
|---|---|
| Development and testing | Local |
| Demos without GPU hardware | Cloud |
| Production with cost constraints | Local |
| Highest quality output | Cloud |
| Offline or air-gapped environments | Local |
| Rapid prototyping | Cloud |
Next Steps¶
- Full Pipeline Demo -- End-to-end walkthrough
- Monitoring -- Track provider health in Grafana
- Configuration Reference -- All environment variables
- Analytics Guide -- Track costs by provider
- System Administration -- Service health and GPU monitoring