Integration Guide
Step-by-step setup for using Infux with OpenClaw
OpenClaw is an open-source personal AI assistant that runs locally on your machine. It connects through popular chat apps like WhatsApp, Telegram, Discord, and more. Because OpenClaw supports OpenAI-compatible APIs, you can point it at Infux to get access to 18 open-weight models with flat-rate pricing and automatic failover.
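To see what "OpenAI-compatible" means in practice: the gateway accepts the standard chat completions payload, so any client that speaks that protocol can use it. Here is a sketch of a direct request; the curl line is commented out because it needs network access and a real key, and the exact response format is assumed to follow the OpenAI convention:

```shell
# Build an OpenAI-style chat completions request body.
payload='{"model":"minimax-m2.5","messages":[{"role":"user","content":"Hello"}]}'
printf '%s\n' "$payload"

# Send it to the Infux gateway (uncomment once OPENAI_API_KEY is set):
# curl -s "https://api.infux.dev/v1/chat/completions" \
#   -H "Authorization: Bearer $OPENAI_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```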
Install OpenClaw with the official install script:
```shell
curl -fsSL https://openclaw.ai/install.sh | bash
```
Set these environment variables to point OpenClaw at the Infux gateway. Add them to your shell profile (~/.zshrc or ~/.bashrc):
```shell
export OPENAI_API_KEY="sk-infux-your_key_here"
export OPENAI_BASE_URL="https://api.infux.dev/v1"
```
Then reload your profile: source ~/.zshrc (or source ~/.bashrc, depending on your shell).
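A quick way to confirm the variables took effect before starting OpenClaw is a small sanity check like the one below. The key shown is a placeholder; substitute your real key from the dashboard. The check function is my own sketch, not part of either tool:

```shell
# Placeholder values; your shell profile should already export the real ones.
export OPENAI_API_KEY="sk-infux-your_key_here"
export OPENAI_BASE_URL="https://api.infux.dev/v1"

check_infux_env() {
  # Infux keys are expected to start with the sk-infux- prefix.
  case "$OPENAI_API_KEY" in
    sk-infux-*) ;;
    *) echo "bad key prefix"; return 1 ;;
  esac
  # The base URL must point at the Infux gateway.
  [ "$OPENAI_BASE_URL" = "https://api.infux.dev/v1" ] \
    || { echo "bad base URL"; return 1; }
  echo "env ok"
}

check_infux_env
```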
Set the model via the OPENAI_MODEL environment variable:
```shell
export OPENAI_MODEL="minimax-m2.5"   # highest SWE-bench score
```
| Use Case | Model | Why |
|---|---|---|
| Best overall | minimax-m2.5 | Highest SWE-bench score (80.2%) |
| Fast responses | deepseek-v3-0324 | Fastest time to first token (TTFT), great code quality |
| Complex tasks | kimi-k2.5 | Strong SWE-bench, large context |
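If you switch between the models above often, a small helper in your shell profile saves retyping the export. This function is my own suggestion, not part of OpenClaw or Infux:

```shell
# Switch the active model in one command and echo the result
# so you can confirm which model is now selected.
infux_model() {
  export OPENAI_MODEL="$1"
  echo "OPENAI_MODEL=$OPENAI_MODEL"
}

infux_model minimax-m2.5
```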
Start OpenClaw and send a test message through your preferred chat app. You should see responses powered by the Infux gateway.
```shell
openclaw start
```
Check the Infux dashboard to confirm requests are flowing through.
If something isn't working, check the following:

- No response: verify OPENAI_BASE_URL is set to https://api.infux.dev/v1 and that your network allows outbound HTTPS.
- Authentication errors: verify your key starts with sk-infux-. You can regenerate keys in the dashboard.
- Model errors: make sure OPENAI_MODEL is set to a valid Infux model ID. Check available models at /models.
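To see which model IDs the gateway actually exposes, you can query the /models endpoint and pull out the IDs. The sample JSON below stands in for the live response (I'm assuming the OpenAI-style { "data": [ { "id": ... } ] } shape); the commented curl line shows the real query:

```shell
# Live query (needs network access and a valid key):
#   curl -s "$OPENAI_BASE_URL/models" -H "Authorization: Bearer $OPENAI_API_KEY"
# Sample response, used here so the parsing step can be run offline:
response='{"data":[{"id":"minimax-m2.5"},{"id":"deepseek-v3-0324"},{"id":"kimi-k2.5"}]}'

# Split the JSON on braces/commas, then extract each "id" value.
printf '%s\n' "$response" | tr '{,}' '\n' | sed -n 's/.*"id":"\([^"]*\)".*/\1/p'
```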