Integration Guide
Step-by-step setup for using Infux with Codex CLI
Install the Codex CLI:

```shell
npm install -g @openai/codex
```

Set the environment variables and run:
```shell
export OPENAI_API_KEY="sk-infux-your_key_here"
export OPENAI_BASE_URL="https://api.infux.dev/v1"
codex "Refactor this function to use async/await"
```
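Before invoking `codex`, it can be worth confirming that both variables are actually exported and visible to child processes; a quick sketch (the key below is the placeholder from above, not a real key):

```shell
# Placeholder values; substitute your real key.
export OPENAI_API_KEY="sk-infux-your_key_here"
export OPENAI_BASE_URL="https://api.infux.dev/v1"

# A child shell only sees exported variables, so this catches a missing `export`.
sh -c 'test -n "$OPENAI_API_KEY" && test -n "$OPENAI_BASE_URL"' \
  && echo "environment ready" \
  || echo "variables not exported"
```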
Create a `config.toml` for a persistent multi-provider setup:
```toml
model = "deepseek-v3-0324"
model_provider = "infux"

[model_providers.infux]
name = "Infux"
base_url = "https://api.infux.dev/v1"
env_key = "INFUX_API_KEY"
wire_api = "chat"
```
Then set your key:

```shell
export INFUX_API_KEY="sk-infux-your_key_here"
```
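If you switch models often, the same config file can hold several presets. This sketch assumes your Codex CLI version supports named profiles (`[profiles.*]` tables selected with `--profile`); the profile names here are arbitrary examples:

```toml
# Hypothetical profile names; both route through the infux provider defined above.
[profiles.fast]
model = "deepseek-v3-0324"
model_provider = "infux"

[profiles.reasoning]
model = "deepseek-r1"
model_provider = "infux"
```

You would then select one at invocation time, e.g. `codex --profile reasoning "Prove this invariant holds"`.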
Alternatively, skip the config file and point the standard OpenAI variables at Infux:

```shell
export OPENAI_API_KEY="sk-infux-your_key_here"
export OPENAI_BASE_URL="https://api.infux.dev/v1"
```

This overrides the default OpenAI endpoint globally. Then just run `codex "your prompt"`.
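Exports only last for the current session. To persist the override, the two lines can be appended to your shell startup file; a sketch assuming bash (use `~/.zshrc` for zsh), written to skip the append if the lines are already present:

```shell
# Append the Infux override to the shell startup file, once. Placeholder key shown.
RC="$HOME/.bashrc"
if ! grep -qs 'OPENAI_BASE_URL="https://api.infux.dev/v1"' "$RC"; then
  {
    echo 'export OPENAI_API_KEY="sk-infux-your_key_here"'
    echo 'export OPENAI_BASE_URL="https://api.infux.dev/v1"'
  } >> "$RC"
fi
echo "configured in $RC"
```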
| Use Case | Model | Why |
|---|---|---|
| General coding | deepseek-v3-0324 | Fast, great code quality |
| Large codebases | kimi-k2.5 | 256K context for big repos |
| Reasoning tasks | deepseek-r1 | Extended thinking for hard problems |
Test your setup with a simple prompt:

```shell
codex "What is 2+2?"
```
- Ensure you are using an Infux model ID (e.g., `deepseek-v3-0324`, not `deepseek-chat`).
- Verify your API key starts with `sk-infux-` and is active in the dashboard.
- Set `model_provider = "infux"` at the top level of `config.toml` to route requests through your custom provider.
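The key-format check above is easy to script; a minimal POSIX-shell sketch (the key is the placeholder from earlier, not a real credential):

```shell
# Check that a key carries the expected Infux prefix.
key="sk-infux-your_key_here"   # placeholder; substitute your real key
case "$key" in
  sk-infux-*) echo "key format OK" ;;
  *)          echo "unexpected key prefix" ;;
esac
```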