# Integration Guide

Step-by-step setup for using Infux with Continue.
Open the command palette (Cmd+Shift+P) and run "Continue: Open Config", which opens `~/.continue/config.yaml`. Add the Infux models:

```yaml
models:
  - name: DeepSeek V3 (Infux)
    provider: openai
    model: deepseek-v3-0324
    apiBase: https://api.infux.dev/v1
    apiKey: sk-infux-your_key_here
  - name: Kimi K2.5 (Infux)
    provider: openai
    model: kimi-k2.5
    apiBase: https://api.infux.dev/v1
    apiKey: sk-infux-your_key_here
  - name: DeepSeek R1 (Infux)
    provider: openai
    model: deepseek-r1
    apiBase: https://api.infux.dev/v1
    apiKey: sk-infux-your_key_here

tabAutocompleteModel:
  name: GLM-4.7 Flash (Infux)
  provider: openai
  model: glm-4.7-flash
  apiBase: https://api.infux.dev/v1
  apiKey: sk-infux-your_key_here
```

If you prefer JSON (still supported):
```json
{
  "models": [
    {
      "title": "DeepSeek V3 (Infux)",
      "provider": "openai",
      "model": "deepseek-v3-0324",
      "apiBase": "https://api.infux.dev/v1",
      "apiKey": "sk-infux-your_key_here"
    }
  ],
  "tabAutocompleteModel": {
    "title": "GLM-4.7 Flash (Infux)",
    "provider": "openai",
    "model": "glm-4.7-flash",
    "apiBase": "https://api.infux.dev/v1",
    "apiKey": "sk-infux-your_key_here"
  }
}
```

| Use Case | Model | Why |
|---|---|---|
| Chat | deepseek-v3-0324 | Fast, great at explaining code |
| Edit | kimi-k2.5 | Strong at multi-file edits |
| Tab Autocomplete | glm-4.7-flash | Lowest latency, 200K context |
| Agent mode | kimi-k2.5-thinking | Best reasoning for complex tasks |
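Each row of the table maps onto a config entry with the same five fields shown in the YAML/JSON above. As a minimal sketch (the `model_entry` helper and the derived display names are illustrative, not part of Continue), the mapping can be expressed programmatically:

```python
# Use-case-to-model mapping, taken from the table above.
USE_CASE_MODELS = {
    "chat": "deepseek-v3-0324",
    "edit": "kimi-k2.5",
    "autocomplete": "glm-4.7-flash",
    "agent": "kimi-k2.5-thinking",
}

def model_entry(use_case: str, api_key: str) -> dict:
    """Build a Continue-style model entry for the given use case.

    Field names mirror the YAML/JSON configs above; display names
    here are derived from the model ID for brevity.
    """
    model = USE_CASE_MODELS[use_case]
    return {
        "name": f"{model} (Infux)",
        "provider": "openai",  # works with any OpenAI-compatible API
        "model": model,
        "apiBase": "https://api.infux.dev/v1",
        "apiKey": api_key,
    }

print(model_entry("autocomplete", "sk-infux-your_key_here")["model"])
# → glm-4.7-flash
```

All four entries differ only in `model` and display name; the `apiBase` and `provider` stay constant.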
To test the setup, press Cmd+L to open Continue chat.

Troubleshooting:

- Models not appearing: ensure `provider` is set to `"openai"` (not `"openai-compatible"`). Continue's `openai` provider works with any OpenAI-compatible API.
- Autocomplete not working: verify the `tabAutocompleteModel` section is present, then check the Continue logs: Cmd+Shift+P → "Continue: View Logs".
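To rule out key or endpoint problems independently of Continue, you can query the API's model list directly. A minimal sketch using only the Python standard library (the `/models` path follows the OpenAI convention that OpenAI-compatible APIs are expected to expose; `list_models` is an illustrative helper, not part of Continue or Infux):

```python
import json
import urllib.request

API_BASE = "https://api.infux.dev/v1"
API_KEY = "sk-infux-your_key_here"  # placeholder: substitute your real key

def list_models(api_base: str = API_BASE, api_key: str = API_KEY) -> list:
    """Return the model IDs visible to this key via the OpenAI-style
    GET /models endpoint."""
    req = urllib.request.Request(
        f"{api_base}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("data", [])]
```

Calling `list_models()` with a valid key should return the model IDs used in the config above. An HTTP 401 points to a bad `apiKey`; a connection error points to a wrong `apiBase` — either would also leave Continue's model dropdown empty.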