Infux

Model Registry

17 open-weight models

All models are MIT or Apache 2.0 licensed. Zero vendor lock-in. Every model routes across multiple providers with automatic failover.

| Model | Description | Context | Providers | Slug | SWE-bench |
|-------|-------------|---------|-----------|------|-----------|
| MiniMax M2.5 (Featured) | Top SWE-bench; matches Opus quality | 200K | 2 | `minimax-m2.5` | 80.2% |
| GLM-5 (Featured) | Zhipu's frontier model | 200K | 1 | `glm-5` | 77.8% |
| Kimi K2.5 (Featured) | 256K context, strong reasoning | 256K | 1 | `kimi-k2.5` | 76.8% |
| GLM-4.7 | Balanced speed & quality | 200K | 1 | `glm-4.7` | 73.8% |
| Devstral 2 | Mistral's dev-focused model | 128K | 1 | `devstral-2` | 72.2% |
| Llama 4 Maverick | Meta's flagship coding model | 128K | 2 | `llama-4-maverick` | 50.1% |
| Qwen3 235B | Alibaba's large coding model | 128K | 1 | `qwen3-235b` | |
| Mistral Large | Mistral's largest model | 128K | 1 | `mistral-large` | |
| DeepSeek V3 | Fastest TTFT, excellent quality | 128K | 3 | `deepseek-v3-0324` | 42.1% |
| GLM-4.7 Flash | 200K context, very fast | 200K | 1 | `glm-4.7-flash` | |
| Llama 4 Scout | Meta's efficient scout model | 128K | 2 | `llama-4-scout` | |
| Mistral Small | Compact and capable | 128K | 1 | `mistral-small` | |
| Qwen3 32B | Alibaba's fast coding model | 128K | 1 | `qwen3-32b` | |
| GLM-4.7 FlashX | Extended GLM flash variant | 200K | 1 | `glm-4.7-flashx` | |
| Kimi K2.5 Thinking (Featured) | Best reasoning SWE-bench | 256K | 1 | `kimi-k2.5-thinking` | 76.8% |
| DeepSeek R1 | Chain-of-thought reasoning | 128K | 2 | `deepseek-r1` | 49.2% |
| Qwen3 235B Thinking | Extended thinking mode | 128K | 1 | `qwen3-235b-thinking` | |
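The multi-provider routing with automatic failover mentioned above can be sketched roughly as follows. This is an illustrative model only, not the registry's actual implementation: the provider names, the `ROUTES` table contents beyond the slugs listed here, and the `call_provider` interface are all assumptions. The idea is simply that each model slug maps to an ordered list of providers, and a failed call falls through to the next provider.

```python
class ProviderError(Exception):
    """Raised when a single provider fails to serve a request."""


# Illustrative routing table: model slug -> ordered provider list.
# Provider names are hypothetical placeholders.
ROUTES = {
    "minimax-m2.5": ["provider-a", "provider-b"],
    "deepseek-v3-0324": ["provider-a", "provider-b", "provider-c"],
}


def call_provider(provider: str, slug: str, prompt: str) -> str:
    # Stand-in for a real HTTP call to an upstream provider.
    # Here we simulate one flaky provider to exercise the failover path.
    if provider == "provider-a":
        raise ProviderError(f"{provider} unavailable")
    return f"[{provider}] completion for {slug}"


def complete(slug: str, prompt: str) -> str:
    """Try each provider for the slug in order; fall through on failure."""
    errors = []
    for provider in ROUTES.get(slug, []):
        try:
            return call_provider(provider, slug, prompt)
        except ProviderError as exc:
            errors.append(str(exc))  # record and try the next provider
    raise RuntimeError(f"all providers failed for {slug}: {errors}")


print(complete("minimax-m2.5", "hello"))  # provider-a fails, provider-b serves it
```

In this sketch the failover is purely sequential; a production router would also weigh latency, cost, and provider health when ordering the list.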