Mistral AI API Pricing (April 2026) — Per-Token Costs

How much does Mistral AI cost? Mistral AI API pricing ranges from $0.04 per million tokens for Ministral 3B to $6.00 per million output tokens for Mistral Large 2 and Pixtral Large. Mistral Large 2 is priced at $2.00 / $6.00 per 1M tokens, Mistral Small 3 at $0.10 / $0.30, and Codestral at $0.30 / $0.90. Ministral 8B ($0.10 / $0.10) and Ministral 3B ($0.04 / $0.04) target edge deployments, while Mixtral 8x22B and Pixtral Large match the $2 / $6 flagship tier.

Key facts about Mistral AI pricing

  • Mistral Large 2 flagship priced at $2.00 / $6.00 per 1M tokens — 60% cheaper on output than GPT-5.4.
  • Mistral Small 3 at $0.10 / $0.30 per 1M tokens directly undercuts GPT-5.4 mini.
  • Codestral code-specialist at $0.30 / $0.90 per 1M tokens with fill-in-the-middle support.
  • Ministral 3B is the cheapest API model at $0.04 / $0.04 per 1M tokens, designed for edge and on-device deployment.
  • Mixtral 8x22B sparse mixture-of-experts model at $2.00 / $6.00 per 1M tokens.
  • Pixtral Large multimodal flagship at $2.00 / $6.00 per 1M tokens, competitive with GPT-5.4 vision.
  • Mistral Large 2 and Small 3 ship with a 128K token context window; Codestral with 32K.
  • Several Mistral models ship under Apache 2.0 or Mistral Research License for self-hosting in addition to API access.
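The per-1M-token rates above translate into per-request costs with simple arithmetic. A minimal sketch, using only the prices listed in this article (the price table and model keys below are this page's data, not an official Mistral SDK):

```python
# Per-1M-token API prices (USD, input/output) as listed above; update if Mistral revises them.
PRICES = {
    "mistral-large-2": (2.00, 6.00),
    "mistral-small-3": (0.10, 0.30),
    "codestral":       (0.30, 0.90),
    "ministral-8b":    (0.10, 0.10),
    "ministral-3b":    (0.04, 0.04),
    "mixtral-8x22b":   (2.00, 6.00),
    "pixtral-large":   (2.00, 6.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the listed per-1M-token rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 2,000-token prompt with an 800-token reply on Mistral Large 2:
cost = request_cost("mistral-large-2", 2_000, 800)
print(f"${cost:.4f}")  # 2000*2.00/1e6 + 800*6.00/1e6 = $0.0088
```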

How much does each Mistral model cost per million tokens?

  Model            Input ($/1M)   Output ($/1M)
  Mistral Large 2     $2.00          $6.00
  Mistral Small 3     $0.10          $0.30
  Codestral           $0.30          $0.90
  Ministral 8B        $0.10          $0.10
  Ministral 3B        $0.04          $0.04
  Mixtral 8x22B       $2.00          $6.00
  Pixtral Large       $2.00          $6.00

All 7 tracked models · USD per 1M tokens

Why choose Mistral over OpenAI or Anthropic?

Mistral AI's positioning comes down to three things: European data residency, open-weight availability, and aggressive mid-tier pricing. On pricing, Mistral Large 2 at $2.00 / $6.00 per 1M tokens undercuts GPT-5.4 ($2.50 / $15.00) by 20% on input and 60% on output — output being the most common cost bottleneck in production chat workloads. Mistral Small 3 at $0.10 / $0.30 per 1M is 7.5x cheaper than GPT-5.4 mini ($0.75 / $4.50) on input and 15x cheaper on output.
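For a concrete monthly bill, apply the rates quoted above to a sample workload. The 100M-input / 20M-output-token volume here is an illustrative assumption, not a benchmark:

```python
# Monthly bill at the per-1M-token rates quoted above (USD).
# The 100M-in / 20M-out monthly volume is an illustrative assumption.
def monthly_cost(in_rate, out_rate, in_millions, out_millions):
    return in_rate * in_millions + out_rate * out_millions

mistral = monthly_cost(2.00, 6.00, 100, 20)   # Mistral Large 2
gpt     = monthly_cost(2.50, 15.00, 100, 20)  # GPT-5.4
print(mistral, gpt, f"{1 - mistral / gpt:.0%}")  # 320.0 550.0 42%
```

At this input-heavy mix the total saving is ~42%; the more output-heavy the workload, the closer the saving climbs toward the 60% output-rate gap.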

On openness, Mistral ships many models (Ministral 8B, Mixtral, Codestral Mamba) under permissive Apache 2.0 or research licenses — you can self-host if API costs ever become the bottleneck. This is a hedge that neither OpenAI nor Anthropic provides. On residency, la Plateforme runs entirely in EU data centers, which matters for GDPR-sensitive workloads where routing traffic through US providers is a compliance problem.

When Mistral is not the right choice

Mistral Large 2 still trails GPT-5.4 and Claude 3.5 Sonnet on the hardest reasoning benchmarks and on long-context recall above 64K. If your workload depends on frontier reasoning quality, staying with OpenAI or Anthropic is usually worth the premium. Mistral also has no prompt-caching discount as of April 2026, unlike Anthropic's 90% cached-read discount.
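The caching gap is easy to quantify. A sketch of the effective input rate when a fraction of each prompt is served from a provider-side cache; the 80% cached fraction is an illustrative assumption, and the rival's $2.50 base rate with a 90% cached-read discount mirrors the Anthropic-style discount mentioned above:

```python
# Effective input cost per 1M tokens when part of the prompt is a cache read.
# Mistral has no caching discount (flat rate); a provider with a 90%
# cached-read discount pays 10% of the base rate on cached tokens.
def effective_input_rate(base_rate, cached_fraction, cached_discount=0.0):
    cached_rate = base_rate * (1 - cached_discount)
    return (1 - cached_fraction) * base_rate + cached_fraction * cached_rate

mistral = effective_input_rate(2.00, cached_fraction=0.8)                       # flat
rival   = effective_input_rate(2.50, cached_fraction=0.8, cached_discount=0.9)  # 90% off cached reads
print(mistral, rival)  # 2.0 0.7
```

With a heavily repeated system prompt, a nominally pricier provider can therefore land well below Mistral's flat $2.00 input rate.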

Price history

Track how Mistral AI API pricing has changed over time.

Tracked models: Mistral Large, Mistral Small, Codestral, Ministral 8B, Ministral 3B, Mixtral 8x22B, Pixtral Large

No history yet for any model — first snapshot 2026-04-16. Price trends will appear here as data accumulates.

Price history tracking started April 2026. Charts will appear after the first price change is detected.

Frequently asked questions

How much does Mistral AI charge per token?

Mistral AI pricing ranges from $0.04 per million tokens for Ministral 3B up to $6.00 per million output tokens for Mistral Large 2. The flagship Mistral Large 2 costs $2.00 per million input tokens and $6.00 per million output tokens, while Mistral Small 3 is a fraction of that at $0.10 input / $0.30 output per 1M tokens.

Does Mistral have a free tier?

Mistral offers a free experimentation tier via la Plateforme for prototyping with rate-limited access to their open-weight and smaller commercial models. Production paid tiers are pay-as-you-go per token with no monthly minimums. Ministral 3B is also free to self-host under the Mistral Research License for non-commercial use.

How does Mistral compare to OpenAI?

Mistral Large 2 at $2.00 / $6.00 per 1M tokens is roughly 20% cheaper on input and 60% cheaper on output than GPT-5.4 ($2.50 / $15.00 per 1M). Mistral Small 3 at $0.10 / $0.30 directly undercuts GPT-5.4 mini ($0.75 / $4.50). Mistral models also ship with Apache 2.0 or research licenses for several tiers, which OpenAI does not offer.

What's Codestral's pricing?

Codestral — Mistral's code-specialist model — is priced at $0.30 per million input tokens and $0.90 per million output tokens. Against GPT-5.4's $2.50 / $15.00 rates, that is more than 8x cheaper on input and over 16x cheaper on output for code completion and refactoring workloads, and it ships with a 32K context window and fill-in-the-middle support tuned for IDE integrations.
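Fill-in-the-middle means the model completes code between a `prompt` (text before the cursor) and a `suffix` (text after it). A minimal sketch of what a FIM request body looks like, based on the general shape of Mistral's FIM completion endpoint — check the current API reference before relying on exact field names:

```python
import json

# Illustrative fill-in-the-middle request body for Codestral.
# Field names follow the general shape of Mistral's FIM endpoint;
# verify against the official API reference before use.
payload = {
    "model": "codestral-latest",
    "prompt": "def fib(n: int) -> int:\n    ",  # code before the cursor
    "suffix": "\n\nprint(fib(10))",             # code after the cursor
    "max_tokens": 64,
    "temperature": 0.0,
}
body = json.dumps(payload)
# Send with: POST https://api.mistral.ai/v1/fim/completions
# plus an "Authorization: Bearer <MISTRAL_API_KEY>" header.
print(len(body) > 0)
```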

Which Mistral model is cheapest?

Ministral 3B is the cheapest Mistral API model at $0.04 per million input tokens and $0.04 per million output tokens — effectively $0.08 round-trip per 1M tokens. Ministral 8B follows at $0.10 / $0.10 per 1M. Both are designed for on-device and edge deployments but are also hosted on la Plateforme.
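At $0.04 flat, batch jobs become cheap enough to estimate on a napkin. A sketch using illustrative workload numbers (10,000 documents at ~500 input and 20 output tokens each — assumptions, not measurements):

```python
# Batch cost at Ministral 3B's flat $0.04 per 1M tokens (input and output alike).
# Document count and token sizes are illustrative assumptions.
docs, in_tok, out_tok = 10_000, 500, 20
rate = 0.04  # USD per 1M tokens
cost = docs * (in_tok + out_tok) * rate / 1_000_000
print(f"${cost:.3f}")  # $0.208 for the whole 10,000-document batch
```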

What's Mistral Large's context window?

Mistral Large 2 supports a 128K token context window, matching GPT-5.4 and Claude 3.5 Sonnet. Mistral Small 3 also ships with 128K context. Codestral is tuned for 32K context, optimized for typical repository-level code completion tasks.

Methodology

Pricing sourced from https://mistral.ai/pricing on . All prices expressed in USD per 1 million tokens. We track pricing across 7 Mistral models covering flagship, efficient, code, edge, and multimodal tiers.