OpenAI GPT-5.5 Launches: $5/$30 Pricing, 1M Context

OpenAI's GPT-5.5 is live: $5 input / $30 output per 1M tokens, 1.05M-token context, and a new GPT-5.5 Pro tier at $30/$180. What it means for API buyers.

By AI Pricing Guru Editorial Team

OpenAI just shipped GPT-5.5, and the pricing story is immediately clear: this is not a quiet GPT-5.4 refresh. The new flagship lands at $5.00 per million input tokens and $30.00 per million output tokens, exactly 2x GPT-5.4 on both sides.

OpenAI also launched GPT-5.5 Pro at $30 input / $180 output per 1M tokens, keeping the same top-end rate card as GPT-5.4 Pro but attaching it to the new model generation.

If you buy AI on token economics, here is the short version: GPT-5.5 is a capability bet, not a price cut.

GPT-5.5 Pricing (April 2026)

Model         Input ($/1M)  Cached input ($/1M)  Output ($/1M)  Notes
GPT-5.5       $5.00         $0.50                $30.00         New standard flagship
GPT-5.5 Pro   $30.00        —                    $180.00        Higher-precision premium tier
GPT-5.4       $2.50         $0.25                $15.00         Previous flagship, still active
GPT-5.4 mini  $0.75         $0.075               $4.50          Best mainstream value

For the full current OpenAI rate card, see our updated OpenAI pricing page. If you want to model your own spend, plug the numbers into the token cost calculator.
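If you would rather model it yourself, the table above reduces to a few lines of arithmetic. This is an illustrative sketch using the rates as listed (the model names and the `request_cost` helper are ours, not an OpenAI API):

```python
# Rates from the table above, in dollars per 1M tokens (illustrative model keys).
PRICES = {
    "gpt-5.5":      {"input": 5.00,  "cached": 0.50,  "output": 30.00},
    "gpt-5.4":      {"input": 2.50,  "cached": 0.25,  "output": 15.00},
    "gpt-5.4-mini": {"input": 0.75,  "cached": 0.075, "output": 4.50},
}

def request_cost(model, input_tokens, output_tokens, cached_tokens=0):
    """Dollar cost of one request; cached_tokens bill at the cached-input rate."""
    p = PRICES[model]
    fresh = input_tokens - cached_tokens
    return (fresh * p["input"]
            + cached_tokens * p["cached"]
            + output_tokens * p["output"]) / 1_000_000

# A 20K-token prompt with a 2K-token reply on GPT-5.5:
print(round(request_cost("gpt-5.5", 20_000, 2_000), 4))  # → 0.16
```

Swap in your own traffic shape; the cached-input discount is where agent loops and repeated system prompts claw back most of the 2x premium.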

What changed besides price?

According to OpenAI’s model docs, GPT-5.5 ships with a 1,050,000-token context window and 128,000 max output tokens. That is a major jump from GPT-5.4’s 270K context and enough to matter for repo-scale coding, research synthesis, and multi-document agent workflows.

Two pricing details matter in practice:

  1. Cached input is still heavily discounted on standard GPT-5.5 — down to $0.50/M, which preserves OpenAI’s strong economics for repeated prompts and agent loops.
  2. Very long prompts get more expensive. OpenAI says prompts above 272K input tokens are charged at 2x input and 1.5x output for the full session on GPT-5.5. If you run long-context pipelines, price-test them before you migrate.
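The surcharge in point 2 is easy to underestimate because it applies to the whole request, not just the tokens past the threshold. A sketch of that rule as described above (the threshold and multipliers are taken from OpenAI's stated policy; the function itself is our illustration):

```python
# Long-context surcharge as described: prompts over 272K input tokens
# bill the ENTIRE request at 2x input / 1.5x output (GPT-5.5 base rates).
LONG_CONTEXT_THRESHOLD = 272_000

def gpt55_cost(input_tokens, output_tokens, input_rate=5.00, output_rate=30.00):
    """Dollar cost of one GPT-5.5 request, applying the long-context multipliers."""
    over = input_tokens > LONG_CONTEXT_THRESHOLD
    in_mult, out_mult = (2.0, 1.5) if over else (1.0, 1.0)
    return (input_tokens * input_rate * in_mult
            + output_tokens * output_rate * out_mult) / 1_000_000

# Crossing the threshold more than doubles the bill for a similar prompt:
print(round(gpt55_cost(270_000, 8_000), 2))  # → 1.59  (under threshold)
print(round(gpt55_cost(300_000, 8_000), 2))  # → 3.36  (over threshold)
```

Note the cliff: a prompt that grows from 270K to 300K tokens is about 11% longer but costs over 2x as much, which is why long-context pipelines deserve a price test before migration.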

Is GPT-5.5 worth 2x GPT-5.4?

That depends on the workload.

Use GPT-5.5 when:

  • coding quality and multi-step reasoning are on the critical path
  • you actually need 1M-scale context
  • you are replacing manual analyst or engineer hours, not just cheap generation

Stick with GPT-5.4 or GPT-5.4 mini when:

  • you care more about price-performance than absolute frontier quality
  • your prompts are nowhere near 272K tokens
  • you run large production volumes where a 2x flagship jump meaningfully changes margin

The likely winner for many teams is still GPT-5.4 mini: cheap enough for broad deployment, strong enough for most product workloads, and far easier to scale economically.
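To see why volume favors the mini tier, a back-of-envelope monthly comparison using the published rates (the request volume and token counts below are illustrative assumptions, not benchmarks):

```python
# Monthly spend for 10M requests at 1K input / 300 output tokens each,
# at the $/1M rates from the pricing table (volumes are illustrative).
def monthly_cost(rate_in, rate_out, requests=10_000_000, in_tok=1_000, out_tok=300):
    """Total dollar spend per month for a fixed request profile."""
    return requests * (in_tok * rate_in + out_tok * rate_out) / 1_000_000

print(monthly_cost(5.00, 30.00))  # GPT-5.5      → 140000.0
print(monthly_cost(0.75, 4.50))   # GPT-5.4 mini → 21000.0
```

At this profile the flagship costs roughly 6.7x the mini per month, so GPT-5.5 only pencils out where the extra quality on each call is worth real money.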

Competitive read

At $5/$30, GPT-5.5 moves closer to Anthropic’s premium pricing psychology than OpenAI’s old value-led flagship stance. Claude Opus 4.7 still looks cheaper on output at $25/M, while GPT-5.4 remains the cheaper OpenAI alternative if you want frontier quality without the GPT-5.5 premium.

That gives AI buyers a cleaner segmentation:

  • GPT-5.5 = pay up for OpenAI’s newest frontier model
  • GPT-5.4 = cheaper high-end default
  • GPT-5.4 mini = best-value production workhorse
  • GPT-4.1 nano / mini = routing, extraction, and other low-cost infrastructure calls

Bottom line

GPT-5.5 is important because it resets the top of OpenAI’s stack, not because it makes OpenAI cheaper. If the quality jump is real, the economics will work for high-value coding and professional tasks. If not, GPT-5.4 mini will keep eating volume.

We’ll keep tracking the live rates on our OpenAI pricing page and across the full AI pricing comparison table. For the deeper buy decision, see GPT-5.5 vs GPT-5.4: Is GPT-5.5 worth 2x the cost?