
Claude Opus 4.7 Launches: $5/$25 Pricing, 87.6% SWE-bench Verified, Beats GPT-5.4 and Gemini 3.1

Anthropic released Claude Opus 4.7 on April 16, 2026. Same $5/$25 per million token pricing as Opus 4.6 but with a 13% coding uplift, 3.26x vision resolution, and a new xhigh effort level. Here's what changed and what it costs.

By AI Pricing Guru Editorial Team

Anthropic released Claude Opus 4.7 today (April 16, 2026), narrowly retaking the lead as the most powerful generally available frontier model. The headline for buyers is simple: pricing is unchanged at $5.00 per million input tokens and $25.00 per million output tokens — but the model is meaningfully better on every benchmark Anthropic published.

If you’re already paying for Opus 4.6, there is almost no reason not to migrate today. If you’ve been on GPT-5.4 or Gemini 3.1 Pro for coding, it’s worth a re-evaluation.

Here is what changed, what it costs, and how it compares.

Opus 4.7 Pricing (April 2026)

| Tier | Price per 1M tokens |
| --- | --- |
| Input | $5.00 |
| Cached input | $0.50 (90% discount) |
| Output | $25.00 |
| Batch API | 50% off standard rates |
| Context window | 200K (1M beta available) |

This is identical to Opus 4.6 pricing from February 2026. Anthropic is holding price-per-token steady and pushing the value proposition through capability upgrades.

For full rates across every Claude tier (Opus, Sonnet, Haiku, plus legacy models), see our Anthropic Claude API pricing page. You can also plug Opus 4.7 into our token cost calculator to model your monthly spend.
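As a rough sketch of how the rates above combine into a monthly bill (the token volumes below are hypothetical, and it's an assumption here that the 50% batch discount applies uniformly across all token types):

```python
# Rough monthly cost model for Claude Opus 4.7 at the published rates.
# All token volumes are illustrative, not from the article.

INPUT_PER_M = 5.00         # $ per 1M input tokens
CACHED_INPUT_PER_M = 0.50  # $ per 1M cached input tokens (90% discount)
OUTPUT_PER_M = 25.00       # $ per 1M output tokens
BATCH_DISCOUNT = 0.50      # batch API: 50% off standard rates (assumed uniform)

def monthly_cost(input_m, cached_m, output_m, batch=False):
    """Dollar cost for token volumes given in millions of tokens."""
    cost = (input_m * INPUT_PER_M
            + cached_m * CACHED_INPUT_PER_M
            + output_m * OUTPUT_PER_M)
    return cost * (BATCH_DISCOUNT if batch else 1.0)

# Example: 40M fresh input, 100M cached input, 15M output per month
print(monthly_cost(40, 100, 15))              # 625.0 (interactive traffic)
print(monthly_cost(40, 100, 15, batch=True))  # 312.5 (same volume via batch)
```

Note how heavily cache hits move the bill: the 100M cached tokens above cost $50 instead of the $500 they would at the fresh-input rate.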

What’s New in Opus 4.7

1. A 13% lift on Anthropic’s internal coding benchmark. On a 93-task coding evaluation, Opus 4.7 solved 13% more tasks than Opus 4.6 — including four tasks neither Opus 4.6 nor Sonnet 4.6 could handle.

2. 87.6% on SWE-bench Verified. This is a substantial gain over Opus 4.6 and puts Opus 4.7 ahead of every other generally available model on real-world software engineering tasks.

3. 64.3% on SWE-bench Pro (vs 53.4% for Opus 4.6). A 10.9-point jump on the harder, production-grade version of the benchmark.

4. 3.26x higher vision resolution (by total pixel count). Opus 4.7 accepts images up to 2,576 pixels on the long edge (roughly 3.75 megapixels), up from 1,568 pixels (1.15 megapixels) on Opus 4.6. Dense diagrams, design mockups, and screenshots now come through closer to native fidelity. On visual navigation benchmarks without tools, Opus 4.7 scores 79.5% vs 57.7% for Opus 4.6.

5. A new xhigh effort level. Opus 4.7 introduces a fifth reasoning tier — sitting between high and max — that Anthropic recommends as the default for coding and agentic use cases.

6. 14% improvement on multi-step agentic workflows while using fewer tokens and producing one-third the tool-call errors compared to Opus 4.6.

7. Document reasoning jumps from 57.1% to 80.6% on OfficeQA Pro. If you process contracts, spreadsheets, or slide decks, this is the biggest single-release improvement in any frontier model for this category.

8. Literal instruction following. Opus 4.7 executes prompts as written rather than “reading between the lines.” This is a double-edged sword — older prompt templates that relied on Claude’s interpretive behavior may need updating.
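The practical effect of the resolution change in point 4 is easy to sketch: a common 2,560-pixel-wide screenshot had to be downscaled for Opus 4.6 but fits under the new cap. (The helper below is illustrative and assumes the limit applies to the longer edge, as described above.)

```python
# How much an image must shrink to fit each model's long-edge cap.
OPUS_46_MAX_EDGE = 1568  # pixels, Opus 4.6
OPUS_47_MAX_EDGE = 2576  # pixels, Opus 4.7

def downscale_factor(width, height, max_edge):
    """Scale factor (<= 1.0) needed to fit the long edge under max_edge."""
    long_edge = max(width, height)
    return min(1.0, max_edge / long_edge)

# A 2560x1440 screenshot:
print(downscale_factor(2560, 1440, OPUS_46_MAX_EDGE))  # 0.6125 (shrinks under 4.6)
print(downscale_factor(2560, 1440, OPUS_47_MAX_EDGE))  # 1.0 (fits as-is under 4.7)
```

Under Opus 4.6 that screenshot would lose almost 40% of its linear resolution before the model ever saw it, which is where small UI text and dense diagram labels go to die.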

How Opus 4.7 Compares to GPT-5.4 and Gemini 3.1 Pro

| Model | Input ($/1M) | Output ($/1M) | SWE-bench Verified | Notes |
| --- | --- | --- | --- | --- |
| Claude Opus 4.7 | $5.00 | $25.00 | 87.6% | New leader on coding and agentic benchmarks |
| GPT-5.4 | $2.50 | $15.00 | ~82% | Tied for cheapest flagship; still strong on general reasoning |
| Gemini 3.1 Pro | $2.50 | $15.00 | ~80% | Tied for cheapest flagship; strong on vision and long context |
| Claude Opus 4.6 (legacy) | $5.00 | $25.00 | ~84% | Previous flagship, now superseded at the same price |

Opus 4.7 costs twice as much as GPT-5.4 on input and 67% more on output. The premium buys you a measurable lead on code generation, agent reliability, and document reasoning. For bulk text processing or chat-style workloads, GPT-5.4 remains the better price-performance pick.
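Using the rates in the table, a quick sketch of what the premium looks like on a concrete (hypothetical) workload:

```python
# Cost of one workload at each model's published rates ($/1M tokens).
RATES = {
    "Claude Opus 4.7": (5.00, 25.00),
    "GPT-5.4": (2.50, 15.00),
    "Gemini 3.1 Pro": (2.50, 15.00),
}

def workload_cost(model, input_m, output_m):
    """Dollar cost for a workload of input_m / output_m million tokens."""
    inp, out = RATES[model]
    return input_m * inp + output_m * out

# Example: 10M input tokens, 2M output tokens
print(workload_cost("Claude Opus 4.7", 10, 2))  # 100.0
print(workload_cost("GPT-5.4", 10, 2))          # 55.0
```

On this mix the effective premium is about 1.8x, between the 2x input and 1.67x output multiples; output-heavy workloads land closer to 1.67x, input-heavy ones closer to 2x.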

We cover this head-to-head in detail in our Claude Opus 4.7 vs GPT-5.4 vs Gemini 3.1 Pro comparison.

Should You Upgrade from Opus 4.6?

Yes — with almost no caveat. Same price, measurably better model, same API surface. Run your evals, swap the model string, and ship.

The only reason to stay on 4.6 for now is if you’ve heavily tuned prompts against its interpretive style and don’t have time to revalidate against 4.7’s more literal instruction-following. That’s a week of prompt review for most teams.
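The cutover itself can be a one-line change to the request body. A minimal sketch of what a request might look like (the `effort` field name and placement are assumptions based on this article's description of effort levels, not confirmed API syntax; the model string is the one listed under Availability below):

```json
{
  "model": "claude-opus-4-7",
  "max_tokens": 4096,
  "effort": "xhigh",
  "messages": [
    {"role": "user", "content": "Refactor this function to remove the shared mutable state."}
  ]
}
```

Check Anthropic's API reference for the actual parameter shape before shipping; the point is only that migrating is a model-string swap plus, optionally, opting into the new effort tier.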

Full migration guidance is in our Opus 4.7 vs 4.6 comparison post, including the prompt-updating tips you’ll want before cutting over.

Availability

Opus 4.7 is live today across:

  • Claude.ai (Pro, Max, Team, Enterprise plans)
  • Anthropic API (claude-opus-4-7 model string)
  • Amazon Bedrock
  • Google Cloud Vertex AI
  • Microsoft Foundry

Prompt caching, batch mode, the 1M-token context beta, and the new xhigh effort level are all available from day one.

What This Means for the Pricing Landscape

Anthropic’s strategy here is clear: hold the line on price, widen the capability moat. With Opus 4.7 at $5/$25 — the same as Opus 4.6 since February — Anthropic is betting that buyers will pay a 2x multiple over GPT-5.4 for the coding and agentic quality lead.

For now, that bet looks correct. The gap between Opus 4.7 and the next-best generally available model on SWE-bench Verified is wider than the gap between Opus 4.6 and GPT-5.4 was at launch.

We’ll be watching to see how OpenAI and Google respond. Historically, a new flagship from Anthropic triggers either a price cut or a model refresh from the other two within 60 days.


Track AI pricing changes as they happen. Subscribe to our weekly pricing newsletter for alerts on new model releases, price cuts, and affiliate-program launches across OpenAI, Anthropic, Google, xAI, and more.

Building with Claude and need to cut your output cost? Tools like Writesonic (AI writing with multi-model routing) can offload bulk drafting from Opus to cheaper tiers while keeping Opus for the quality-sensitive final pass. See our best AI models in 2026 guide.