
OpenAI Models Come to AWS Bedrock: Pricing Impact

OpenAI models, Codex, and Managed Agents are coming to AWS Bedrock in limited preview. Here's what changes for pricing, procurement, and usage planning.

By AI Pricing Guru Editorial Team

OpenAI and AWS just expanded their partnership: OpenAI models, Codex, and Amazon Bedrock Managed Agents powered by OpenAI are launching on AWS in limited preview.

The headline is not a new token price. It is distribution. OpenAI says AWS customers can now access frontier models including GPT-5.5 through Amazon Bedrock, configure Codex to use Bedrock as a provider, and build managed agents inside AWS security, billing, and procurement workflows.

For AI buyers, the practical question is simple: does this make OpenAI cheaper? Not automatically. But it can change how enterprises budget, approve, and operationalize OpenAI usage.

What changed

According to OpenAI’s announcement, the AWS launch has three parts:

  1. OpenAI models on AWS, including GPT-5.5 on Amazon Bedrock.
  2. Codex on AWS, with Codex CLI, desktop, and VS Code extension able to use Bedrock as the provider.
  3. Amazon Bedrock Managed Agents powered by OpenAI, designed for production agent workflows with AWS security and governance controls.

All three are launching in limited preview, so this is not yet a broad self-serve migration path for every OpenAI API customer. But it is still strategically important because Bedrock is already where many large companies centralize model access, identity, logging, vendor approvals, and cloud commitments.

If your company already buys AI through AWS, OpenAI just became easier to route through that existing procurement lane.

Pricing impact: no public price cut yet

OpenAI’s announcement does not publish a separate AWS Bedrock price card for GPT-5.5, Codex, or Managed Agents. Until AWS exposes final public rates for the preview, buyers should treat this as an availability and procurement change, not a confirmed pricing discount.

The current public OpenAI API benchmark still looks like this:

| Model / path | Public token benchmark | Pricing implication |
| --- | --- | --- |
| GPT-5.5 | $5 input / $30 output per 1M tokens | Premium flagship; expensive for broad default routing |
| GPT-5.5 cached input | $0.50 per 1M cached input tokens | Strong economics when prompts repeat |
| GPT-5.4 | $2.50 input / $15 output per 1M tokens | Cheaper frontier OpenAI fallback |
| GPT-5.4 mini | $0.75 input / $4.50 output per 1M tokens | Better default for volume workloads |
| Bedrock route | Not separately published in the announcement | May help with AWS commitments, billing, governance, and vendor consolidation |

For the live OpenAI rate card, use our OpenAI pricing page. To model your own workload, plug expected input, output, and cache hit rates into the token cost calculator.
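The same arithmetic the calculator runs can be sketched in a few lines. The rates below come from the benchmark table above; the workload shape (requests per month, tokens per request, cache hit rate) is a hypothetical example, so substitute your own numbers.

```python
# Effective-cost estimate using the public benchmark rates quoted above.
RATES_PER_1M = {  # USD per 1M tokens
    "gpt-5.5":      {"input": 5.00, "cached_input": 0.50, "output": 30.00},
    "gpt-5.4":      {"input": 2.50, "cached_input": None, "output": 15.00},
    "gpt-5.4-mini": {"input": 0.75, "cached_input": None, "output": 4.50},
}

def monthly_cost(model, requests, in_tok, out_tok, cache_hit=0.0):
    """Estimated monthly spend in USD for a given workload shape."""
    r = RATES_PER_1M[model]
    # Fall back to the normal input rate when no cached rate is published.
    cached_rate = r["cached_input"] if r["cached_input"] is not None else r["input"]
    cached_in = in_tok * cache_hit          # tokens billed at the cached rate
    fresh_in = in_tok - cached_in           # tokens billed at the full input rate
    per_req = (fresh_in * r["input"] + cached_in * cached_rate
               + out_tok * r["output"]) / 1_000_000
    return per_req * requests

# Hypothetical workload: 100k requests/month, 2,000 input tokens
# (60% cache hits on the flagship), 500 output tokens.
print(f"GPT-5.5:      ${monthly_cost('gpt-5.5', 100_000, 2000, 500, 0.6):,.2f}")
print(f"GPT-5.4 mini: ${monthly_cost('gpt-5.4-mini', 100_000, 2000, 500):,.2f}")
```

Even at this simple level, the cached-input rate changes the picture: a high cache hit rate narrows the gap between the flagship and the mini tier considerably.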

The important nuance: even if the token rate is the same, effective cost can change when usage counts against existing AWS commitments or moves under a centralized cloud budget. For some enterprises, that is as meaningful as a headline discount because it removes procurement friction.

Why Bedrock matters for OpenAI buyers

Bedrock is not just another API wrapper. For large AWS customers, it is often the approved control plane for model usage.

That matters for OpenAI adoption because the hardest enterprise blockers are frequently not model quality. They are:

  • vendor approval
  • data handling review
  • IAM and access control
  • logging and governance
  • budget ownership
  • regional and compliance requirements
  • whether spend can be attached to existing cloud commitments

OpenAI says customer data is processed by Amazon Bedrock for the Codex integration, and eligible customers can apply Codex usage toward AWS cloud commitments. That is the sentence finance and platform teams will care about most.

In other words, this launch may not make GPT-5.5 cheaper per token, but it can make OpenAI easier to buy at scale.

Codex on AWS: the cost risk to watch

Codex running through Bedrock is especially interesting because coding agents can burn tokens quickly.

A normal chat request is usually bounded: prompt in, answer out. A coding agent may inspect files, reason across a repository, call tools, retry failed edits, generate tests, and summarize its work. That means the same model price can produce a very different bill depending on how long the agent loop runs.

If your team tests Codex on Bedrock, set controls before broad rollout:

  • cap agent run length
  • log input, cached input, and output tokens separately
  • route routine tasks away from GPT-5.5 when GPT-5.4 mini or another lower-cost model is good enough
  • create separate budgets for interactive coding and background automation
  • measure cost per merged pull request, not just cost per prompt
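The first two controls above can be combined into a small guardrail: cap total spend per agent run and meter input, cached input, and output tokens separately. This is a minimal sketch, not the Codex or Bedrock API; the class, method names, and per-step token counts are all hypothetical.

```python
# Guardrail sketch: hard spend cap per agent run, with tokens metered by kind.
class AgentBudget:
    def __init__(self, max_usd):
        self.max_usd = max_usd
        self.totals = {"input": 0, "cached_input": 0, "output": 0}
        self.spent = 0.0

    def record(self, usage, rates_per_1m):
        """usage: token counts for one agent step; rates_per_1m: USD per 1M tokens."""
        for kind, tokens in usage.items():
            self.totals[kind] += tokens
            self.spent += tokens * rates_per_1m[kind] / 1_000_000
        if self.spent > self.max_usd:
            # Stop the agent loop instead of letting retries run up the bill.
            raise RuntimeError(f"agent run exceeded budget: ${self.spent:.2f}")

budget = AgentBudget(max_usd=2.00)  # illustrative cap per run
rates = {"input": 5.00, "cached_input": 0.50, "output": 30.00}  # GPT-5.5 benchmark
# One hypothetical agent step: mostly cached repo context, modest output.
budget.record({"input": 8_000, "cached_input": 20_000, "output": 3_000}, rates)
print(budget.totals, f"${budget.spent:.4f}")
```

Logging the three token kinds separately is what makes the later analysis possible: cached-input-heavy runs and output-heavy runs have very different optimization levers.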

For a model-by-model view of OpenAI economics, see our OpenAI API pricing guide. For the newest flagship tradeoff, read GPT-5.5 vs GPT-5.4 pricing.

Managed Agents: better governance, but watch hidden usage

Amazon Bedrock Managed Agents powered by OpenAI could make enterprise agent deployment easier because infrastructure, tool orchestration, security, and governance are bundled into the AWS environment.

That is useful. It also changes the budgeting problem.

Agent workloads often include more than a single model call:

  • planning steps
  • tool calls
  • retrieval
  • retries
  • intermediate summaries
  • long context carried between steps
  • final synthesis

Those hidden steps are where agent bills drift. A managed service can reduce engineering overhead, but it does not remove the need to meter the full workflow.

The safest starting point is to price agents by business outcome: cost per support ticket resolved, cost per report generated, cost per code change, cost per analyst task, or cost per workflow completed. Token price alone will understate the real economics.
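Pricing by outcome just means summing every hidden step in the workflow before dividing by results delivered. A minimal sketch, assuming the GPT-5.5 benchmark rates from the table above; the step names and token counts are illustrative, not measured figures.

```python
# Sketch: cost per business outcome, summed across an agent's hidden steps.
RATE = {"input": 5.00, "output": 30.00}  # USD per 1M tokens (GPT-5.5 benchmark)

def step_cost(in_tok, out_tok):
    """USD cost of a single model call at the benchmark rates."""
    return (in_tok * RATE["input"] + out_tok * RATE["output"]) / 1_000_000

# One hypothetical support-ticket resolution, broken into its model calls:
workflow = [
    ("planning",   1_500, 300),
    ("retrieval",  4_000, 200),
    ("tool call",  2_000, 400),
    ("retry",      2_000, 400),
    ("synthesis",  6_000, 800),
]

total = sum(step_cost(i, o) for _, i, o in workflow)
print(f"cost per ticket resolved: ${total:.4f}")
```

Note how the retry and synthesis steps dominate: a per-prompt view would only have counted the first call, which is exactly how agent bills drift.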

Who benefits first

Large AWS-first enterprises benefit most. If your organization already routes model access through Bedrock, OpenAI support reduces the need for a separate vendor path.

Platform teams benefit because they can standardize identity, logging, and governance around a familiar AWS control plane.

Developer productivity teams benefit if Codex on Bedrock lets them use existing AWS commitments rather than creating a new budget line.

Small API buyers may see less immediate impact. If you already use the OpenAI API directly and do not care about AWS procurement, this announcement does not by itself change your token bill.

What to do now

If you are evaluating the preview:

  1. Ask AWS for the exact Bedrock price card before migration. Do not assume parity, discounts, or surcharges until they are in writing.
  2. Benchmark effective cost, not just model quality. Include cached tokens, agent loops, retries, and tool activity.
  3. Keep cheaper models in the route. GPT-5.5 is powerful, but GPT-5.4 and GPT-5.4 mini are still better economics for many production tasks.
  4. Separate procurement value from token value. AWS billing consolidation may be worth it even without a lower per-token rate.
  5. Set Codex and agent budgets early. Coding and multi-step agents can turn small experiments into real spend quickly.

Bottom line

OpenAI coming to AWS Bedrock is a procurement and deployment milestone, not a confirmed price cut. It makes GPT-5.5, Codex, and OpenAI-powered managed agents easier for AWS-heavy enterprises to adopt, especially where security reviews and cloud commitments shape buying decisions.

For now, keep budgeting against the public OpenAI rates, watch for AWS’s final Bedrock pricing, and treat Codex or Managed Agents as workflow-level costs rather than simple per-prompt costs.

Related: compare OpenAI against Claude, Gemini, DeepSeek, and xAI in our AI API pricing comparison and track the full market on the AI model pricing table.