AWS and OpenAI Forge Multi-Year $38 Billion Strategic Partnership

On November 3rd, Amazon Web Services (AWS) and OpenAI jointly announced a multi-year strategic partnership. Under the agreement, AWS will provide OpenAI with world-class infrastructure to run and scale its core artificial intelligence workloads. The $38 billion commitment spans seven years and is expected to grow over that period. OpenAI will leverage AWS’s computing resources, including hundreds of thousands of advanced NVIDIA GPUs, with the ability to expand to tens of millions of CPUs, to meet the rapidly growing demand for agentic workloads.

As part of the collaboration, OpenAI has already begun utilizing AWS’s computing resources and plans to complete the full target capacity deployment before the end of 2026, while retaining the ability to expand further in 2027 and beyond.

To support OpenAI, AWS has built a highly optimized infrastructure architecture designed to maximize AI processing efficiency and performance. The architecture clusters NVIDIA GPUs, including the GB200 and GB300, via Amazon EC2 UltraServers on a unified network, enabling low-latency communication between systems so that OpenAI can run a wide range of workloads at peak performance. These clusters are designed for flexibility, supporting both inference for ChatGPT and the training of next-generation models, and can adapt as OpenAI’s needs evolve.

Earlier this year, OpenAI’s open-weight foundation models launched on Amazon Bedrock, giving millions of AWS customers additional model choices. OpenAI quickly became one of the most popular publicly available model providers on Amazon Bedrock, with thousands of customers—including Bystreet, Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health—using its models for agentic workflows, coding, scientific analysis, and mathematical problem-solving.