Amazon Web Services will provide OpenAI with cloud infrastructure to run and scale advanced artificial intelligence workloads under a multiyear agreement signed by the two companies.
AWS said Monday the deal, potentially worth $38 billion over seven years, would give OpenAI access to hundreds of thousands of NVIDIA graphics processing units, with the ability to scale to tens of millions of central processing units to support agentic workloads.
OpenAI CEO Sam Altman stressed the importance of reliable, large-scale compute for scaling frontier AI.
“Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone,” he added.
AWS CEO Matt Garman said the immediate availability of optimized compute positions the company to support OpenAI's vast AI workloads.
How Will AWS Infrastructure Support OpenAI’s AI Workloads?
The infrastructure AWS is building for OpenAI is designed for performance and efficiency on AI workloads.
NVIDIA GB200 and GB300 GPUs will be clustered through Amazon EC2 UltraServers, linking the chips over a low-latency network so they operate as interconnected systems. The clusters are designed to support tasks ranging from serving ChatGPT to training AI models, with the flexibility to adapt as OpenAI's requirements change.
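For readers curious what low-latency GPU clustering looks like in practice on AWS, the sketch below shows the standard building block: an EC2 cluster placement group, which asks EC2 to pack instances onto the same low-latency network segment. It is illustrative only and not drawn from the agreement; the instance type string and AMI ID are hypothetical placeholders, not confirmed UltraServer identifiers.

```python
# Minimal sketch: provisioning a tightly coupled GPU cluster on EC2 using a
# cluster placement group for low-latency interconnect. Assumes AWS
# credentials are configured; instance type and AMI are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A "cluster" placement group co-locates instances on high-bandwidth,
# low-latency networking -- the property multi-GPU training depends on.
ec2.create_placement_group(GroupName="gpu-training-cluster", Strategy="cluster")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI ID
    InstanceType="p6e-gb200.36xlarge",  # hypothetical GB200-class instance type
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "gpu-training-cluster"},
)
print([i["InstanceId"] for i in resp["Instances"]])
```

In a sketch like this, the placement group, rather than the instance type, is what delivers the low-latency behavior described above: EC2 schedules all eight instances into the same network segment so inter-GPU traffic stays on fast local links.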
OpenAI will start using AWS computing resources immediately, with full deployment expected by the end of 2026 and potential expansion in 2027.