Nvidia to Ship 1 Million GPUs to Amazon by 2027
The deal bundles GPUs with networking gear and seven chip types for inference in Nvidia's biggest AWS play yet

Nvidia will deliver one million GPUs to Amazon Web Services by the end of 2027, with shipments beginning in 2026. Ian Buck, Nvidia's vice president of hyperscale and high-performance computing, confirmed the timeline to Reuters on March 19. Both companies announced the deal earlier this week but had not disclosed when deliveries would begin or end. The confirmation adds critical detail to what is shaping up to be one of the largest single chip orders of the AI era.
Why Seven Chips Beat One for AI Inference
The deal is far bigger than a GPU order. Buck revealed that AWS is buying a broad mix of Nvidia products beyond the headline million GPUs. That includes ConnectX data center networking gear, other networking chips, and Nvidia's recently launched Groq-based chips, the product of a $17 billion licensing deal with the AI chip startup late last year.
AWS plans to deploy seven different Nvidia chips together to handle inference, the process by which trained AI models generate answers and perform tasks for users. “Inference is hard. It's wickedly hard,” Buck told Reuters. “To be the best at inference, it is not a one chip pony. We actually use all seven chips.”
That quote matters. It signals that the era of buying one type of GPU for every AI workload is ending. Cloud providers now need specialized silicon for different stages of the AI pipeline, and Nvidia wants to be the vendor supplying all of them.
AWS Opens Its Data Centers to Nvidia Networking
One of the most surprising elements of the agreement is the networking component. AWS has spent years building and perfecting its own custom networking equipment for data centers. It is a point of pride and a competitive advantage for the cloud giant.
Now, Nvidia's ConnectX and its data center networking systems will sit inside AWS facilities alongside that proprietary gear. “They're still going to do that, of course,” Buck said of AWS's own networking. “But we are collaborating now on deploying ConnectX for those important workloads and biggest customers across AI with AWS.”
This is a meaningful concession from Amazon. It suggests that the performance demands of the largest AI workloads are outpacing what any single vendor's networking stack can deliver alone.
Jensen Huang's $1 Trillion Bet Gets Its First Anchor
Nvidia CEO Jensen Huang laid the groundwork for this deal just days earlier. At a March 17 event, Huang projected a $1 trillion revenue opportunity for the company's Blackwell and Rubin chip families through the end of 2027. That timeline matches the AWS deal exactly.
Neither Nvidia nor Amazon disclosed the financial terms of this specific transaction. But context helps sketch the scale. In November 2025, OpenAI signed a $38 billion deal with AWS for access to hundreds of thousands of Nvidia GB200 and GB300 chips. Oracle separately committed around $40 billion to purchase 400,000 Nvidia GB200 chips for its AI facility in Texas. A million-GPU deal with the world's largest cloud provider almost certainly sits in the same financial range, if not bigger.
Amazon Buys Nvidia While Building Its Own Rival Chips
The deal is notable because Amazon is simultaneously building its own AI chips. AWS launched Trainium3 in December 2025, claiming four times the performance of its predecessor at lower cost and 40 percent less energy consumption. Amazon CEO Andy Jassy has called the Trainium line a multi-billion-dollar business already.
Yet here Amazon is, locking in a million Nvidia GPUs. The message is clear. Custom chips can handle certain workloads efficiently, but Nvidia's ecosystem remains essential for the highest-performance AI tasks and for the customers who demand it.
AWS is even designing its upcoming Trainium4 chip to work with Nvidia's NVLink Fusion interconnect technology. That ensures interoperability between its homegrown silicon and Nvidia's GPUs. This is not a company planning to walk away from Nvidia anytime soon.
How This Deal Fits the $400 Billion AI Spending Spree
This deal arrives during a period of extraordinary spending on AI infrastructure. OpenAI alone has announced deals worth more than $400 billion across Oracle, AWS, Nvidia, AMD, and Broadcom. AMD secured a multi-year agreement to supply OpenAI with 6 gigawatts of compute capacity. Nvidia invested $2 billion in CoreWeave in January 2026 to build out more than 5 gigawatts of AI data center capacity.
The Nvidia-AWS agreement reinforces a pattern. Despite every major cloud provider investing in custom AI chips, demand for Nvidia hardware keeps growing. Nvidia's share of the AI training chip market remains between 80 and 90 percent. Its push into inference with multi-chip solutions could extend that dominance into entirely new territory.
For investors, the deal confirmed what the stock market already suspected. Nvidia shares closed at $178.56 on March 19, while Amazon ended the day at $208.77. Both stocks showed muted movement, suggesting Wall Street had priced in a deal of this magnitude after the earlier announcement.
The real question is whether the $1 trillion pipeline Huang described can hold up through 2027. If the AWS deal is any indication, the appetite for Nvidia silicon shows no signs of slowing down.
FAQs
How many GPUs is Nvidia selling to Amazon Web Services?
Nvidia will sell 1 million GPUs to AWS, with deliveries starting in 2026 and extending through the end of 2027. The deal also includes networking hardware and inference-specific chips beyond the GPU count.
What chips are included beyond GPUs?
AWS will receive Nvidia's ConnectX networking gear, data center networking systems, and the newly launched Groq inference chips. In total, AWS plans to use seven different Nvidia chip types for inference workloads.
How much is the Nvidia-Amazon chip deal worth?
Neither company disclosed financial terms. For comparison, OpenAI's similar AWS deal for hundreds of thousands of Nvidia GPUs was valued at $38 billion, suggesting this deal could be in a comparable or larger range.
Why is AWS still buying Nvidia chips if Amazon makes its own AI processors?
Amazon's Trainium chips serve price-sensitive and internally optimized workloads, but Nvidia GPUs remain the standard for the most demanding AI training and inference tasks. AWS is even building NVLink Fusion compatibility into Trainium4 to pair with Nvidia hardware.
What is Nvidia's $1 trillion revenue target?
CEO Jensen Huang said Nvidia expects a $1 trillion sales opportunity from its Blackwell and Rubin chip families through 2027. The AWS million-GPU deal falls within this exact timeline and represents one of the first major anchors for that projection.
Does this deal affect Nvidia's stock price?
Nvidia shares closed at $178.56 on March 19, down about 1 percent, while Amazon closed at $208.77. Markets showed limited reaction, indicating the deal had been largely anticipated after the initial announcement earlier in the week.
GPU Volume: 1 million units
Delivery Window: 2026 through 2027
Chip Types: Seven total chips
Networking Gear: ConnectX included
Financial Terms: Not disclosed
Nvidia's Revenue Target: $1 trillion by 2027