Anthropic has secured multiple gigawatts of next-generation TPU capacity from Google and Broadcom in a partnership that will begin delivering compute infrastructure in 2027.

The AI safety lab's revenue run-rate has surged to $30 billion from approximately $9 billion at the end of 2025, driven by explosive enterprise demand for its Claude models.

Over 1,000 business customers now spend more than $1 million annually on Claude services, double the 500 such customers in February, when the company raised its Series G round.

"This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure," said Krishna Rao, Anthropic's CFO. "We are making our most significant compute commitment to date to keep pace with our unprecedented growth."

The majority of the new compute capacity will be located in the United States, expanding on Anthropic's November 2025 commitment to invest $50 billion in American AI infrastructure.

Multi-cloud strategy continues

The deal deepens Anthropic's existing relationship with Google Cloud, building on the expanded TPU capacity announced last October. The company also maintains its partnership with Broadcom for chip infrastructure.

Anthropic operates across multiple AI hardware platforms, including AWS Trainium, Google TPUs, and NVIDIA GPUs. This multi-platform approach lets the company match each workload to the chip architecture best suited to it.

Amazon remains Anthropic's primary cloud provider and training partner through Project Rainier. Claude is the only frontier AI model available across all three major cloud platforms: Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.

The compute expansion comes as Anthropic faces intensifying competition from OpenAI and other frontier AI labs racing to scale their infrastructure and model capabilities.

Anthropic expects the new TPU capacity to power future generations of Claude and to support its growing enterprise customer base through 2027 and beyond.