Anthropic has raised an additional $4 billion from Amazon, and has agreed to train its flagship generative AI models primarily on Amazon Web Services (AWS), Amazon’s cloud computing division.
The OpenAI rival also said it’s working with Annapurna Labs, AWS’ chipmaking division, to develop future generations of Trainium accelerators, AWS’ custom-built chips for training AI models.
“Our engineers work closely with Annapurna’s chip design team to extract maximum computational efficiency from the hardware, which we plan to leverage to train our most advanced foundation models,” Anthropic wrote in a blog post. “Together with AWS, we’re laying the technological foundation — from silicon to software — that will power the next generation of AI research and development.”
In its own post, Amazon clarified that Anthropic will use Trainium, including the current generation, Trainium2, to train its upcoming models. The AI startup will deploy those models using Inferentia, Amazon’s in-house chip designed to speed up inference, Amazon said.
“By collaborating with Anthropic on the development of our custom Trainium chips, we’ll keep pushing the boundaries of what customers can achieve with generative AI technologies,” AWS CEO Matt Garman said in a statement. “We’ve been impressed by Anthropic’s pace of innovation and commitment to responsible development of generative AI, and look forward to deepening our collaboration.”
This new cash infusion brings Amazon’s total investment in Anthropic to $8 billion while maintaining the tech giant’s position as a minority investor, Anthropic said. To date, Anthropic has raised $13.7 billion in venture capital, according to Crunchbase.
The Information reported earlier this month that Amazon was in talks to invest billions more in Anthropic, its second financial pledge to the company after a $4 billion deal last year. That earlier deal was split into two tranches: a $1.25 billion first installment last September and a $2.75 billion extension in March.
The new investment is reportedly structured similarly to the last one, but with a twist: Amazon insisted that Anthropic use Amazon-developed silicon hosted on AWS to train its AI.
Anthropic is said to prefer Nvidia chips, but the money may have been too good to pass up. Early this year, Anthropic reportedly projected it would burn through more than $2.7 billion in 2024 as it trained and scaled up its AI products. Anthropic has been discussing raising new funding at a $40 billion valuation for several months, per The Information. No doubt, the pressure was on to clinch something soon.
Anthropic’s co-founder and CEO Dario Amodei noted that the company’s work with AWS has expanded considerably over the past year. Through Amazon Bedrock, AWS’ platform for hosting and fine-tuning generative models, Anthropic’s Claude family of models is being used by “tens of thousands” of customers, Amodei said.
“This has been a year of breakout growth for Claude, and our collaboration with Amazon has been instrumental in bringing Claude’s capabilities to millions of end users across tens of thousands of customers on Amazon Bedrock,” he said in a press release. “We’re looking forward to working with Amazon to train and power our most advanced AI models using AWS Trainium, and helping to unlock the full potential of their technology.”
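For a sense of what “using Claude on Amazon Bedrock” looks like in practice, here is a minimal sketch that calls a Claude model through the AWS SDK for Python (boto3). The model ID, region, and prompt are illustrative placeholders; the identifiers actually available depend on your AWS account and region.

```python
import json
import boto3

# Bedrock's runtime client handles model invocation; a separate "bedrock"
# client handles management tasks such as fine-tuning jobs.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative model ID -- check the Bedrock console for the Claude versions
# enabled in your account.
MODEL_ID = "anthropic.claude-3-5-sonnet-20241022-v2:0"

# Anthropic models on Bedrock accept the Messages API request format.
request_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize the Amazon-Anthropic partnership in one sentence."}
    ],
}

response = bedrock_runtime.invoke_model(
    modelId=MODEL_ID,
    body=json.dumps(request_body),
)

# The response body is a streaming blob containing JSON.
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```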
Recently, Anthropic teamed up with AWS and Palantir to provide U.S. intelligence and defense agencies access to Claude. Amazon said that going forward, AWS customers will get early access to the ability to fine-tune new Claude models on their data.
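Fine-tuning on Bedrock runs as a model customization job rather than a direct call to the model. The sketch below shows the general shape of that workflow with boto3; the S3 bucket, IAM role, base-model identifier, and hyperparameter names are hypothetical placeholders, and which Claude models are actually eligible for customization depends on the access Amazon grants.

```python
import boto3

# The management-plane "bedrock" client creates and monitors customization jobs.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Bucket, role ARN, base-model identifier, and hyperparameter names below are
# illustrative placeholders; they vary by account and base model.
job = bedrock.create_model_customization_job(
    jobName="claude-support-tuning-demo",
    customModelName="claude-support-tuned",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="anthropic.claude-3-haiku-20240307-v1:0",
    trainingDataConfig={"s3Uri": "s3://example-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)

# Poll the job until Bedrock reports that it has completed or failed.
status = bedrock.get_model_customization_job(jobIdentifier=job["jobArn"])
print(status["status"])
```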
Beyond AWS, Amazon is said to be working with Anthropic to revamp its own consumer products. The tech giant is also reportedly set to replace the in-house models powering Alexa, its virtual assistant, with Anthropic’s after running into technical challenges.
The collaboration — and investments — have attracted regulatory scrutiny.
The FTC earlier this year sent a letter to Amazon, as well as Microsoft and Google, requiring the companies to explain the impacts their investments in startups such as Anthropic have on the competitive landscape of generative AI. Google has also invested in Anthropic, pouring $2 billion into the company late last October for a 10% stake, while Microsoft is a major OpenAI backer.
The U.K.’s competition regulator, the Competition and Markets Authority, also opened several inquiries into big tech tie-ups with AI firms. But the regulator recently okayed Alphabet’s partnership and investment in Anthropic; it had given the green light to Amazon’s deal last year.
Anthropic continues to keep pace with other frontier AI labs, releasing new functionality like Computer Use, which lets the company’s current best model, Claude 3.5 Sonnet, perform tasks on a PC autonomously.
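Computer Use is exposed through Anthropic’s API as a beta tool: the model is told about a virtual display and replies with GUI actions (clicks, keystrokes, screenshot requests) that the caller’s own agent loop must execute. A minimal sketch of the request follows, assuming the beta flag and tool names Anthropic published with the feature; treat those identifiers as assumptions and check the current docs before relying on them.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The tool type and beta flag below are the names announced with Computer Use;
# they may change as the feature evolves.
response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        {
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }
    ],
    messages=[{"role": "user", "content": "Open the calculator and add 2 and 2."}],
    betas=["computer-use-2024-10-22"],
)

# The model returns tool_use blocks describing GUI actions; the calling
# application is responsible for executing them and sending back screenshots.
for block in response.content:
    print(block)
```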
The company has also faced its fair share of setbacks. It unexpectedly hiked the price of one of its models this month and has delayed the launch of its long-awaited next-gen model, Claude 3.5 Opus.
In a bid to boost revenue, Anthropic has shifted some of its focus to releasing new tools and subscription plans, including a desktop client, enterprise and “team” tiers, and mobile apps. It has also opened offices in Europe and made high-profile hires, including Instagram co-founder Mike Krieger, OpenAI co-founder Durk Kingma, and ex-OpenAI safety researcher Jan Leike.
Anthropic was co-launched in 2021 by Amodei, who was once VP of research at OpenAI and reportedly split with the firm after disagreements over the roadmap. Amodei brought along a number of ex-OpenAI employees to start Anthropic, including OpenAI’s former policy lead, Jack Clark.
Anthropic often attempts to position itself as more safety-focused than OpenAI.
Correction: This story was corrected to remove a reference to Gabor Cselle, who was hired by OpenAI, and not Anthropic.