The world's biggest tech companies are now all involved in artificial intelligence in some way, shape or form: creating models, offering products and services, designing hardware, providing web services, or more than one of these categories.
Nvidia is the biggest company by market cap, but the big model services companies are becoming household names. OpenAI is at the top, but Anthropic is a runner-up in many ways.
One of the things I heard on a recent episode of the AI Daily Brief podcast with Nathaniel Whittemore is the announcement that Anthropic is getting $2 billion in new funding, at a $60 billion valuation.
That reportedly makes both of the Amodeis (Dario and Daniela) billionaires, along with five other members of the firm's top brass. So who's funding Anthropic this way?
Amazon, Etc. On Board
By most accounts, the top contributor is Amazon, which has integrated Anthropic’s Claude model into some of its web services.
Here’s a resource from Amazon talking about the use of Anthropic’s Claude 3 Haiku:
“Unlock the power of fine-tuning Anthropic’s Claude 3 Haiku model with Amazon Bedrock. This comprehensive deep-dive demo guides you through the process, from accessing Amazon Bedrock to customizing Claude 3 Haiku for your business needs. Discover how to boost model accuracy, quality, and consistency by encoding company and domain knowledge. Learn to generate higher-quality results, create unique user experiences, and enhance performance for domain-specific tasks. Don’t miss this opportunity to harness the full potential of Claude 3 Haiku by getting started with fine-tuning today.”
The co-branding here is obvious.
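For readers curious what the fine-tuning workflow the demo describes actually looks like in code, here is a minimal sketch of assembling a Bedrock model-customization request with the AWS SDK for Python. The bucket names, IAM role ARN, and hyperparameter values are hypothetical placeholders, not values from the Amazon demo; only the base-model identifier and API call names follow AWS's documented Bedrock interface.

```python
def build_finetune_request(job_name: str, model_name: str) -> dict:
    """Assemble parameters for a Bedrock model-customization (fine-tuning) job.

    All ARNs, S3 URIs, and hyperparameter values here are hypothetical
    placeholders for illustration.
    """
    return {
        "jobName": job_name,
        "customModelName": model_name,
        # Hypothetical IAM role that grants Bedrock access to the S3 buckets
        "roleArn": "arn:aws:iam::123456789012:role/BedrockFineTuneRole",
        # Claude 3 Haiku base model, as in the Amazon demo
        "baseModelIdentifier": "anthropic.claude-3-haiku-20240307-v1:0",
        # Training data: JSONL prompt/completion pairs in S3 (placeholder URI)
        "trainingDataConfig": {"s3Uri": "s3://my-bucket/train.jsonl"},
        "outputDataConfig": {"s3Uri": "s3://my-bucket/output/"},
        "hyperParameters": {
            "epochCount": "2",
            "batchSize": "8",
            "learningRateMultiplier": "1.0",
        },
    }


# Submitting the job requires AWS credentials and the boto3 package:
#
#   import boto3
#   client = boto3.client("bedrock")  # Bedrock control-plane client
#   client.create_model_customization_job(
#       **build_finetune_request("haiku-ft-demo", "my-claude-haiku")
#   )
```

The encoded "company and domain knowledge" the blurb mentions lives in the training JSONL; everything else is a one-time job submission.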
Other companies like Google have also chipped in, and new reports show that Lightspeed Venture Partners is also on board. In a way, it makes sense that Amazon, for one, would fund the company this way, given the partnerships in play.
But there’s some amount of speculation on the overall context of this deal, too.
Burning Through Cash? What’s the Runway?
A recent opinion article at MarketWatch talks about how both OpenAI and Anthropic are spending quite a lot of money on compute and other web services.
Therese Poletti reports that OpenAI was forecast to lose $5 billion in 2024; the company appears to be losing money on its ChatGPT subscriptions because subscribers consume more compute than their fees cover.
Anthropic, she notes, is following suit, racking up its use of AWS capacity.
Plummeting Costs of Inference
Although both OpenAI and Anthropic are spending a lot of money, some people with industry knowledge predict that new types of networks are going to greatly decrease the cost of a ChatGPT session or any other kind of AI use.
They point to a trend that follows the outlines of Moore's law, under which computing costs fell as hardware shrank. Citing various kinds of scaling laws, these experts suggest that we're going to see much lower costs per token, and that some of that will be enabled by multi-agent AI, in which individual AI components work together, each bringing its own skill set to the table.
I go back to Ethan Mollick's bullish predictions about what robots and AI entities will be able to do within just a few years: anything from changing a diaper to comforting the sick, making a meal, or designing a greeting card.
Those are my examples, not his, but there are compelling arguments to be made that the technology is going to become faster, cheaper and quite a lot better in 2025!
The Anthropic news is one more way station on the road for a burgeoning industry that is going to become much more important as the months go on.