As we look at the trajectory of enterprise AI near the end of 2024, the industry landscape appears to be settling into a more defined shape. And that has some interesting ramifications for all of us.

A recent essay at Sequoia captures this pretty well, and also talks about the inflection point that we’re at with new technologies.

Let’s start with the latter and then move toward the realities that we’re seeing right now, and how corporate action on state-of-the-art (SOTA) systems is shaking out.

The Reign of Reasoning Models

One thing that the authors go over (the bylines on this item list Sonya Huang, Pat Grady, and o1) in their thorough narrative around next-gen AI is the emergence of reasoning models that move from simply mimicking human thought to simulating it more fully.

Citing “inference time compute,” they explain that the newer models are able to “stop and think,” which supercharges their capabilities quite a bit. We get the contrast of “instinctive” versus “deliberate” thought processes, and a history of how these models evolved. For instance, before o1 was o1, it was tentatively called “Strawberry” (and you can see some of our reporting on it from that time here). Essentially, whether you’re looking at the history of game playing with chess and Go, or at enterprise rollouts, this capability of AI systems to error-correct and display multi-step reasoning is new, and, to say the least, very exciting!
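The “stop and think” idea can be sketched as spending extra compute at inference time: sample several candidate reasoning chains and keep one that survives a check. Everything below is a toy illustration — `solve_step_by_step`, its noise pattern, and the verifier are invented for the sketch, not anything from o1 itself.

```python
def solve_step_by_step(problem, attempt):
    """Hypothetical stand-in for a model sampling one reasoning chain.
    Deterministic per-attempt noise keeps the sketch reproducible."""
    a, b = problem
    noise = [-1, 1, 0, 2][attempt % 4]  # the third attempt happens to be right
    return {"steps": [f"add {a} and {b}"], "answer": a + b + noise}

def verify(problem, candidate):
    """The error-correction piece: reject chains whose answer fails a
    check. For harder tasks this might be a learned verifier."""
    a, b = problem
    return candidate["answer"] == a + b

def best_of_n(problem, n=4):
    """More inference-time compute (larger n) means more chances to
    find a chain that passes verification."""
    candidate = None
    for attempt in range(n):
        candidate = solve_step_by_step(problem, attempt)
        if verify(problem, candidate):
            break
    return candidate

print(best_of_n((2, 3))["answer"])  # the verified chain answers 5
```

The point of the sketch is that quality comes from searching and checking at answer time, not only from what was baked in during training.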

The New Challenge of Scoring Abstract Tasks

Another thing you get from reading this essay and others on the subject is that we’re going to move from the kind of easy evaluation that we’ve had of AI in the past into a whole new world of trying to evaluate the products of a reasoning mind (or a reasonable facsimile).

For instance, you might be able to score a game of chess pretty well, because of all of its technical criteria and logical gameplay, but what about scoring an essay, or a collection of recipes, or something that seems to be the product of an engaged and aware sentience?
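To make the contrast concrete, here’s a minimal sketch. The chess piece values are the standard textbook ones; the essay rubric’s criteria and weights are invented for illustration. A material count is a pure formula, while an essay score can only aggregate judgments that some judge — human or model — has to supply from outside.

```python
# Scoring a chess position can be mechanical: count material with
# fixed piece values (pawn=1, knight/bishop=3, rook=5, queen=9).
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}

def material_score(white_pieces, black_pieces):
    """Deterministic and objective: same input, same score, no judgment."""
    total = lambda pieces: sum(PIECE_VALUES[p] for p in pieces)
    return total(white_pieces) - total(black_pieces)

# Scoring an essay resists a formula; a hypothetical rubric reduces it
# to weighted sub-scores, but each sub-score is itself a judgment call.
def rubric_score(judgments):
    """judgments: criterion -> 0..5 rating supplied by some judge.
    The scorer merely aggregates; the hard part -- producing the
    ratings -- is pushed outside the function."""
    weights = {"clarity": 0.4, "evidence": 0.4, "originality": 0.2}
    return sum(weights[c] * judgments[c] for c in weights)

print(material_score("qrr", "qr"))  # -> 5 (white is up a rook)
print(round(rubric_score({"clarity": 4, "evidence": 3, "originality": 5}), 2))  # -> 3.8
```

The asymmetry is the whole problem: the first function is a ground truth, the second is only as trustworthy as whoever filled in the ratings.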

That, in itself, will get a little difficult, and it will be one more symptom of harnessing the enormously powerful technologies we have discovered in the last few years.

Reading the Tea Leaves: The Markets

Let’s look at the really interesting point around market domination for these technologies.

In this write-up, the authors start with the supposition that we’re seeing a market equilibrium form around who owns AI capabilities and results.

Specifically, they mention:

· Microsoft and OpenAI

· AWS and Anthropic

· Google and DeepMind

· Meta, which, for the moment, seems to be going it alone

What these prescient analysts are telling us, and what we’re seeing, is that a number of large companies have access to the kinds of hardware and systems that will host the best and brightest of AI’s digital citizens.

In other words, you can’t just access this kind of non-human cognitive power from your desktop, even though new liquid networks have allowed us to do much more at the edge.

In order to get the most out of the latest science, you need to build big: for example, Elon Musk’s xAI is building the Colossus system, which reportedly houses on the order of a hundred thousand Nvidia GPUs, to support all of this inference-time thinking that’s going on in the most complex models.

Calling the market challenge a “knife fight,” the authors talk about how some of these systems moved from “wrappers” to “cognitive architecture”, and what we should look for as the market continues to coalesce around these monolithic brands with their walled gardens and big plans for AI collaboration.

The rest of us can dutifully follow along as we watch these titans roll out new systems almost annually, and new use cases work their way into our lives. What’s next, as AI continues to think about its world, and ours?
