
If you’re paying attention to what people are saying online, you may have seen some of the more prominent items around AI advancement, like Eric Schmidt suggesting that neural network scaling laws are not experiencing diminishing returns (or, more accurately, that there’s ‘no evidence’ of this).

Or you may have seen this confusing post on X by researcher and mathematician Ethan Caballero:

(Ethan Caballero’s post on X: https://t.co/QAHC5BUQey)

Recent research has led some in the industry to suggest that new models aren’t getting the same amount of juice out of scaling, and that we may be experiencing a plateau in development.

Some of them cite the dwindling supply of high-quality training data.

But a related debate asks whether AI progress is supposed to continue on a smooth upward trajectory, or whether it will be a more complex journey…

Broken Neural Scaling Laws

Let’s go back to Caballero’s post, which on its face seems to suggest that we’re headed for a leveling off in AI’s ability to bring home the bacon.

At first glance, it seems like this seasoned AI pro might be supporting the idea that we’re seeing mathematical declines in model scaling performance.

But if you look at his actual research, what it suggests is that we’re going to see these “breaks” along the way as scaling changes, and that a temporary slowdown doesn’t mean the line is going to flatten into a plateau for any significant length of time. It’s more like Burton Malkiel’s “A Random Walk Down Wall Street.”

The material that accompanies his paper indicates that we’ll see a variety of trajectories, so to speak, in what AI can do over time.
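If you’re curious what a “break” actually means here, the core idea of the paper fits in a few lines of code. Here’s a minimal sketch of the broken neural scaling law functional form in Python (the parameter values below are invented purely for illustration; the paper, arXiv:2210.14891, fits these to real benchmarks):

```python
import numpy as np

def bnsl(x, a, b, c0, breaks):
    """Smoothly broken power law, per Caballero et al.'s "Broken
    Neural Scaling Laws." `breaks` is a list of (c_i, d_i, f_i)
    tuples: c_i is the change in slope at break i, d_i is where the
    break sits on the x-axis, and f_i sets how sharp the transition is."""
    pw = b * x ** (-c0)  # an ordinary power law before any break
    for c_i, d_i, f_i in breaks:
        pw *= (1.0 + (x / d_i) ** (1.0 / f_i)) ** (-c_i * f_i)
    return a + pw  # 'a' is the irreducible floor the curve approaches

# Invented numbers for illustration: loss vs. training compute, with a
# single break at 1e6 after which the curve improves faster (c_1 > 0).
compute = np.logspace(3, 9, 200)
loss = bnsl(compute, a=0.1, b=5.0, c0=0.15, breaks=[(0.25, 1e6, 0.3)])
```

The point of the form is that far below a break the curve looks like one power law, far above it looks like another, and the transition between them is smooth. A “plateau” can just be the flat-looking stretch near a break, rather than the end of progress.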

Growth and Expectations

Here’s a slightly different take from Chris and Mike Sharkey on EP85 of the This Day in AI podcast, where they discuss all the brouhaha over current model predictions.

Calling some in the community “news junkies,” the Sharkeys argue that we should be focusing more on what we can do with existing models, and less on our predictions about what they’ll be able to do in the future.

That’s part of a general excitement in the business community, for example, over what we already have in our pockets.

You can check out this footage of Fei-Fei Li at a16z, where she talks about how far we’ve already come with AI systems.

“Visual-spatial intelligence is fundamental,” she says. “We’re in the right moment to make a bet, and focus, and unlock this.”

So regardless of exactly what’s going to happen with our AI systems, we can be exploring the capabilities of what we have now, and according to some experts, there’s still a lot to discover.

7 Stages of Artificial Intelligence: The Broader Journey

I’m going to end with this: in the big picture, where are we going with AI?

This is where things get really interesting, in the sense that we’re looking beyond just the next couple of model iterations to what we can expect from an emerging field over the next few decades.

Here’s one of the presentations that impressed me the most, and made me think about what humankind is likely to encounter.

It’s from AI Uncovered, and it lays out a seven-stage evolution for AI.

I’ll go over each one of these stages briefly:

Rule-based AI – this is what we had in the oughts, when responses to questions were hand-programmed, along with other purely programmatic systems.

Context awareness and retention – this is the sort of system where the model can understand what you’re talking about in context and refer back to it, along with using other context clues based on time, current events, etc. Think of this as the era of the digital assistant.

Domain-specific mastery – at this stage, the AI learns to excel at some kind of cognitive discipline, picking up skills in a specialized, domain-centric way.

Thinking and reasoning AI systems – OpenAI just announced models that display reasoning capabilities, with chain of thought and everything else (there’s a short sketch of the prompting idea after this list). I’ve written about this more extensively on the blog if you’re interested.

AGI – strong artificial intelligence that’s good at adapting, applying knowledge, and creating things: learning languages, writing symphonies, and so on.

ASI – AI super-intelligence; this is where artificial intelligence starts to outstrip what humans can do in significant ways. Imagine an artificial agent or entity that can outperform a human prodigy in some creative endeavor, or administer systems better than a human team. Or, as in my post last week, think about an entire company created and staffed by AI, with no humans involved at all, competing with “human” companies for customers.

The Singularity – this would be the projected point where AI merges with human intelligence to become supreme. I’m not going to attempt to describe this in detail; you can read about it from Marvin Minsky, Ray Kurzweil, and writers more knowledgeable than me. Let’s just say that at this point, you’re really through the looking glass on comparing and contrasting human and AI capabilities.
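To make the “chain of thought” item above a little more concrete, here’s a minimal sketch of the prompting version of the idea using the OpenAI Python client (the model name and the prompt are placeholders; the newer reasoning models do a version of this internally rather than needing to be asked):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Classic chain-of-thought prompting: ask an ordinary chat model to
# spell out its intermediate steps before giving the final answer.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice of model
    messages=[{
        "role": "user",
        "content": "A train leaves at 3:40pm and the trip takes 95 "
                   "minutes. When does it arrive? Think step by step.",
    }],
)
print(response.choices[0].message.content)
```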

That’s about where we’re at, as we watch people kick around ideas about AI returns this week. It’s useful, to some extent, to look past the momentary hype and focus on the broader future.
