What do we have to look forward to?

Is that a trick question?

We’re in a heady time of rapid-fire disruption across nearly every industry. This year makes the dot-com bubble look like a kid’s lemonade stand.

Artificial intelligence is rapidly changing everything we know and care about.

With that in mind, I wanted to go over one of the essays that really brings home the pace of development, and what we’re poised to do over the next few years rather than the next few hundred.

Steve Jurvetson is a well-known venture capitalist with a history at Hewlett-Packard and a board seat at SpaceX. He’s one of many industry insiders coming out with predictions for the future of AI, suggesting that exponential progress is imminent.

But I thought that his recent post on X, with its accompanying scatterplot, paints an even more far-ranging picture of not only where we’re going, but where we’ve been over the entire 20th century and even further back.

Here are a few main things that I took from this piece of writing…

Moore’s Law Started in the 1800s

One of the most interesting ideas Jurvetson puts forward is that Moore’s law started well before Gordon Moore made his prediction about transistors. He starts the curve with Babbage and the Analytical Engine, and includes other technologies along the way, like Hollerith cards.

It’s an astute analysis, but where Jurvetson really gets into some interesting territory is with his thoughts on what Moore’s law represents:

“What Moore observed in the belly of the early IC industry was a derivative metric, a refracted signal, from a longer-term trend, a trend that begs various philosophical questions and predicts mind-bending AI futures,” Jurvetson writes, intriguingly.

It’s not, he says, “a transistor-centric metric.” At least not anymore. It’s about the growth of computational power in general.

Jurvetson’s Plato’s-cave-style framing suggests that we’ve only begun to understand the real nature of the digital curve we’re on.
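
To make that exponential framing concrete, here’s a quick back-of-the-envelope sketch of my own (not from Jurvetson’s post), assuming the roughly two-year doubling period usually attached to Moore’s law:

```python
# Toy illustration (mine, not Jurvetson's): how a fixed doubling period
# compounds over the kind of time spans his chart covers.

def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """How many times capability multiplies over `years`,
    assuming it doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    # The ~2-year doubling period is an assumption, not a figure from the post.
    for span in (10, 50, 128):  # 128 years echoes the span Jurvetson cites
        print(f"{span:>3} years -> ~{growth_factor(span):.3g}x")
```

The exact doubling period matters far less than the shape of the curve: over a century-plus, any steady doubling compounds into gains measured in the billions of billions.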

Nvidia’s Win Was Predestined

Another thing Jurvetson points out is that Nvidia’s win in the industry also started taking shape a long time ago.

The company’s progress is only now making headlines. Most of us didn’t know that by the end of 2024, the company would be head and shoulders above any other tech company of its kind; we were still throwing around names like Intel just quarters ago.

But according to Jurvetson, Intel lost the game a decade and a half ago.

“The computational frontier has shifted across many technology substrates over the past 128 years,” Jurvetson writes. “Intel ceded leadership to NVIDIA 15 years ago, and further handoffs are inevitable.”

He also points out why that happened:

“Intel lost to NVIDIA for neural networks because the fine-grained parallel computing architecture of a GPU maps better to the needs of deep learning. There is a poetic beauty to the computational similarity of a processor optimized for graphics processing and the computational needs of a sensory cortex, as commonly seen in the neural networks of 2014.”

Essentially, Nvidia bet big on AI, and now that it’s AI’s moment, it’s Nvidia’s moment, too. It’s a nice spot for Jensen Huang and co. to be in.
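
To see what “fine-grained parallel computing maps better to deep learning” means in practice, here’s a minimal sketch of my own (the layer sizes are made up): the workhorse of a neural network is a big matrix multiply, and every output element can be computed independently.

```python
# Toy sketch of why neural-network workloads favor GPUs: the core
# operation is a large matrix multiply, and each output element is an
# independent dot product -- exactly the kind of fine-grained
# parallelism a GPU's thousands of cores are built to exploit.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.standard_normal((1024, 4096))   # a batch of inputs
weights = rng.standard_normal((4096, 4096))       # one dense layer

# 1024 x 4096 outputs, all computable in parallel.
outputs = activations @ weights
print(outputs.shape)  # (1024, 4096)
```

A CPU churns through those dot products a handful at a time; a GPU does thousands at once, which is the architectural bet that paid off.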

Check out this take from Ian King at Bloomberg Tech Daily:

“Intel isn’t even in the race to compete with Nvidia Corp. to produce accelerators to power AI workloads,” King writes. “Tens of billions of dollars that are being spent on data center gear, money that would once have gone to Intel, is heading to a rival that has eclipsed it in sales and market value. In processors for servers and personal computers, Intel has just begun to stabilize market share losses.

“Should someone at Intel have seen that AI shift coming? It’s arguable that only one person in technology did, Jensen Huang of Nvidia. His company spent years preparing new designs, software and a host of other products to put it in the position to take advantage of the huge shift in computing. High-end chip designs take years to go from concept to manufacturing in the millions.”

Interesting…

Lab Science and Simulation Science

I was struck by a statement Jurvetson makes later in the essay about a shift away from the “trial and error” scientific process.

“As Moore’s Law crosses critical thresholds, a formerly lab science of trial and error experimentation becomes a simulation science, and the pace of progress accelerates dramatically, creating opportunities for new entrants in new industries,” he writes. “Consider the autonomous software stack for Tesla and SpaceX, and the impact that is having on the automotive and aerospace sectors.”

Fitting words from a SpaceX board member…

I think the main idea, though, is that with the data and computing power now available, we can build extremely elaborate simulations that replace much of the trial-and-error lab science of the 20th century.

Just yesterday, I wrote about Mike Pritchard at Nvidia giving us insight into digital twinning for the entire Earth. That fits right into this idea: instead of lab science, we’re going to have simulation-based science that enters the hypothesis phase already knowing far more than it used to.
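
To put the “lab science becomes simulation science” idea in concrete terms, here’s a toy Monte Carlo sketch of my own (every number in it is invented): instead of physically stress-testing thousands of parts, you run thousands of virtual trials and read off the statistics.

```python
# Toy Monte Carlo sketch (illustrative only, not from the essay):
# simulate an experiment many times instead of running it in a lab.
import random

def simulate_part(mean_strength=100.0, strength_sd=8.0, load=85.0) -> bool:
    """One virtual 'experiment': does a randomly varying part survive the load?
    All numbers here are made up for illustration."""
    return random.gauss(mean_strength, strength_sd) >= load

def estimated_failure_rate(trials: int = 100_000) -> float:
    failures = sum(not simulate_part() for _ in range(trials))
    return failures / trials

if __name__ == "__main__":
    random.seed(42)
    print(f"Estimated failure rate: {estimated_failure_rate():.2%}")
```

Scale that pattern up by many orders of magnitude in fidelity and compute, and you get something like the simulation-first workflows Jurvetson is describing.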

The Effects on Business

All that is really interesting, but Jurvetson also has a relevant prediction for business.

“Every industry on our planet is going to become an information business,” he said. “Consider agriculture. If you ask a farmer in 20 years’ time about how they compete, it will depend on how they use information — from satellite imagery driving robotic field optimization to the code in their seeds. It will have nothing to do with workmanship or labor. That will eventually percolate through every industry as IT innervates the economy.”

That follows from the above: if we can simulate at this scale from big data, then data is going to be king, even more than it already is.

History Rhymes?

Those are some of the main points I took from the essay, but here’s one more curiosity, this one about names.

When you look at Jurvetson’s chart, there’s an entry in the 1940s for something called Colossus: the machine British engineers used to break German codes generated by the Lorenz cipher.

Now, Musk and xAI are making waves with their own new Colossus data center, which we’re likely to hear a lot more about as the data center wars heat up.

So did the makers of today’s Colossus name their project after the 1940s machine?

We asked ChatGPT. Here’s what we got:

“The Colossus supercomputer, developed by Elon Musk’s company xAI, is a state-of-the-art AI training system featuring 100,000 NVIDIA H100 GPUs, with plans to expand to 200,000 GPUs, including 50,000 of NVIDIA’s next-generation H200 chips. This massive computing infrastructure is designed to train advanced AI models, such as xAI’s Grok series. The name ‘Colossus’ has historical significance, notably associated with the British code-breaking computer developed during World War II to decrypt German communications. Additionally, ‘Colossus’ is the title of a 1970 science fiction film about a supercomputer that gains sentience. Some speculate that xAI’s supercomputer may be named after the film’s computer. However, xAI has not publicly confirmed the specific inspiration for the name ‘Colossus.’ Therefore, while it’s possible that the name references either the wartime computer or the fictional supercomputer, the exact reasoning behind the choice remains unspecified.”

That’s another example of how you can go to AI models to find things out. The Google search is obsolete. A large language model can sift through all of the available information in an instant and, like a wise friend, tell you exactly what’s up. Here, for example, you get the history of both projects and the best available assessment of your question.

That’s all for now: stay tuned for more.
