It’s another trading day for U.S. tech stocks as they respond to the recent announcement from Chinese firm DeepSeek that its new model offers uniquely cost-effective reasoning capabilities. Several of these American technology stocks dropped on Monday, so how are they doing now?

From a cursory review, it looks like most of these companies have already recouped their losses. Microsoft and Meta are both back at roughly their five-day levels.

On the other hand, Nvidia languishes around $122 per share, compared with over $140 per share prior to the DeepSeek announcement. So while a spate of articles on the Internet says Nvidia has already rebounded, the math doesn’t bear that out: a rise from $117 to $122 recovers only a small fraction of the drop.
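For anyone who wants to check that claim, here is a quick back-of-the-envelope calculation in Python, using the approximate share prices cited above (round figures, not live market quotes):

```python
# Rough check on the Nvidia "rebound," using the approximate prices mentioned
# in the article: ~$140 before the DeepSeek news, ~$117 after the sell-off,
# ~$122 now. These are illustrative round numbers, not live market data.

pre_announcement = 140.0   # approximate price before the announcement
post_drop = 117.0          # approximate low after the sell-off
current = 122.0            # approximate price now

drop_pct = (post_drop - pre_announcement) / pre_announcement * 100
recovered_share_of_loss = (current - post_drop) / (pre_announcement - post_drop) * 100
still_below_pct = (current - pre_announcement) / pre_announcement * 100

print(f"Initial drop: {drop_pct:.1f}%")                          # about -16%
print(f"Share of the loss recovered: {recovered_share_of_loss:.1f}%")  # about 22%
print(f"Still below the pre-announcement price: {still_below_pct:.1f}%")  # about -13%
```

In other words, the stock has clawed back only about a fifth of what it lost and still sits roughly 13% below where it started.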

Commentary on Emerging Trends

Amid the dismay of U.S. investors who feel that China is reasserting its dominance, some experts are suggesting that this isn’t so much about geopolitics as it is about the appeal of open source models. Meta already had a leading open source model prior to DeepSeek’s announcement, and that’s one element of this. The DeepSeek case shows how open source models can come to dominate.

Capable and Efficient Models

Here’s the part people should know about if they’re thinking about how to re-engineer LLMs to be more capable while using fewer resources.

The whole point of the DeepSeek announcement was that its model accomplished more with less computing firepower.

But that’s not the only example of this kind of innovation.

For example, I have spoken with researchers at MIT CSAIL who are figuring out how to run systems on a much leaner footprint.

There are a lot of aspects to this: teams are figuring out how to apply quantization and perform input compression. They’re working on lean-resource models and building digital twins. They’re doing all kinds of things to re-engineer systems.
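To make the quantization piece concrete, here is a minimal sketch of post-training weight quantization in Python. The toy layer and the simple symmetric int8 scheme are illustrative assumptions, not any particular lab’s actual method:

```python
# Minimal sketch of post-training weight quantization: squeeze float32 weights
# into int8 with a single symmetric, per-tensor scale. Production systems
# (per-channel scales, 4-bit formats, etc.) are considerably more elaborate.

import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 values plus a scale factor."""
    scale = np.abs(weights).max() / 127.0          # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for inference."""
    return q.astype(np.float32) * scale

# Toy weight matrix standing in for one layer of a model.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)

print("Memory per weight: 4 bytes -> 1 byte (4x smaller)")
print("Max reconstruction error:", np.abs(w - dequantize(q, scale)).max())
```

The point of the exercise is the trade: each weight shrinks from four bytes to one, at the cost of a small, bounded reconstruction error.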

Long-Term Competitive Outlook

So let’s take a page from object-oriented programming and instantiate a class encompassing all of this, called “New World” models.

New world models will give other models a run for their money. They will emerge as alternatives to traditional backpropagation-trained neural networks and the kinds of techniques we saw in the prior decade. They will let engineers build with leaner resources and get better results.

One use case that people usually talk about with these more efficient models is endpoint services. Putting a robust LLM on an endpoint device like a smartphone generally means developing a more efficient system, because otherwise, you would need too much hardware to run whatever you’re trying to build.
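As a rough illustration of why that is, here is a sketch of the weight-memory arithmetic behind on-device deployment. The 7B and 70B parameter counts are hypothetical examples, not any vendor’s actual specs:

```python
# Rough, illustrative estimate of why on-device LLMs push toward efficiency:
# weight memory scales with parameter count times bytes per parameter.
# The model sizes below are hypothetical examples.

def weight_memory_gb(n_params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the weights, in gigabytes."""
    return n_params_billions * 1e9 * bytes_per_param / 1e9

for n_params in (7, 70):                          # hypothetical 7B and 70B models
    for bytes_per_param, label in ((2, "fp16"), (0.5, "4-bit")):
        gb = weight_memory_gb(n_params, bytes_per_param)
        print(f"{n_params}B params at {label}: ~{gb:.1f} GB of weights")

# A 70B model at fp16 (~140 GB) is far beyond a smartphone's memory, while a
# 7B model quantized to 4 bits (~3.5 GB) starts to look plausible on-device.
```

That gap between roughly 140 GB and roughly 3.5 GB is the whole argument for efficiency work on endpoint devices.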

So the new world models will bring this functionality to the market, and potentially disrupt it quite a bit, unseating the prior kings of the mountain, whether that’s Nvidia or anyone else.
