There’s a new idea making the rounds about how to organize our LLMs and the technologies around them to better leverage their power. It isn’t very mainstream yet, because people are still getting used to the idea of AI in general, but it is popping up in some corners of academia and business.

One of those places is the MIT Media Lab, where researchers continue the work of pioneers like Marvin Minsky and Seymour Papert. It’s actually the lab’s 40th anniversary this year, and what we’re finding is that this research institution is offering some pretty good ideas for how to move forward in the AI era.

One of those ideas is to decentralize AI platforms. It’s something I’ve been hearing about in the context of planning what the next generation of services and deployments will look like. What is decentralized AI, and why is it important?

The Power of Decentralized AI

Some of the best advocates for decentralized AI point out that under the current centralized, monolithic models, a handful of big companies dominate the data that feeds these systems.

One of these analysts is my friend and colleague Ramesh Raskar. In making the case for decentralized AI, he points out that companies are centralizing data, compute, and even governance in concerning ways, and that they are often unwilling to share data constructively.

He cites “distrusting, disconnected and disinterested” parties without a real incentive to work together.

Against that model, proponents of decentralized AI put forward the possibility of a “mixture of experts” design, where the interplay between reinforcement learning (RL) and supervised fine-tuning (SFT) creates a workflow for success. That, some suggest (including in this VentureBeat article), is where DeepSeek came in: using the more brute-force aspect of RL to jump ahead on LLM efficiency while bypassing SFT.
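To make the “mixture of experts” idea a bit more concrete, here is a minimal sketch of the routing step in Python with NumPy. Everything in it (the expert count, the linear “experts”, the top-k gate) is a hypothetical toy, not DeepSeek’s or anyone else’s actual architecture; the point is just that a gating function sends each input to a few specialized sub-models instead of running one monolithic model.

```python
# Minimal mixture-of-experts routing sketch (illustrative only).
# Each "expert" is a tiny linear map, and a softmax gate decides
# how much each selected expert contributes to the final output.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 4
DIM = 8

# Hypothetical expert weights: each expert is a simple linear map.
expert_weights = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
# The gate scores how relevant each expert is for a given input.
gate_weights = rng.normal(size=(DIM, NUM_EXPERTS))

def moe_forward(x: np.ndarray, top_k: int = 2) -> np.ndarray:
    """Route input x to the top_k highest-scoring experts and mix their outputs."""
    scores = x @ gate_weights                   # one score per expert
    top = np.argsort(scores)[-top_k:]           # keep only the best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                    # softmax over the selected experts
    outputs = np.stack([x @ expert_weights[i] for i in top])
    return (weights[:, None] * outputs).sum(axis=0)

print(moe_forward(rng.normal(size=DIM)).shape)  # -> (8,)
```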

But beyond DeepSeek’s model coup, there’s the idea that the companies at the vanguard of innovation are themselves vulnerable. This Grayscale article points out how, just after DeepSeek’s announcement, the company fell victim to a hack, the kind of vulnerability that decentralized AI could potentially address.

Four Pillars of Decentralized AI

When you listen to Ramesh or others talk about the principles of decentralized AI, they point to four overarching goals:

Privacy – how do you keep an individual’s data safe and private?

Incentives – what are the incentives for the various parties to work together?

Verification – how do you verify whether someone is a good actor or not? (See the sketch after this list.)

Dashboard – what kind of interface supports this type of collaboration?
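As a rough illustration of how the verification and incentive pillars might fit together, here is a toy Python sketch. Everything in it is hypothetical, from the AgentRegistry class to the HMAC “signatures” standing in for real public-key cryptography and the crude reputation counter. It only shows the shape of the idea: agents make signed claims, other parties check them, and reputation accrues to agents whose claims verify.

```python
# Toy sketch of the "verification" and "incentives" pillars (hypothetical design).
# Agents register a signing key, sign their claims with HMAC (a stand-in for
# real public-key signatures), and gain or lose reputation based on whether
# their claims check out.
import hmac
import hashlib

class AgentRegistry:
    def __init__(self):
        self.secrets = {}      # agent_id -> signing key (toy stand-in for a public key)
        self.reputation = {}   # agent_id -> score used as a simple incentive signal

    def register(self, agent_id: str, secret: bytes) -> None:
        self.secrets[agent_id] = secret
        self.reputation[agent_id] = 0

    def sign(self, agent_id: str, claim: str) -> str:
        return hmac.new(self.secrets[agent_id], claim.encode(), hashlib.sha256).hexdigest()

    def verify(self, agent_id: str, claim: str, signature: str) -> bool:
        expected = self.sign(agent_id, claim)
        ok = hmac.compare_digest(expected, signature)
        # Incentive: verifiable claims raise reputation; bad ones lower it.
        self.reputation[agent_id] += 1 if ok else -1
        return ok

registry = AgentRegistry()
registry.register("agent-a", b"agent-a-secret")
sig = registry.sign("agent-a", "I completed the task")
print(registry.verify("agent-a", "I completed the task", sig))    # True
print(registry.verify("agent-a", "I completed the task", "bad"))  # False
print(registry.reputation["agent-a"])                             # 0 after one good, one bad claim
```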

I also came across an interesting analogy for decentralized AI that draws on the development of the Internet. “Web one” was the series of websites and pages that compose the global Internet. “Web two” was social media. “Web three” is the blockchain, a truly decentralized system in which nodes and components interact with sovereignty.

What else goes into decentralized design?

Decentralization and the History of AI Theory

Another way to think about this is to go back to the old days when people were first theorizing about the concept of AI itself.

For example, the idea of “decentralized AI” echoes some of the work of Marvin Minsky himself in his book, “The Society of Mind.” In it, he suggests that the best AI system should function like the human brain, which is essentially not one computer, but many interconnected computers working together.

As for risks, those preparing for decentralized systems point out that some of these constructions can be vulnerable to crashes, or to hostile takeovers in the case of a 51% attack, where a single coalition gains control of more than half of a network’s voting power and can dictate its decisions.
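To see why the 51% threshold matters, here is a tiny Python sketch of a naive majority-vote consensus. The numbers and the voting rule are made up for illustration; the point is simply that once a colluding group holds more than half of the voting power, it decides every outcome, no matter what the honest nodes do.

```python
# Tiny illustration of why a 51% attack works against naive majority voting.
# The shares and the voting rule here are made up purely for illustration.
def winning_history(honest_power: float, colluding_power: float) -> str:
    """Whichever side holds the larger share of voting power decides the record."""
    return "legitimate history" if honest_power > colluding_power else "attacker's history"

for attacker_share in (0.30, 0.49, 0.51):
    result = winning_history(1.0 - attacker_share, attacker_share)
    print(f"attacker controls {attacker_share:.0%} of voting power -> {result}")
```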

On the other hand, they can speed up buy-in from many parties, provided the system is smart enough to vet agents and make sure that they’re trustworthy.

And that is, in large part, what it comes down to: trust. We might each have our own individual AI agents acting on our behalf, or we might just have large networks of real-life non-player characters, so to speak. We have to have some framework for establishing trust, and the rest will fall into place.

“There are billions of agents. They all want to talk to each other, but they don’t trust each other. They don’t even know where they are. They don’t even know what they do. Like if I meet you today….if I meet somebody else, I don’t know … their name, I don’t know their expertise and how to figure out whether we should collaborate or not, right?” – Ramesh Raskar

Think about the possibility of decentralized AI systems for everything we’re designing now, from recommendation engines to autonomous driving, to insurance and loan systems, to AI planners for smart cities. With artificial intelligence, we’re operating at a bird’s-eye view, understanding the full picture and all of the details at once. It’s a powerful thing, but it has to be harnessed and deployed correctly, which is why a lot of these experts have one more suggestion:

It’s that we need to deploy our pioneering systems in low-stakes environments first, keep working on them until we can trust them, and only then put them into mission-critical systems. Let’s remember that as we move forward.
