What if you could just run to the supply room, and Xerox an entire firm? What would that look like?
Well, it might be expensive. But probably not as expensive as humans.
Dwarkesh Patel gives us an idea in a new collaborative essay, published Jan. 31, about the potential of all-AI companies.
Suggesting that “everyone is sleeping on the collective advantages AI will have” and “seriously underestimating how different the world will look,” Patel lays out some theories about how this would work.
Replication Power Changes Everything
“Currently, firms are extremely bottlenecked in hiring and training talent,” Patel writes. “But if your talent is an AI, you can copy it a stupid number of times. What if Google had a million AI software engineers? … This ability to turn capital into compute and compute into equivalents of your top talent is a fundamental transformation. Since you can amortize the training cost across thousands of copies, you could sensibly give these AIs ever-deeper expertise – PhDs in every relevant field, decades of business case studies, intimate knowledge of every system and codebase the company relies on.”
Going on this premise, the author spins out predictions of high functionality: for example, that whole human teams can be replicated for projects, that AI copying will transform both management and labor, and that the close cooperation among these non-human players will astound us. Forecasting “no miscommunication, ever again,” Patel focuses throughout on the idea that seamless transitions will unlock more benefit than we think.
Knowledge Transition in the Post-Human Era
Here’s one central component of Patel’s essay on AI firms. The author contrasts how knowledge is replicated and transmitted in human firms with how it would work in theoretical AI corporations:
“Humanity’s great advantage has been social learning – our ability to pass knowledge across generations and build upon it,” he writes. “But human social learning has a terrible handicap: biological brains don’t allow information to be copy-pasted. So you need to spend years (and in many cases decades) teaching people what they need to know in order to do their job. Look at how top achievers in field after field are getting older and older, maybe because it takes longer to reach the frontier of accumulated knowledge. Or consider how clustering talent in cities and top firms produces such outsized benefits, simply because it enables slightly better knowledge flow between smart people.”
Supposing that innovation increases with population size, Patel invites us to consider how armies of AI agents will work pretty much in lockstep.
Under the Hood: How it Works
So how does AI accomplish this kind of internal mind-meld that’s going to allow collaboration, not just data, to fly at the speed of light around the world?
Patel mentions the practice of speculative decoding, so I looked that up.
ChatGPT defines it capably:
“Speculative decoding is a technique to speed up text generation from large language models (LLMs) by using a smaller “helper” model to propose multiple tokens at once, which the larger model then quickly checks or “verifies.” In simpler terms, it is a way to reduce the number of calls to an expensive (large) model, without significantly changing the quality or distribution of the generated text.”
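To make that mechanism concrete, here is a minimal toy sketch of the propose-and-verify loop in Python. The “models” are trivial stand-in functions, and the names (draft_next, target_verify, and so on) are invented for illustration; nothing here comes from Patel’s essay, and real systems verify the draft probabilistically inside one batched forward pass of the large model rather than with the simple greedy check shown below.

```python
# Toy sketch of greedy speculative decoding with stand-in "models".
# Tokens are just integers here; real systems use large transformers.

def target_next(tokens):
    """Big, expensive model's greedy choice: toy rule, increment the last token."""
    return tokens[-1] + 1

def draft_next(tokens):
    """Small helper model: usually agrees with the big model, but slips
    whenever the correct next token is a multiple of 5."""
    t = tokens[-1] + 1
    return t + 1 if t % 5 == 0 else t

def target_verify(tokens, proposal):
    """One (simulated) big-model pass: what the big model itself would emit
    at every position of the proposed continuation."""
    return [target_next(tokens + proposal[:i]) for i in range(len(proposal))]

def speculative_decode(prompt, max_new_tokens=10, k=4):
    tokens = list(prompt)
    big_passes = 0
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1. Cheap pass: the draft model proposes k tokens ahead.
        proposal = []
        for _ in range(k):
            proposal.append(draft_next(tokens + proposal))

        # 2. One big-model pass checks the whole draft at once.
        expected = target_verify(tokens, proposal)
        big_passes += 1

        # 3. Keep the longest matching prefix, then take the big model's
        #    own token at the first mismatch, so quality tracks the big model.
        accepted = []
        for guess, truth in zip(proposal, expected):
            if guess == truth:
                accepted.append(guess)   # token accepted "for free"
            else:
                accepted.append(truth)   # correct the mismatch and stop
                break
        tokens.extend(accepted)

    return tokens[:len(prompt) + max_new_tokens], big_passes

out, passes = speculative_decode([1])
print("tokens:", out, "| big-model passes:", passes)
```

In this toy run, ten new tokens come out of four big-model passes instead of ten, which is the whole point: most calls to the expensive model are replaced by cheap drafts while its output stays the final word.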
You can also invoke phrases like ensemble learning or distillation, but in the end, it boils down to the same thing: robots working together, whether cognitively, physically, or both.
What’s Valuable? And a Monolithic View of Companies
Elsewhere, the essay veers into a look at Randian, winner-take-all territory.
Patel suggests that, given these changes, one role in particular will remain valuable to companies. That’s right: it’s the CEO.
“So what becomes expensive in this world? Roles which justify massive amounts of test-time compute,” he writes. “The CEO function is perhaps the clearest example. Would it be worth it for Google to spend $100 billion annually on inference compute for mega-Sundar? Sure! Just consider what this buys you: millions of subjective hours of strategic planning, Monte Carlo simulations of different five-year trajectories, deep analysis of every line of code and technical system, and exhaustive scenario planning.
“Imagine mega-Sundar contemplating: ‘How would the FTC respond if we acquired eBay to challenge Amazon? Let me simulate the next three years of market dynamics… Ah, I see the likely outcome. I have five minutes of datacenter time left – let me evaluate 1,000 alternative strategies.’”
Later, Patel quotes Gwern Branwen on why corporations have not managed to clone and proliferate themselves:
“Why do we not see exceptional corporations clone themselves and take over all market segments? Why don’t corporations evolve such that all corporations or businesses are now the hyper-efficient descendants of a single ur-corporation 50 years ago, all other corporations having gone extinct in bankruptcy or been acquired? Why is it so hard for corporations to keep their “culture” intact and retain their youthful lean efficiency, or, if avoiding “aging” is impossible, why [not] copy themselves or otherwise reproduce to create new corporations like themselves? Corporations certainly undergo selection for kinds of fitness, and do vary a lot. The problem seems to be that corporations cannot replicate themselves … Corporations are made of people, not interchangeable, easily copied widgets or strands of DNA.”
In the same vein, Patel quotes von Neumann: “All stable processes we shall predict. All unstable processes we shall control.” And then asks this set of questions:
“So then the question becomes: If you can create [AI agents] for any task you need, why would you ever pay some markup for another firm when you can just replicate them internally instead? Why would there even be other firms? Would the first firm that figures out how to automate everything just form a conglomerate that takes over the entire economy?”
Companies, he writes, exist to reduce transaction costs. Precise, detail-oriented success relies on being grounded in an outer loss function. Those are thin moats, though, for human relevance. The reality is that we all need to take a closer look at what is going to happen to business.