One half of the Amodei duo is making headlines this week talking about progress toward AGI, and it’s not the sibling who usually gets the attention in tech headlines.
It’s Daniela Amodei, the sister, who is now suggesting that we may already have artificial general intelligence working among us.
“AGI is such a funny term because many years ago it was kind of a useful concept to say: when will artificial intelligence be as capable as a human,” Amodei reportedly said in an interview during the early days of 2026. “And what’s interesting is, by some definitions of that, we’ve already surpassed that.”
That, in turn, has led plenty of people to do a double-take. What? Already?
Tech companies, in general, are throwing these terms around like confetti. The Singularity has already been invoked plenty this year, including by none other than Elon Musk himself, and the agentic approach is rewriting how we think about work, with actual AI agents playing real roles in the average organization.
But AGI?
Amodei’s Background
Daniela Amodei, who started Anthropic with her brother Dario just a few years back, brings experience from Stripe and from various AI safety teams to the human-centered business she now leads as president. It’s interesting to note that the two chose the word “anthropic” to name their business: human-centered, and interested in the welfare of the human race.
Both Daniela and Dario are well-known for unpacking the vagaries of the term AGI, noting, as in this Medium article, that we as people tend to move the goalposts. Tasmia Sharmin, writing Jan. 6, articulates this well:
“Every time AI achieves something we thought required human-level intelligence, we decide that thing doesn’t actually count,” Sharmin notes. “Chess? Turns out that’s just brute-force calculation, not real intelligence. Go? Pattern matching, not true reasoning. Writing essays? Autocomplete on steroids. Coding? Well, it can’t do EVERYTHING humans can do, so clearly not AGI. … The definition of AGI has become ‘whatever AI can’t do yet.’ The moment AI achieves it, we retroactively decide it doesn’t count as general intelligence.”
That being the case, and given the extremely rapid advancement of these LLMs, how do we work with AI and not against it, or, perhaps more accurately, against ourselves?
Enter Daniela, who wants to square that circle. In a 2023 interview, she talked about the goal of keeping AI “helpful, honest, and harmless” and what’s at the heart of this effort.
“I really view my job as helping to take that vision that Dario and other technical leadership have, and help to actually translate it into sets of operating norms,” she told Fast Company’s Mark Sullivan at the time. “How the researchers work together, how we build things into a product and how we translate that into a business.”
Keeping it Real
With all of that as preamble, we now have a situation where we as a human community have to define AGI, define the Singularity, and then work within that framework to try to advance the benefits of AI, not just AI itself (if that makes sense). In that context, we need real talk, not hype.
“Daniela’s admission that ‘we don’t know’ if current approaches will keep working is refreshingly honest,” Sharmin writes in the same reporting. “Her brother Dario helped create the scaling laws driving the industry. Now, both siblings are betting on efficiency over pure scale. And they’re admitting uncertainty about whether any approach will reach transformative AI. That uncertainty matters more than the semantic debate about AGI. If the people building the most advanced AI systems don’t know if their approaches will keep working, everyone else projecting confidence about timelines should probably reconsider.”
Touché.
“We don’t know” is going to become a very important phrase as we ponder a world where AI, in some ways, competes with humanity. This is not just a John Henry story of man versus machine, as some would like to frame it. This is a big inflection point.
Figuring Things Out
The ambiguity and abstraction notwithstanding, some believe the concept of AGI can be boiled down to something straightforward.
Check out this line of thinking from Pat Grady and Sonya Huang at Sequoia, writing under the title “2026: This is AGI”:
“A human who can figure things out has some baseline knowledge, the ability to reason over that knowledge, and the ability to iterate their way to the answer. … An AI that can figure things out has some baseline knowledge (pre-training), the ability to reason over that knowledge (inference-time compute), and the ability to iterate its way to the answer (long-horizon agents).”
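To make that mapping concrete, here is a minimal, purely hypothetical sketch in Python of the three-part loop Grady and Huang describe: a stand-in knowledge store (pre-training), a stub reasoning step (inference-time compute), and an iterate-until-it-holds loop (long-horizon agents). The function names and logic here are illustrative placeholders, not a description of any real system from Sequoia or Anthropic.

```python
# Hypothetical sketch only: mapping the Sequoia framing onto a toy loop.
# "knowledge" stands in for pre-training, reason() for inference-time compute,
# and the while-loop for a long-horizon agent iterating toward an answer.

def reason(question: str, knowledge: dict) -> str:
    """Stub for inference-time compute: answer from what the system 'knows'."""
    return knowledge.get(question, "unsure")

def figure_it_out(question: str, knowledge: dict, max_steps: int = 3) -> str:
    """Toy long-horizon loop: try, check, and retry until an answer holds up."""
    answer = reason(question, knowledge)
    steps = 0
    while answer == "unsure" and steps < max_steps:
        # A real agent would gather new information or call tools here;
        # this placeholder simply simulates acquiring the missing fact.
        knowledge[question] = "a worked-out answer"
        answer = reason(question, knowledge)
        steps += 1
    return answer

if __name__ == "__main__":
    print(figure_it_out("What is AGI?", {}))  # prints "a worked-out answer"
```

Whatever the implementation details, the framing itself is the point: knowledge, reasoning over it, and iteration, assembled into a loop that can keep working a problem.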
You can break the idea down into components this way, or, like Daniela Amodei, you can say that the technology has met the AGI standard for some things, but not for others.
Either way, it’s clear that we have to take this discussion seriously.
Stay tuned.