There’s a lot of talk about what’s happening over at Anthropic, where Claude 3.7 Sonnet is drawing waves of new user activity and showing off the latest iteration from this leading tech company. Last week I wrote about Ethan Mollick’s response to the model on his blog, along with other coverage.

But there’s also a more direct source of information: Dario Amodei went on Hard Fork this week to talk to the fellows (Kevin Roose and Casey Newton) about where things are at, and the context of new developments with Claude.

“There are these reasoning models out there, that have been out there for a few months, and we wanted to make one of our own, but we wanted the focus to be a little bit different,” Amodei explained.

“In particular, a lot of the other reasoning models are trained primarily on math and competition coding, which are objective tasks where you can measure performance. We trained 3.7 to focus more on real world tasks.”

He also addressed the development of Claude 3.7 Sonnet as a hybrid model.

“It’s generally been that there’s a regular model and then there’s a reasoning model,” he said. “This would be like if a human had two brains, and you can talk to brain number one to ask a quick question, like what’s your name, and you’re talking to brain number two if you’re asking to prove a mathematical theorem.”

Amodei also talked about future models being able to decide for themselves how much inference to apply, or to work within an imposed bound on thinking. Web search, he said, is coming very soon. When pressed to specify a timeline, there was a funny moment in the podcast where Amodei would commit only to a “small number of time units,” to the hosts’ delight.

Future Dangers of AI

Amodei spoke a bit about safety and the context of new technologies.

“I feel like there’s this constant conflation of present dangers with future dangers,” he said. “It’s not that there aren’t present dangers (but) I’m more worried about the dangers that we’re going to see as models become more powerful.”

He said that when he testified to the Senate, he was thinking about things like biological or chemical warfare, and the risks of misuse.

Trials, he said, can help by testing models for vulnerabilities.

“It means that a new risk exists in the world,” he said of AI’s advent. “A new threat vector exists in the world.”

Calling for additional security measures and additional deployment measures, Amodei noted that the stakes are high.

Assistive AI

In terms of utilities, Amodei talked about personal and business use of these new systems.

“The best assistant for me might not be the best assistant for some other person,” he said. “I think one area where the models will be good enough is if you’re trying to use this as a replacement for Google search or quick information retrieval.”

The DeepSeek Bombshell

In going over some current events, Amodei addressed the recent announcement from DeepSeek that seemed to have U.S. companies so worried.

“I worry less about DeepSeek from a commercial competition perspective,” he said. “I worry more about them from a national competition and national security perspective.”

He doesn’t want autocracies to gain an edge in AI over representative democracies.

“I want to make sure that liberal democracies have enough leverage and enough advantage in technology that they can prevent certain abuses from happening, and prevent adversaries from putting us in a bad position with respect to the rest of the world,” he said.

Opportunities with AI

“I’m a fan of seizing the opportunities,” Amodei said, mentioning his writing of Machines of Loving Grace, a prominent essay on new tech. “For someone who worries about risks, I feel like I have a better vision of the benefits than a lot of people who spend all their time talking about the benefits. In the background, like I said, as the models have gotten more powerful, the amazing and wondrous things that we can do with them have increased, but also the risks have increased.”

He noted a zeitgeist in which AI looms large.

“You talk to folks who live in San Francisco, and there’s this bone deep feeling that within a year or two years, we’re just going to be living in a world that has been transformed by AI,” he said.

There’s a lot more in the podcast – that’s some of what I felt was most relevant to our analysis of AI systems as 2025 continues apace. It’s always good to check in with the business leaders closest to the process to get clues about what’s coming next.
