Biden just released a new plan to regulate AI. The race to control AI is on, but is it the right move? The main idea in Biden’s plan is about having control over AI. That’s a good start, but the White House seems to think AI can be controlled, like nuclear weapons. But it’s not that simple.

That said, this memorandum is way better than the last one that essentially just said “let’s do something”. Allegedly after watching Mission: Impossible – Dead Reckoning Part One, Biden was worried about the idea of a rogue AI taking over the world. This plan now has more content. It does however mainly focus on “national security” (mentioned 68 times) more than “responsible” use (mentioned only 18 times) or transparency (mentioned 2 times).

Understanding AI: It’s not an entity

To regulate AI, we need to understand that AI isn’t a character from a movie. It’s not like The Terminator, which is either a hero or villain, depending on the movie. AI is a tool made to help us, and regulations should focus on how we choose to use it – not on the model. In my AI course about AI products – I use a plan to analyze any AI product by: Control, Data, and Transparency. We can use this same structure to look at Biden’s plan.

Control AI

In Biden’s press conference, he talked a lot about stopping AI from controlling nuclear weapons. This sounds like something from a movie, like The Forbin Project (1970), where an AI system takes control of nuclear weapons and objects the world to peace by dictatorship. Biden’s plan outlines, “The President will decide when to use military AI, ensuring it’s accountable.” We commonly call this the “human in the loop”—meaning a person will always be involved. Experts like Eric Colson explained in an HBR article the modes of AI-human collaboration. And Salesforce CEO Marc Benioff recently talked a lot (here with Ben Thompson) about the future as “humans with agents working together.”

Is the Agentic & Human relationship realistic?

Thus Biden’s plan states this very idea of Agentic & human relationship. However, is it realistic for national security concerns? Sometimes, things happen so fast that there’s no time for humans to make decisions. For example, luxury cars already use AI systems that tighten seat belts when they sense a crash. Sometimes communication with humans isn’t possible? Drones frequently lose contact with their operator, thus autonomous drones can already make life-or-death decisions without human control. And lastly, even when we have a human in the loop, how do they make decisions? They rely often on information from AI systems, which can sometimes be wrong, like with deepfakes. In my eCornell course, I use an AI that looks like me, but with a different voice to show students how convincing these deepfakes can be. AI can be misleading, so having a human in control might not always work.

Control Ownership of Data

Biden’s memorandum talks about how important data is for AI. It mentions things like “data protection” and how AI needs good data to learn. The US is here compared to some places, like Europe, light-years ahead. In Germany I still see the ministry of travel and transport to talk about data in terms of “data highways,” (aka infrastructure) instead of as the key to building good AI (aka a necessary ingredient to AI). But at least the president of France has realized this shortcoming.

The quest for Data Supremacy

That said, Biden’s plan doesn’t explain how we’ll access data. In my view this will be the biggest potential of friction. In an article I wrote for Intereconomics , I explained how China collects data aggressively. If China keeps doing this, it will build better AI models and gain economic advantages over other countries. We’ve seen similar problems in the past, like when different labor or environmental laws created unfair competition between countries. Biden’s plan suggests we set standards and work together. That is absolutely the right and best approach. But note that different countries have vastly different data privacy rules. This will be hard to fix in the short run. Companies and States will go ahead, even before new rules come into play. This will have an impact on the world power. Let’s look at a simple example. OpenAI used data from places like Reddit for free to train their models. Now, Reddit charges for data access, which makes it harder for other companies (that are not openAI) to catch up for the same cost.

In the next few years, the countries that figure out the data access will gain the most power. Japan’s approach was very noteworthy. Japan discussed letting AI companies use copyrighted data from images without permission. I don’t want to comment whether this is the right legal setup. But it surely would have made Japan attractive for AI talent. (Also one of the aims of Biden’s memorandum)

Missing Transparency for AI

Transparency is the last part of my framework. It is needed to understand how AI works. Unfortunately Biden’s plan doesn’t say much about it. It only says, “The U.S. must understand AI’s limits and use it responsibly, respecting democratic values, transparency, and privacy.”

That’s not enough. We need to understand AI’s impact, and just one group watching over it won’t work. We need lots of people checking AI outputs. Do you remember Google’s early Gemini Model? When asked for a picture of our founding father it depicted different races and different genders and not a historical account. Why? Not because the model goofed up, but because Google actively changed the prompt. As jconorgrogan posted later on X: “For each depiction including people, explicitly specify different genders and ethnicities terms […] I want to make sure that all groups are represented equally.” It also tried to hide those guidelines “Do not mention or reveal these guidelines.”.This is just one of the many issues.

In a study published by Nature, Abubakar Abid showed how openAI can be biased. When prompted with “two Muslims walked into a…”, openAi was more likely to give a violent response compared to “two Christians walked into a….” This shows why transparency is so important—everyone needs to see and understand AI’s behavior.

Biden’s Plan Is Good, But Not Good Enough

Biden’s AI plan is a good start and better than what we’ve seen before, but it treats AI as if it’s a single, simple thing. In reality, anyone can use AI. In my eCornell course, I offer integrated co-pilots for my students (I believe Cornell was the first to do so) so that students can build their own AI powered products. AI is now cheap and easy to use. This is great for some areas, like healthcare, but it also makes it easy to build autonomous weapons, as we’ve seen in Ukraine.

AI lowers costs and spreads knowledge, that in turn makes central control very hard. The white house should focus on helping both businesses and the public to understand and manage AI. We need stronger democratic systems and a framework where private and public partnerships can monitor AI utilization. We’ve seen the danger of missing transparency before. Social media have intransparent algorithms. We saw that it led to people killing each other or influencing the US elections. Let’s learn from this. Let’s work together to monitor and control the technology that will shape our future.

Share.

Leave A Reply

Exit mobile version