How can you regulate something if you can’t even define it?

California is on the brink of enacting an AI-regulation bill. But even as SB 1047 sits on Governor Gavin Newsom’s desk awaiting his signature or veto, it fails to clearly define the very thing it regulates: artificial intelligence. No law can be effective if it addresses only an amorphous idea, and this one is no exception.

Controversy abounds around this bill, but mostly for other reasons. Industry leader Andrew Ng argues that the bill “makes the fundamental mistake of regulating a general purpose technology rather than applications of that technology.” Former Facebook chief privacy officer Chris Kelly points out that it is both “too broad and too narrow in a number of different ways.” Stability AI’s Ben Brooks writes that the bill includes provisions that “pose a serious threat to open innovation.”

But there’s an even more fundamental issue: The bill doesn’t meaningfully define what it regulates. Its definition of AI is so vague that it could apply to almost any computer program.

Here’s how the bill defines AI: “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”

Let’s break down why that definition is vacuous:

  • “Varies in its level of autonomy.” That is true of any system or machine. Its degree of autonomy is determined by how it’s used: the degree to which humans let it operate on its own.
  • “Can… infer from the input it receives how to generate outputs.” That’s what every computer program does: It takes input and generates outputs.
  • Its outputs “can influence physical or virtual environments.” The influence of a computer program’s output is determined by how it’s used. Do the outputs control a thermostat, for example? That’s up to humans, as the sketch below illustrates.
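
To make the point concrete, here is a minimal, hypothetical sketch of a program that satisfies every clause of that definition while containing no machine learning at all. The function name, temperature threshold, and rule are invented for illustration; nothing here is drawn from the bill.

```python
# A deliberately trivial program that nonetheless satisfies every clause of
# the bill's definition of AI. Nothing here is machine learning; it is a
# single hard-coded rule. (Illustrative sketch; names and threshold are invented.)

def thermostat_controller(current_temp_f: float, target_temp_f: float = 68.0) -> str:
    """Generate an output from the input it receives."""
    # "Infers from the input it receives how to generate outputs":
    # a single comparison qualifies, since the statute never says how
    # sophisticated the inference must be.
    return "HEATER_ON" if current_temp_f < target_temp_f else "HEATER_OFF"

if __name__ == "__main__":
    command = thermostat_controller(63.5)
    # "Outputs that can influence physical or virtual environments":
    # whether this string is shown to a person or wired to a relay
    # (i.e., its "level of autonomy") is decided by how humans deploy it.
    print(command)  # -> HEATER_ON
```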

The bill does get a little more specific, but to no avail. It narrows the scope of what it regulates to “AI models” that fulfill certain criteria—but the result is still amorphous. Without clarification, the word “model” is a catch-all that could refer to almost any formalism or program. Moreover, the criteria it establishes relate not to any AI-specific quality but come down to a simple quantity: the number of calculations performed when developing the model. As a result, the bill could arguably pertain to many programs unrelated to AI, such as one that breaks encryption by performing vast numbers of calculations, as sketched below. It might even pertain to a program that, due to a programming error, inadvertently performs a huge number of calculations.
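
To illustrate how technology-agnostic an operation-count criterion is, here is a hypothetical sketch of a brute-force hash search that racks up an arbitrarily large number of calculations without involving any learning or any “model.” The names and the counting scheme are invented for illustration and are not taken from the bill’s text.

```python
# Illustrative sketch: an operation-count criterion says nothing about
# whether a program is "AI." This brute-force search performs an enormous
# number of calculations yet involves no learning whatsoever.
# (Function names and the counting scheme are invented for illustration.)

import hashlib
from itertools import product
from string import ascii_lowercase

def brute_force_sha256(target_hex: str, max_len: int = 4) -> str | None:
    """Try every lowercase candidate up to max_len; return the preimage if found."""
    ops = 0
    for length in range(1, max_len + 1):
        for chars in product(ascii_lowercase, repeat=length):
            candidate = "".join(chars)
            ops += 1  # tally one "calculation" per hash evaluation
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hex:
                print(f"found after {ops} operations")
                return candidate
    print(f"exhausted search after {ops} operations")
    return None

if __name__ == "__main__":
    target = hashlib.sha256(b"key").hexdigest()
    print(brute_force_sha256(target))  # -> key
```

The tally grows without bound as max_len increases, yet the count alone reveals nothing about whether the program has anything to do with AI.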

Regulate On Well-Defined Technology, Not “AI”

This lack of clear focus could render the bill inapplicable where it is most needed, yet abusively applied where there are corrupting influences. I’m not a legal scholar, but I can see the catch-all defense against almost any proceeding coming: “Since this law applies so broadly, to almost any computer program that could have indirectly ‘enabled harm,’ it is capricious to apply it to my program in particular.” At the same time, such a law might be subject to abuse, since it so readily lends itself to selective prosecution across many non-AI systems.

The European Commission has run into the same issue. It also announced legislative measures that define AI very broadly: as a combination of data and algorithms. Members of the Dutch Alliance on AI retorted, “This definition… applies to any piece of software ever written, not just AI.”

The fundamental problem is that “AI” is an amorphous buzzword that defies definition and intrinsically overpromises. Whatever comfort there may be with “AI” being subjectively defined, it should not extend to legislation. Public discourse often conveys that AI is hard to define but you’ll know it when you see it. That position muddles the regulation and legislation of machine learning and other technologies.

We should never try to regulate on “AI.” Technology genuinely needs regulation in certain arenas, for example to address bias in algorithmic decision-making and to oversee the development of autonomous weapons, which often use machine learning. Clarity is critical in these efforts, and using the imprecise term “AI” gravely undermines the effectiveness and credibility of any initiative to regulate technology. Regulation is already hard enough without muddying the waters.
