Sometimes in our frenzied discussion about AI, the idea of capable regulation and policy governance tends to get lost in the mix. It shouldn't, because government will have a fundamental role in how we move forward.

If you want a better idea of how this will work, look no further than the remarks Dr. Alondra Nelson delivered as she received an award from the Boston Global Forum's AI World Society Initiative on April 30.

Nelson is a former Deputy Assistant to President Joe Biden and was Acting Director of the White House Office of Science and Technology Policy. She is also a professor at the Institute for Advanced Study. In her keynote address, she made instructive remarks about what we can expect as government wrestles with this enormous technological change.

One of the most fascinating parts of her talk came as she shared insights into developing an 'AI Bill of Rights' and how that work has played out across the country, and, in related efforts, around the world, as societies anticipate the impact of the AI revolution.

At the federal level, she said, the effort started with an op-ed in Wired magazine inviting feedback from the American public.

“We read those emails and engaged with the public in that way,” she said, adding that roundtables and listening sessions were also part of the effort.

This program, she said, took place in October 2022, before ChatGPT came out, and it went on to influence various state governments.

“There’s been a sort of flowering that’s been fascinating,” she said, citing efforts in California, Connecticut and Oklahoma, and detailing cases of how this has worked out for states.

“The impetus here was that the U.S. has had to reinterpret (civil and human) rights over its entire trajectory,” she said. “We created powerful government, and powerful government needs powerful checks. If we think of AI as a powerful tool, how do we want to think about checks on that? What do we want it to look like at its best, and how do we want to prevent it from operating at its worst?”

To that end, she presented five core principles as goals for AI:

Keep AI safe and effective

Prevent algorithmic discrimination

Ensure data privacy

Give notice and explanation

Provide an alternative or human fallback

Putting these in context, she said it is important for people affected by critical decisions, in areas like employment or housing, to be able to see how those decisions were made.

She also cited the writers' strike as an example of humans fighting back against AI's encroachment on their livelihoods.

“They understood it at the level (of) how it mattered for their lives,” she said, citing the use of a person’s voice and images as a hot-button issue in the AI age.

Moving back to policy innovation, Nelson talked about how daunting it can be to try to move forward when so many stakeholders push back, citing obstacles like a lack of money or votes.

“How do you get to ‘yes’?” she asked. “How do you get anything done when you have a Congress that doesn’t work?”

Pushing through that lack of buy-in, she asserted, is nonetheless critically important.

“Policymaking is creating the art of the possible,” Nelson said.

AI is doubly hard in some ways, Nelson explained, because it is so new and confusing to everyone. She described an inherent aversion among legislators and others to dealing with AI at any level.

“We don’t understand it, it’s too complicated,” she said. “We don’t want to touch it.”

Importantly, Nelson outlined three principles for getting past these challenges:

The first is to return to first principles. Suggesting we shouldn't "freak out" about the technology but instead focus on the public good, she pointed out the value of framing unknowns in the context of our shared social contracts.

The second principle she mentioned is that existing laws, rules and standards still apply.

“If you discriminate with AI, it’s still discrimination,” she said. “If you commit fraud with AI, it’s still fraud.”

Third, she suggested, new laws, norms and standards may be needed.

“You have a right to science,” she said. “You have the right to participate.”

In terms of initiatives, Nelson mentioned the UN advisory body's interim report, Governing AI for Humanity, released December 18, and the CHIPS and Science Act, signed at the White House on August 9 of last year. She talked about provisions like job training for veterans and mothers, and childcare for parents, and she stressed that governance doesn't have to mean stifling innovation.

This idea was at the core of much of Nelson's remarks as she went over the challenges of raising AI awareness at all levels, applying new technologies to the public good, and moving forward with good-faith efforts to improve people's quality of life.

“Safety and innovation are not mutually exclusive,” she said, in conclusion, to applause. “If we think about the (AI) landscape, we have cause for some optimism, because we see that we have already had some success stewarding the outcomes we want, and that this can be done with prudent policy and an increasingly empowered public.”
