Dave Link is CEO and cofounder of ScienceLogic.
As artificial intelligence (AI) continues to gain momentum, regulation has struggled to keep up. More than 100 bills pertaining to AI have been introduced in Congress, evidence that American lawmakers want to move AI policy forward this year. Yet we remain behind the curve.
For comparison’s sake, the European Union’s AI Act entered into force in August 2024. In the U.S., the bipartisan AI roadmap unveiled in May 2024 proved a dead end despite its cross-party support, for two reasons:
1. It lacked basic protections for data copyright, usage and privacy.
2. It called for $32 billion in spending that the government never allocated.
In September 2024, the House Committee on Science, Space and Technology passed nine bipartisan bills to bolster U.S. leadership in AI, covering everything from shared research infrastructure to the authorization of a safety institute. Yet, according to Rep. Zoe Lofgren of California, the legislation still “significantly underfunds the activities” in question.
With federal guidance lagging, many states are introducing laws of their own. This approach, however, often creates only the illusion of progress. Ineffective regulations stand to be worse than no regulations at all: a patchwork of rules could stifle innovation and stall the technology’s momentum.
A Smorgasbord Of State Regulations
In January 2024, Gov. Glenn Youngkin of Virginia, my home state, issued an executive order on AI calling for the development of policy standards for AI implementation in agencies, disclaimers for AI-generated outcomes and more. However, Colorado was the first state to actually roll out comprehensive AI regulations to fill the federal regulatory void. The law, which goes into effect in 2026, regulates the use of AI in consequential consumer decisions such as jobs, housing and lending.
A law was also introduced in California, home to leading AI companies like Google, Meta and OpenAI. While the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) would have regulated the development of AI by requiring developers and companies to test their models, it offered no guidelines for the use of AI, which is equally important.
That wasn’t why Gov. Gavin Newsom vetoed the bill, though. Instead, he said it could potentially curtail “the very innovation that fuels advancement in favor of the public good.” Indeed, state-by-state regulations and executive orders could create an uneven playing field for companies to innovate with AI and, once again, may hinder technological progress. While progress must be balanced with the proper oversight, sweeping regulations at the federal level will be far more effective than sitting back as individual states pass a smorgasbord of regulations, each with its own fine print.
What Federal Regulations Need To Succeed
Federal regulations are the best way to ensure AI has proper oversight without hindering its main value proposition: continued innovation. For the next wave of proposed regulations to succeed, they must address both how AI is made and how it is used, with transparency as the foundation. That means they must require AI models to disclose all data sources, require consent and compensation for the use of private data and copyrighted information, and require permission and anonymization for all personally identifiable information (PII).
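To make the anonymization requirement concrete, here is a minimal sketch of what pre-training PII redaction might look like, assuming a simple regex-based approach. The patterns and the redact_pii helper are illustrative only, not a production-grade scrubber:

```python
import re

# Illustrative patterns only; real PII detection needs far broader
# coverage (names, addresses, account numbers) and typically pairs
# pattern matching with a named-entity-recognition model.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with typed placeholders before the text
    enters a training corpus."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Reach Jane at jane.doe@example.com or 555-867-5309."))
# Reach Jane at [EMAIL] or [PHONE].
```

Note that the name “Jane” survives redaction in this sketch, which is exactly why pattern matching alone is not enough and why regulation should mandate the outcome (anonymized PII) rather than any single technique.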
While regulating AI will inherently be a learning process given the technology’s novelty, privacy and transparency are non-negotiable. Unfortunately, this is where current regulations fall short. Many models remain black boxes, and the public is in the dark about how their data is being used. Once again, addressing this regulatory void will not restrict innovation; it will fuel it. Making AI accountable for, and trusted with, varying kinds of data, including PII and IP, will help dispel skepticism and improve public trust. Some models can now flag when outputs are derived from copyrighted data, in turn preventing the exploitation of IP. This is the type of oversight that should be mandated federally.
Implementing AI Governance Today
In the absence of sufficient federal regulation, enterprises have little choice but to take AI governance into their own hands, and doing so presents challenges of its own. In addition to the lack of standardized frameworks, there are often discrepancies in how different departments within an organization use AI. At the same time, many organizations face technical limitations around transparency and explainability, or simply can’t keep up with the technology’s rapid pace of change.
The solution is threefold. First, promote cross-functional collaboration across teams and departments so that AI policies are standardized. Next, break AI systems down into their component elements and draft clear standards and policies for each, as the sketch below illustrates. Finally, ensure governance frameworks are flexible enough to evolve as the systems they govern change.
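As a rough illustration of the second step, per-element standards can be written down as data and checked in code. The AIUseCase structure and its fields below are hypothetical, meant only to show how a policy becomes auditable rather than aspirational:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    # Hypothetical fields chosen for illustration; a real framework would
    # track much more (model version, owner, data lineage, review dates).
    name: str
    data_sources_documented: bool
    pii_anonymized: bool
    last_audit_days_ago: int

def governance_violations(case: AIUseCase, max_audit_age: int = 90) -> list[str]:
    """Return policy violations for one system element; empty means compliant."""
    violations = []
    if not case.data_sources_documented:
        violations.append(f"{case.name}: undocumented data sources")
    if not case.pii_anonymized:
        violations.append(f"{case.name}: PII not anonymized")
    if case.last_audit_days_ago > max_audit_age:
        violations.append(f"{case.name}: audit overdue")
    return violations

chatbot = AIUseCase("support-chatbot", data_sources_documented=True,
                    pii_anonymized=False, last_audit_days_ago=120)
print(governance_violations(chatbot))
# ['support-chatbot: PII not anonymized', 'support-chatbot: audit overdue']
```

Expressing policy this way also makes the third step easier: when standards change, the checks change in one place instead of in a dozen departmental documents.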
While this may sound daunting, I believe the benefits undoubtedly outweigh the workload. Governance frameworks minimize the amount of data collected, give users control over their personal information, mandate regular audits and enforce stringent security, to name a few benefits. While it can be tempting to skip governance altogether in the absence of federal requirements, it’s only a matter of time until the law catches up with reality. And as mentioned, insufficient governance undermines public trust. Enterprises that do their due diligence on AI governance today will save themselves time and headaches down the line.
Regulating AI is an urgent matter, but doing so ineffectively will undermine progress, stifle innovation and sow distrust rather than quell it. That Congress has yet to pass even a federal privacy bill shows how far we are from understanding what proper oversight of this increasingly popular technology requires. While state legislation may seem like a good way to close the regulatory gap, it’s short-sighted and will likely leave companies with an uneven playing field. Federal regulations are the way forward, but they must be built on a foundation of privacy, transparency and trust. In the meantime, enterprises should take governance seriously, which means taking it into their own hands.