Ed Gaudet is the CEO and Founder of Censinet, a healthcare risk management platform, and a member of the Health Sector Coordinating Council.

At the CHIME24 Fall Forum, I facilitated a focus group session with healthcare IT leaders on the emerging opportunities and challenges of artificial intelligence (AI). Three major themes quickly emerged: AI presents tremendous potential, AI poses significant risks and AI adoption is outpacing our ability to effectively govern it.

Healthcare’s unique complexities mean AI introduces risks that cannot be addressed through traditional governance models. These risks underscore the urgent need for structured approaches to governance that ensure safe, secure and ethical AI adoption. However, the industry’s ability to establish effective governance hinges on overcoming the inherent challenges of using AI in healthcare, including:

Protecting Patient Safety

Despite impressive capabilities, AI puts patient safety at risk. AI models require training on large volumes of sensitive patient data, and errors in that data or in the models themselves can lead to misdiagnoses, inappropriate treatments or other harmful outcomes. Overreliance on AI also risks unintentionally “de-skilling” clinicians and caregivers, fostering a gradual tendency to accept AI recommendations without critical evaluation and verification.

Minimizing Ethical Concerns

The use of AI in diagnosis, treatment and drug development introduces significant ethical concerns, particularly the risk of algorithmic bias. If training datasets lack sufficient diversity or primarily represent a narrow segment of the population, AI systems may perform inadequately for underrepresented groups, exacerbating existing health disparities—or creating new ones. AI use in healthcare should complement, not replace, human judgment.

Ensuring Transparency And Explainability

Many AI technologies function as “black boxes,” making it difficult to understand how they generate conclusions or recommendations. This opacity poses a significant challenge in healthcare, where explaining care decisions is vital for legal and ethical reasons. Moreover, trust between patients and providers is paramount. Whether it’s a miscalculated co-payment or a life-threatening misdiagnosis, confidence in AI will crumble if providers cannot adequately explain its results and recommendations. As such, healthcare organizations will require full transparency from their AI vendors and technologies.

Driving Cross-Functional Engagement

Effective AI governance necessitates input from various functions and departments within healthcare organizations. While it can be challenging to bring so many functions together, diverse representation ensures a comprehensive approach to governance and accounts for the vast diversity of AI use cases.

Proposed AI Governance Model For Healthcare

Given the complexity and challenges surrounding AI adoption, a thoughtful and well-rounded governance model is essential—one that balances regulatory requirements, ethical considerations, technical standards and the perspectives of all stakeholders.

Cross-Functional Governance Structure

To start, a cross-functional governance structure is key. Organizations should form an AI governance committee, ideally led by a senior executive such as the chief information officer (CIO) or chief AI officer (CAIO). This committee should bring together representatives from clinical, administrative, technical, risk and legal teams. Regular meetings will help ensure alignment on AI strategy and provide a platform for identifying and mitigating emerging risks.

Clear Policies And Procedures

Organizations need well-documented guidelines for AI adoption, grounded in established frameworks like IEEE 2933 and NIST’s AI Risk Management Framework. These policies should cover everything from technology evaluation and risk assessments to ROI analysis and ethical reviews. Exceptions to standard protocols should also have clear pathways for approval.

Risk Management And Ethical Considerations

Every AI technology should undergo rigorous risk assessments both before adoption and throughout its lifecycle. Ethical guidelines should address fairness, transparency and human oversight, supported by dedicated ethics review boards. Additionally, employees should have accessible channels to report concerns anonymously without fear of retaliation.

Technical Standards And Quality Assurance

Technical standards and quality assurance play a critical role in ensuring reliability and safety. Organizations must establish strict standards for AI systems, including protocols for testing and validation using diverse, representative datasets. These measures help maintain consistency across internally developed, purchased or integrated AI tools.
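To make this concrete, here is a minimal sketch of what subgroup validation might look like in practice. It is written in Python under stated assumptions: the record fields, the accuracy metric and the 5% performance-gap tolerance are illustrative choices, not requirements drawn from any particular standard.

```python
# A minimal sketch of subgroup validation. Field names and the 5%
# accuracy-gap tolerance are illustrative assumptions.
from collections import defaultdict

def subgroup_accuracy(records, predict):
    """Compute accuracy per demographic subgroup.

    records: iterable of dicts with 'features', 'label', 'demographic_group'
    predict: callable mapping features to a predicted label
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        group = r["demographic_group"]
        totals[group] += 1
        if predict(r["features"]) == r["label"]:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def flag_performance_gaps(records, predict, max_gap=0.05):
    """Flag subgroups whose accuracy trails the best-performing group."""
    scores = subgroup_accuracy(records, predict)
    best = max(scores.values())
    return {g: s for g, s in scores.items() if best - s > max_gap}
```

A check like this, run against a held-out validation set before go-live and re-run on real-world data afterward, gives the governance committee a concrete artifact to review rather than a vendor’s assurance.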

Stakeholder Engagement And Education

Involving patients, providers, developers, vendors, policymakers and ethicists in the AI conversation ensures a broad range of perspectives. Education is also key; AI literacy programs should be available for everyone, from board members to frontline staff, and digital literacy campaigns should help users understand both the capabilities and limitations of AI tools.

Continuous Monitoring And Evaluation

Organizations should implement regular assessment mechanisms, reporting systems for incidents and evidence-based feedback loops that incorporate insights from real-world use into continuous AI development and refinement.
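In practice, a feedback loop can start as simply as comparing live performance against the baseline captured during validation and opening an incident when the gap exceeds a tolerance. The Python sketch below assumes an accuracy-style metric and a hypothetical 5% tolerance; real deployments would track metrics appropriate to each use case.

```python
# A minimal sketch of post-deployment drift monitoring. The baseline,
# tolerance and incident fields are illustrative assumptions.
def check_for_drift(baseline_accuracy, recent_outcomes, tolerance=0.05):
    """Compare recent real-world accuracy against the validation baseline.

    recent_outcomes: list of (prediction, actual) pairs from live use
    Returns an incident record if degradation exceeds tolerance, else None.
    """
    if not recent_outcomes:
        return None
    observed = sum(p == a for p, a in recent_outcomes) / len(recent_outcomes)
    if baseline_accuracy - observed > tolerance:
        return {
            "type": "model_performance_drift",
            "baseline": baseline_accuracy,
            "observed": round(observed, 3),
            "sample_size": len(recent_outcomes),
        }
    return None
```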

Regulatory Compliance And Alignment

Staying aligned with emerging AI guidelines, including current or future executive orders on AI, is critical. Collaboration with regulatory bodies can help shape practical standards tailored for healthcare.

Incentive Structures

Financial and professional incentives should reward ethical development and deployment, while governance metrics should be integrated into quality measures and reimbursement models. Recognition programs can also celebrate organizations that set the benchmark for AI governance best practices.

Getting Started

Many healthcare organizations are just beginning the complex journey of building a robust AI governance framework. Here are my recommendations for immediate focus:

Standardize AI adoption.

Organizations should use a balanced scorecard to evaluate and prioritize AI initiatives or new technologies, considering factors such as patient safety, ethics, transparency, regulatory requirements and return on investment.
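As a minimal illustration of how such a scorecard can work, the Python sketch below weights each criterion and ranks proposals by their weighted score. The criteria, weights and 1-5 scoring scale are hypothetical; every organization should calibrate its own.

```python
# A minimal balanced-scorecard sketch. Criteria, weights and the 1-5
# scoring scale are illustrative assumptions.
WEIGHTS = {
    "patient_safety": 0.30,
    "ethics": 0.20,
    "transparency": 0.20,
    "regulatory_fit": 0.15,
    "roi": 0.15,
}

def scorecard(scores):
    """Return the weighted score for one AI initiative (scores are 1-5)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Example: rank two hypothetical proposals by weighted score.
proposals = {
    "ambient_scribe": {"patient_safety": 4, "ethics": 4, "transparency": 3,
                       "regulatory_fit": 4, "roi": 5},
    "triage_model":   {"patient_safety": 3, "ethics": 3, "transparency": 2,
                       "regulatory_fit": 3, "roi": 4},
}
ranked = sorted(proposals, key=lambda p: scorecard(proposals[p]), reverse=True)
print(ranked)  # highest-priority initiative first
```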

Proactively manage AI vendor risk.

Healthcare organizations must work closely with third-party AI vendors to understand capabilities, limitations, fourth-party risks and controls. Contracts should include provisions for algorithm updates, bias testing and data privacy safeguards.

Address AI in existing systems.

Many legacy IT systems and software already installed throughout the organization are incorporating new AI features, creating challenges for governance. As such, it is essential to create and maintain an up-to-date inventory of AI technologies and monitor their use and integration into critical workflows.
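A minimal inventory record might capture the fields sketched below. All names are illustrative, and most organizations would fold this into their existing asset-management or GRC tooling rather than maintain it standalone.

```python
# A minimal sketch of an AI inventory record (Python 3.10+). Fields and
# the example entry are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAssetRecord:
    name: str                      # e.g., "radiology triage assistant"
    vendor: str                    # supplier, or "internal"
    embedded_in: str               # host system, e.g., a legacy EHR module
    use_case: str                  # clinical, administrative, etc.
    touches_phi: bool              # does it process patient data?
    clinical_workflows: list[str] = field(default_factory=list)
    last_risk_review: date | None = None

inventory: list[AIAssetRecord] = [
    AIAssetRecord(
        name="Radiology triage assistant",
        vendor="ExampleVendor",    # hypothetical
        embedded_in="PACS viewer",
        use_case="clinical decision support",
        touches_phi=True,
        clinical_workflows=["ED imaging triage"],
        last_risk_review=date(2024, 11, 1),
    ),
]
```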

Adopt existing best practices.

Examining real-world examples of AI governance in action can jumpstart an organization’s progress. An often-cited successful example is the Mayo Clinic’s implementation of an AI governance framework that emphasizes transparency, accountability and ongoing evaluation.

Conclusion

AI’s transformative potential in healthcare is undeniable, but its adoption must be governed by a strict commitment to patient safety, ethics, trust and transparency. By understanding and implementing the AI governance model proposed in this article, healthcare organizations can fully realize AI’s tremendous potential while firmly upholding medicine’s founding principle: First, do no harm.

