Emre Kazim, co-CEO of Holistic AI, is an expert in AI ethics and governance and holds a Ph.D. in Philosophy from King’s College London.
The first compliance deadlines of the European Union (EU) AI Act took effect in February 2025, marking a shift in global AI governance. For technology leaders, the stakes are high: Failure to comply could cost millions. The regulation applies to any business that uses AI and sells products or services in the EU, whether its systems were developed in-house or purchased from a third-party software supplier.
Noncompliance carries significant penalties, including fines of up to 35 million euros or 7% of global annual revenue, whichever is higher, along with reputational and operational risks.
In force since August 2024, the EU AI Act introduces a risk-based framework that classifies AI systems into four tiers: prohibited (unacceptable risk), high risk, limited risk and minimal risk. For CTOs and CIOs, understanding the Act’s implications is essential to navigating the impact on innovation and compliance.
At its core, the Act defines the red lines for AI use in the EU: systems posing “unacceptable risk” because they conflict with EU values such as human dignity, freedom and privacy. For CTOs and CIOs shaping AI development and deployment strategies, aligning with these regulations and tracking deadlines should be top priorities.
EU AI Act Origins
My involvement with the EU AI Act began through my work with the Organisation for Economic Co-operation and Development (OECD). Early discussions centered on the EU’s ambition to set a global standard for AI governance, much like it did with the General Data Protection Regulation (GDPR), but this time with a dual focus: fostering trust and enabling innovation. While GDPR is primarily about safeguarding personal data, the EU AI Act takes on the much more complex challenge of regulating AI systems themselves.
Through the drafting process, it became clear just how difficult it is to regulate a technology that evolves faster than legislation can keep pace. Unlike data privacy, AI governance isn’t just about policies; it requires a deep technical understanding of how AI models function, how they make decisions and where risks emerge.
Common Misconceptions
In my experience working with customers, I’ve seen three persistent misconceptions that shape how companies approach compliance with the Act:
1. “Our legal team can handle this.” Many assume that, as with GDPR, AI compliance falls squarely on legal teams. Unlike data privacy laws, however, the Act requires in-depth technical analysis of AI models, risks and behaviors—work that legal teams aren’t trained to do alone.
2. “We’ll just extend our cyber or privacy solution.” Traditional governance tools built for cybersecurity or data privacy aren’t equipped to assess AI-specific risks like bias, explainability and robustness. AI requires governance frameworks designed to address its unique life cycle; a minimal sketch of one such bias check follows this list.
3. “Compliance will slow us down.” Companies that embed AI governance into development cycles actually accelerate deployment. Clear risk assessments and compliance frameworks remove roadblocks, making it easier to scale AI safely and with confidence. The added benefit: Once that governance foundation is in place, compliance with the Act becomes far more straightforward.
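To make the second misconception concrete, here is a minimal, illustrative sketch of the kind of quantitative bias check that AI-specific governance tooling performs and generic privacy tools do not. The `disparate_impact_ratio` helper and the four-fifths threshold are assumptions borrowed from US employment-testing practice, shown purely for illustration; they are not requirements of the EU AI Act.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs from a model's outputs."""
    totals, selected = Counter(), Counter()
    for group, sel in decisions:
        totals[group] += 1
        selected[group] += int(sel)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate; values
    below 0.8 are commonly flagged under the "four-fifths rule" heuristic
    (an illustrative threshold, not an EU AI Act requirement)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Toy data: group A is selected twice as often as group B.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(f"Disparate impact ratio: {disparate_impact_ratio(sample):.2f}")  # 0.50
```

A check like this runs against model outputs, not database records, which is exactly why tooling built for data privacy has no natural place to hook it in.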
Prohibited AI Practices
The EU AI Act outright bans eight AI practices due to their potential for harm, regardless of whether an entity is developing, deploying or using them:
• Manipulative Or Deceptive AI: Systems that subtly influence human behavior, such as by embedding undetectable cues in content.
• Exploitation Of Vulnerable Groups: AI targeting children, financially distressed individuals or other at-risk groups for manipulation.
• Social Scoring And Behavior-Based Classification: AI that categorizes individuals based on personality or behavior, leading to unfair treatment (e.g., social media-based hiring decisions).
• AI-Driven Predictive Policing: AI that predicts the likelihood of criminal behavior based solely on profiling, without human oversight.
• Untargeted Facial Recognition Data Collection: Untargeted scraping of facial images from CCTV footage or online platforms to build facial recognition databases, in line with GDPR protections.
• Emotion Recognition In Work And Education: AI systems that infer emotions in workplaces or schools are restricted, except for health and safety applications.
• Biometric Categorization Of Sensitive Traits: AI using biometric data to infer race, political beliefs or sexual orientation is not permitted, except under strict legal conditions.
• Real-Time Biometric Identification In Public Spaces: Live facial recognition by law enforcement is largely banned, with exceptions requiring prior authorization and oversight.
CTOs and CIOs must conduct detailed risk assessments to ensure compliance, particularly as enforcement timelines approach.
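As a starting point for those risk assessments, the sketch below shows one way an internal inventory could be screened against the eight prohibited categories. The tags and the `screen` function are hypothetical simplifications of the Act's Article 5 language, not legal definitions; any hit should be treated as a trigger for legal review, not a verdict.

```python
from dataclasses import dataclass, field

# Paraphrased tags for the eight prohibited practices; illustrative
# labels only, not legal definitions from the Act itself.
PROHIBITED_PRACTICES = {
    "manipulative_or_deceptive",
    "exploits_vulnerable_groups",
    "social_scoring",
    "predictive_policing_profiling",
    "untargeted_biometric_scraping",
    "emotion_recognition_work_or_school",
    "biometric_categorization_sensitive_traits",
    "realtime_public_biometric_id",
}

@dataclass
class AISystem:
    name: str
    owner: str  # accountable team or vendor
    practices: set[str] = field(default_factory=set)  # tags assigned in review

def screen(systems: list[AISystem]) -> list[tuple[str, set[str]]]:
    """Flag systems whose reviewed practices overlap a prohibited category."""
    flagged = []
    for s in systems:
        hits = s.practices & PROHIBITED_PRACTICES
        if hits:
            flagged.append((s.name, hits))
    return flagged

if __name__ == "__main__":
    inventory = [
        AISystem("resume-ranker", "HR", {"social_scoring"}),
        AISystem("chat-support", "CX"),
    ]
    for name, hits in screen(inventory):
        print(f"Escalate to legal: {name} -> {sorted(hits)}")
```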
Key Steps For CTOs And CIOs In 2025
Our teams are currently helping customers with EU AI Act readiness, and in our calls, we advise them to take the following steps:
• Conduct Comprehensive AI Audits: Identify all AI-powered software used internally or sourced from third-party vendors to map potential compliance risks (a minimal inventory sketch follows this list).
• Implement AI Governance Protocols: For example, Unilever, one of our customers, has established standardized policies for transparency, fairness and bias mitigation to align with EU regulatory standards.
• Engage Legal And Compliance Teams: Ensure AI models adhere to EU regulations and assess whether any case-by-case exceptions apply, such as biometric identification in law enforcement or security applications.
• Review Vendor Compliance: Require compliance assurances from AI vendors before deploying their services to mitigate third-party risks.
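Putting the audit and vendor-review steps together, a minimal sketch of what an AI inventory record might look like follows. The field names and the `needs_action` triage rule are assumptions for illustration; a real inventory would track much more, such as intended purpose, training data provenance and human-oversight measures.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class InventoryEntry:
    system_name: str
    source: str               # "in-house" or the vendor's name
    risk_tier: RiskTier       # assigned after technical and legal review
    vendor_attestation: bool  # vendor supplied written compliance assurance
    last_reviewed: str        # ISO date of the most recent audit

def needs_action(entry: InventoryEntry) -> bool:
    """Hypothetical triage rule: escalate prohibited-tier systems and any
    third-party system lacking a compliance attestation."""
    if entry.risk_tier is RiskTier.PROHIBITED:
        return True
    return entry.source != "in-house" and not entry.vendor_attestation

inventory = [
    InventoryEntry("fraud-scorer", "AcmeML", RiskTier.HIGH, False, "2025-01-15"),
    InventoryEntry("doc-summarizer", "in-house", RiskTier.MINIMAL, True, "2025-02-01"),
]
for e in inventory:
    if needs_action(e):
        print(f"Action required: {e.system_name} ({e.source})")
```

Even a simple record like this gives legal and engineering teams a shared artifact to review, which is the practical heart of the audit and vendor-compliance steps above.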
Preparing For The Future: Why CTOs And CIOs Must Act Now
Enforcement of prohibited AI practices will take effect first, followed by codes of practice for general-purpose AI systems like foundation models and LLMs, and then high-risk AI regulations.
To stay ahead, CTOs and CIOs should establish a robust governance framework to ensure compliance, minimize risk and drive responsible AI adoption. A standardized approach will streamline AI projects, enhance trust and position organizations as leaders in AI. Consider implementing an AI governance software platform to help manage all AI use cases throughout the organization, covering not just regulatory compliance but also AI safety, ROI and efficacy.
As the EU continues to take a leadership position in regulating AI, we must ensure that innovation and accountability go hand in hand.