The EU AI Act and Corporate Governance: What Boards Need to Do Now

The EU AI Act isn't just a compliance exercise. It's a governance mandate — and most boards aren't ready.

Roshan Ghadamian

What the EU AI Act Actually Requires

The EU AI Act entered into force in August 2024 and applies in phases, with most high-risk AI provisions taking effect from August 2026. It establishes the world's first comprehensive legal framework for artificial intelligence. For corporate governance, the critical provisions are more demanding, and more structural, than most board briefings suggest.

High-risk AI classification. The Act categorises AI systems by risk level. High-risk systems — those used in employment, credit scoring, law enforcement, critical infrastructure, and other specified domains — face the most stringent requirements. If your organisation deploys AI in any of these areas, the governance obligations are substantial and non-optional.

Risk management systems. Article 9 requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system. This isn't a one-time risk assessment. It's a continuous process: identify risks, estimate and evaluate them, adopt risk management measures, and test those measures. The system must be updated throughout the AI system's lifecycle.
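
To make the lifecycle obligation concrete, here is a minimal Python sketch of a risk assessment record with a built-in currency check. The field names and the 90-day review cadence are illustrative assumptions, not anything the Act prescribes.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskAssessment:
    """One cycle of the Article 9 loop: identify, estimate, mitigate, test."""
    system_id: str
    identified_risks: list[str]
    mitigations: dict[str, str]          # risk -> adopted measure
    tested_on: date
    review_interval: timedelta = timedelta(days=90)  # illustrative cadence

    def is_current(self, today: date) -> bool:
        # A lapsed assessment means the "continuous" obligation is not met.
        return today - self.tested_on <= self.review_interval

assessment = RiskAssessment(
    system_id="cv-screening-v3",
    identified_risks=["disparate impact on protected groups"],
    mitigations={"disparate impact on protected groups": "quarterly bias audit"},
    tested_on=date(2025, 6, 1),
)
print(assessment.is_current(date.today()))
```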

Human oversight. Article 14 mandates that high-risk AI systems be designed to allow effective human oversight. This includes the ability to fully understand the AI system's capacities and limitations, to properly monitor its operation, and to decide not to use the system, to override its output, or to intervene in real time. For governance, this means you need structural mechanisms for human intervention — not just a human "in the loop" as a formality.
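
A minimal sketch of what a structural intervention point could look like: no output leaves the system without an explicit human decision to approve, override, or halt. The Oversight enum and release_output function are hypothetical names, chosen for illustration.

```python
from enum import Enum

class Oversight(Enum):
    APPROVED = "approved"
    OVERRIDDEN = "overridden"
    HALTED = "halted"   # the Article 14 power to decide not to use the system

def release_output(ai_output: str, reviewer_decision: Oversight,
                   override_value: str | None = None) -> str | None:
    """No output leaves the system without an explicit human decision."""
    if reviewer_decision is Oversight.HALTED:
        return None                # intervention: stop the system
    if reviewer_decision is Oversight.OVERRIDDEN:
        return override_value      # human substitutes their own judgment
    return ai_output               # approval is recorded, not assumed

print(release_output("reject applicant", Oversight.OVERRIDDEN, "refer to human review"))
```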

Transparency obligations. Deployers of high-risk AI systems must inform individuals subject to the system that they are interacting with AI, and must be able to explain the system's role in decisions that affect them. This creates a governance requirement: someone must be accountable for what is communicated, when, and whether it's accurate.

Record-keeping and traceability. High-risk AI systems must generate logs automatically. Deployers must keep these logs for at least six months (or longer under sector-specific requirements). This isn't just a data retention requirement — it's a governance trace requirement. You need to be able to reconstruct what the system did, when, and why.
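
As an illustration, a log entry might carry its own retention horizon at creation time. The six-month figure matches the Act's minimum; everything else here, including the make_log_entry helper, is an assumed design, not a mandated one.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)   # at least six months; sector rules may extend this

def make_log_entry(system_id: str, event: str) -> dict:
    """Logs are generated automatically as a byproduct of operation."""
    now = datetime.now(timezone.utc)
    return {
        "system_id": system_id,
        "event": event,
        "timestamp": now.isoformat(),
        "delete_after": (now + RETENTION).isoformat(),  # earliest permitted deletion
    }

entry = make_log_entry("cv-screening-v3", "candidate ranked")
print(entry["delete_after"])
```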

Why This Is a Governance Problem, Not Just a Compliance Problem

Most organisations are approaching the EU AI Act as a compliance exercise: identify which systems are in scope, document them, check the requirements, produce evidence of compliance. This is necessary but insufficient.

The Act creates ongoing governance obligations that can't be satisfied by a one-time compliance project. Risk management must be continuous. Human oversight must be operational, not nominal. Transparency must be maintained as systems change. Logs must be kept and made available. These aren't things you do once and file away. They're things you do continuously, which means you need governance infrastructure, not just a compliance team.

The board accountability problem. Under the Act, obligations fall on both providers (who build AI systems) and deployers (who use them). Most organisations are deployers. The board is ultimately accountable for ensuring the organisation meets its deployer obligations. But most boards have no mechanism to verify that human oversight is actually being exercised, that risk management is actually continuous, or that transparency obligations are actually being met. They receive assurance from management — but without infrastructure to verify it.

The cross-functional problem. EU AI Act compliance touches legal, technology, operations, HR, procurement, and risk management. No single function owns it. This creates a governance coordination challenge: who is accountable for the end-to-end compliance posture? In most organisations, the answer is unclear — which means, in practice, no one is.

The change management problem. AI systems change. Models are retrained. Data sources shift. Use cases evolve. Each change can affect the risk classification, the required risk management measures, and the transparency obligations. Governance must be able to detect relevant changes and trigger re-evaluation. Static compliance documentation can't do this.

What Boards Need to Do Now

Boards that wait for their compliance teams to present a finished plan are already behind. Several actions are needed now — not next quarter.

Map your AI inventory. You cannot govern what you cannot see. Before any compliance work begins, the board needs a complete inventory of AI systems deployed across the organisation. This includes third-party AI embedded in SaaS tools, procurement systems, HR platforms, and customer-facing products. Many organisations are surprised by how many AI systems they deploy without governance awareness.
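
A sketch of what one inventory record might capture, assuming a simple dataclass. The fields shown, such as affects_eu_persons, are illustrative, but each maps to a question the board needs answered.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the organisation-wide AI inventory."""
    system_id: str
    owner: str                 # accountable business owner, not just the vendor
    vendor: str | None         # third-party AI embedded in SaaS counts too
    use_case: str
    affects_eu_persons: bool   # drives extraterritorial scope
    risk_class: str = "unclassified"

inventory = [
    AISystemRecord("hr-screening", "Head of HR", "AcmeTalent", "CV ranking", True),
    AISystemRecord("chat-support", "Head of CX", "HelpBot Inc", "customer triage", True),
]
unclassified = [r.system_id for r in inventory if r.risk_class == "unclassified"]
print(unclassified)   # you cannot govern what you cannot see
```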

Classify risk exposure. For each AI system in the inventory, determine whether it falls within a high-risk category under the Act. This requires legal analysis (is the use case in scope?) and technical analysis (does the system meet the definition of an AI system under the Act?). The classification determines the governance obligations.
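
The classification step can be sketched as a two-part test, shown below. The domain keywords stand in for the Annex III analysis, which in practice requires legal judgment rather than a lookup table.

```python
# Illustrative only: the real test requires legal analysis of Annex III,
# not a keyword lookup.
HIGH_RISK_DOMAINS = {
    "employment", "credit_scoring", "law_enforcement",
    "critical_infrastructure", "education", "migration",
}

def classify(use_case_domain: str, is_ai_system: bool) -> str:
    """Both questions must be answered: an in-scope use case AND a system
    that meets the Act's definition of an AI system."""
    if not is_ai_system:
        return "out_of_scope"
    return "high_risk" if use_case_domain in HIGH_RISK_DOMAINS else "assess_further"

print(classify("employment", is_ai_system=True))   # high_risk
```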

Assign clear accountability. Designate a senior leader (or cross-functional body) with explicit accountability for EU AI Act compliance. This isn't a CISO problem, a legal problem, or a technology problem. It's a governance problem that requires cross-functional coordination with board-level visibility.

Assess governance infrastructure gaps. Ask a specific question: "For each high-risk AI system, can we demonstrate continuous risk management, effective human oversight, transparency compliance, and log retention — today?" If the answer is no, you have a governance infrastructure gap, not just a process gap.
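
That question can be operationalised as a per-system gap check, sketched below with hypothetical capability names.

```python
REQUIRED_CAPABILITIES = (
    "continuous_risk_management",
    "effective_human_oversight",
    "transparency_compliance",
    "log_retention",
)

def governance_gaps(evidence: dict[str, bool]) -> list[str]:
    """Return every capability the organisation cannot evidence today."""
    return [cap for cap in REQUIRED_CAPABILITIES if not evidence.get(cap, False)]

print(governance_gaps({
    "continuous_risk_management": True,
    "log_retention": True,
}))   # ['effective_human_oversight', 'transparency_compliance']
```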

Build the governance trace. Start generating governance evidence now. Every decision about an AI system — deployment, modification, risk assessment, oversight exercise — should be documented in a system that creates an auditable trail. When a regulator asks "how do you govern this AI system?", you need to point to infrastructure, not a policy document.
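
One way to make such a trail tamper-evident is to hash-chain each entry to its predecessor, as in this illustrative sketch; record_decision and the entry fields are assumed names, not a reference design.

```python
import hashlib
import json
from datetime import datetime, timezone

trace: list[dict] = []

def record_decision(action: str, detail: dict) -> None:
    """Append-only: each entry commits to the one before it, so the trail
    cannot be silently rewritten."""
    prev_hash = trace[-1]["hash"] if trace else "genesis"
    entry = {
        "action": action,
        "detail": detail,
        "at": datetime.now(timezone.utc).isoformat(),
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    trace.append(entry)

record_decision("deployment", {"system": "cv-screening-v3", "approved_by": "CTO"})
record_decision("risk_assessment", {"system": "cv-screening-v3", "result": "pass"})
print(trace[1]["prev"] == trace[0]["hash"])   # True: the chain links
```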

The Difference Between AI Compliance and AI Governance

This distinction matters enormously, and most organisations conflate the two.

AI compliance is satisfying specific regulatory requirements. It's bounded: there's a list of requirements, and you either meet them or you don't. Compliance can be achieved through documentation, process design, and periodic verification. It answers the question: "Are we meeting our legal obligations?"

AI governance is the structural capacity to make, enforce, trace, and learn from decisions about AI. It's continuous, adaptive, and organisational. It includes compliance but extends beyond it. Governance answers the question: "Do we have the institutional infrastructure to manage AI responsibly — including requirements we haven't anticipated yet?"

The practical difference becomes clear when something changes. A new AI system is deployed. A model is retrained with different data. A use case shifts from low-risk to high-risk. With compliance alone, you need to detect the change manually, reassess the requirements, update the documentation, and reverify. With governance infrastructure, the change is detected structurally, the relevant constraints are applied automatically, and the governance trace is generated without manual intervention.
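
The contrast can be sketched in a few lines: a change event that structurally triggers reclassification, constraint application, and trace generation. The event taxonomy and callbacks here are illustrative assumptions, not a reference implementation.

```python
# Events that can change a system's risk posture. Illustrative taxonomy.
TRIGGERING_EVENTS = {"model_retrained", "data_source_changed", "use_case_changed"}

def on_change(system_id: str, event: str, reclassify, apply_constraints, log) -> None:
    """Structural change detection: any triggering event re-runs classification,
    re-applies the matching constraints, and writes the trace automatically."""
    if event not in TRIGGERING_EVENTS:
        return
    new_class = reclassify(system_id)
    apply_constraints(system_id, new_class)
    log(system_id, f"{event} -> reclassified as {new_class}")

on_change(
    "cv-screening-v3", "model_retrained",
    reclassify=lambda sid: "high_risk",
    apply_constraints=lambda sid, cls: None,
    log=lambda sid, msg: print(f"[trace] {sid}: {msg}"),
)
```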

Compliance is a snapshot. Governance is a system. Organisations that build only compliance will find themselves in a perpetual catch-up cycle — reassessing, redocumenting, and reverifying every time something changes. Organisations that build governance infrastructure will find that compliance becomes a byproduct of how they operate.

The EU AI Act's emphasis on continuous risk management, ongoing human oversight, and lifecycle traceability implicitly demands governance, not just compliance. Boards that understand this distinction will invest accordingly.

The Extraterritorial Dimension

The EU AI Act applies to organisations outside the EU if their AI systems affect people within the EU. This extraterritorial scope means that many organisations that don't consider themselves "EU companies" nevertheless have EU AI Act obligations.

For multinational boards, this creates a governance harmonisation challenge. Different jurisdictions are developing different AI regulatory frameworks — the EU AI Act, the US executive orders and sector-specific approaches, the UK's principles-based framework, China's algorithmic regulations. A board governing AI across multiple jurisdictions needs governance infrastructure that can enforce different constraints in different contexts while maintaining a coherent institutional posture.

For non-EU boards, the Act creates a forcing function for governance modernisation. Even if your domestic jurisdiction has lighter AI regulation, serving EU customers or affecting EU residents triggers the Act's requirements. This makes EU AI Act readiness a governance capability, not a regional compliance exercise.

The practical implication: governance infrastructure needs to be jurisdiction-aware. A constraint that applies in the EU may not apply in Australia — but the governance system needs to know the difference and enforce accordingly. Manual, document-based governance cannot do this at scale. Infrastructure-based governance can.
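
A jurisdiction-aware constraint can be as simple as a rule table resolved at the moment of action. The rules below are invented for illustration, with the strictest applicable rule winning when an action touches multiple jurisdictions.

```python
# Constraints keyed by jurisdiction: hypothetical rules for illustration.
CONSTRAINTS = {
    "EU": {"disclosure_required": True, "log_retention_days": 183},
    "AU": {"disclosure_required": False, "log_retention_days": 90},
}

def constraints_for(affected_jurisdictions: set[str]) -> dict:
    """Apply the strictest rule across every jurisdiction the action touches."""
    applicable = [CONSTRAINTS[j] for j in affected_jurisdictions if j in CONSTRAINTS]
    return {
        "disclosure_required": any(c["disclosure_required"] for c in applicable),
        "log_retention_days": max(c["log_retention_days"] for c in applicable),
    }

print(constraints_for({"EU", "AU"}))
# {'disclosure_required': True, 'log_retention_days': 183}
```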

What Governance Infrastructure Looks Like

The EU AI Act's requirements map directly to governance infrastructure capabilities:

Risk management (Article 9) requires a system that can identify, assess, and manage risks continuously. In governance infrastructure terms, this means constraints that are defined once, enforced at the moment of action, and updated as risk assessments change. Not a risk register that's reviewed quarterly — a system that prevents actions that violate risk management decisions.

Human oversight (Article 14) requires mechanisms for humans to understand, monitor, and intervene in AI system operation. This means governance traces that show what the AI system did, why, and whether a human reviewed or overrode the output. The trace must be automatic — relying on humans to manually log their oversight exercises defeats the purpose.

Transparency (Article 13) requires high-risk AI systems to be designed so that deployers can interpret their output, and requires the information deployers need to meet their own duties to affected individuals. Governance infrastructure enforces these obligations structurally: the system cannot deploy an AI decision without generating the required disclosure. This is enforcement at the moment of action, not retrospective verification.
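
Structural enforcement of that kind can be sketched as a gate that refuses to release a decision without its disclosure; deploy_decision and DisclosureMissing are hypothetical names.

```python
class DisclosureMissing(Exception):
    pass

def deploy_decision(decision: str, disclosure: str | None) -> str:
    """Enforcement at the moment of action: the decision cannot be released
    unless the required disclosure exists."""
    if not disclosure:
        raise DisclosureMissing("no disclosure generated; decision blocked")
    # In a real system the disclosure would be delivered and the event traced.
    return decision

try:
    deploy_decision("loan declined", disclosure=None)
except DisclosureMissing as e:
    print(e)   # the gate blocks structurally, not via after-the-fact review
```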

Record-keeping (Article 12) requires automatic log generation and retention. Governance infrastructure generates these logs as a byproduct of operation — every action traced, every decision recorded, every constraint check logged.

The pattern is consistent: the EU AI Act requires capabilities that are continuous, automatic, and structural. These are the characteristics of infrastructure, not process. Boards that invest in governance infrastructure will find EU AI Act compliance achievable and sustainable. Boards that rely on manual processes will find it expensive, fragile, and perpetually incomplete.

See governance infrastructure in action

Constellation enforces corporate governance at the moment of action — for both humans and AI agents.