DAGO.ADVISORY VOL.01 · Q0.2026 METHODOLOGY v1.0 POLISH MID-MARKET · €50M–€500M NEXT DAGO INDEX RELEASE · 2026.05.04
§ Regulatory POV · April 2026 · 9 min read

Why most Polish AI governance frameworks are 18 months behind the EU AI Act

Eight months into the EU AI Act's phased enforcement, a pattern has become visible in the Polish mid-market: most internal AI governance frameworks have not been updated for what actually landed. They look current in board packs. They will not look current in audit — and the audits are closer than most boards think.

What happened

Between late 2024 and early 2025, something like two-thirds of mid-cap Polish listed companies adopted an "AI policy" or "AI governance framework." These documents were, on the whole, drafted carefully. They cited the Regulation. They invoked lawfulness, fairness, transparency, human oversight, and accountability. A board committee approved them. A communications team announced them.

But almost all of them were written against the Act as it was imagined in mid-2024 — a regulation of principles — rather than the Act as it arrived in 2025 and 2026: a detailed, article-level instrument with specific obligations on specific risk tiers, specific testing requirements, and specific documentation a competent authority can demand.

The frameworks described the right values. They did not describe the operating reality the Act now requires. That is the 18-month gap.

Three specific gaps

01 · GPAI integration treated as a procurement question

As of August 2025, general-purpose AI model providers are subject to specific documentation, transparency, and systemic-risk obligations under Articles 53 to 55. Critically, downstream integrators — which is to say, most Polish mid-market companies using GPT-4, Gemini, Claude, or any GPAI — inherit obligations as well. They must retain sufficient documentation to explain what the integrated model does, what data flows to it, which use-cases cross the "high-risk" threshold, and what internal human-oversight process governs it.

We have yet to see a Polish mid-market framework that operationalises this. Most treat "using OpenAI" as a vendor decision documented by the procurement team, not a regulated integration governed by the AI Act. This is the single most common misalignment we find: a framework that passes the board but does not instruct the procurement, data, or product teams in any way that would survive a records request.
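What "instructing the procurement, data, and product teams" could mean in practice is a structured record per GPAI integration. The sketch below is purely illustrative: the field names are our assumptions, not terms prescribed by the Act, and the example values are invented. The point is that each integration becomes a governed, reviewable record rather than a line in a procurement spreadsheet.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: field names are assumptions, not language
# from the AI Act. What matters is that the record answers the questions
# a competent authority can ask: which model, what data flows to it,
# what the current risk classification is, and who oversees it.
@dataclass
class GpaiIntegrationRecord:
    system_name: str                 # internal name of the integrating system
    model_provider: str              # e.g. "OpenAI", "Anthropic", "Google"
    model_version: str               # a pinned version, never "latest"
    data_categories_sent: list[str]  # what actually flows to the model
    risk_tier: str                   # current Art. 6 / Annex III classification
    oversight_owner: str             # the named human accountable for the use-case
    last_reviewed: date              # when the classification was last refreshed

# A hypothetical entry for an insurance claims-triage assistant.
record = GpaiIntegrationRecord(
    system_name="claims-triage-assistant",
    model_provider="OpenAI",
    model_version="gpt-4-2024-08-06",   # hypothetical pin
    data_categories_sent=["claim text", "policy metadata"],
    risk_tier="limited",
    oversight_owner="Head of Claims Operations",
    last_reviewed=date(2026, 3, 15),
)
```

A register of such records, kept current, is the difference between a framework that passes the board and one that survives a records request.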

02 · Risk-tier classification performed once, never refreshed

Article 6 and Annex III define the "high-risk" tier with precision. Most frameworks classify the company's AI use-cases at the moment of drafting — typically a workshop with a Big-4 facilitator — and then never again. This is a dangerous stance.

Why: the line moves. A marketing personalisation system that was clearly limited-risk in 2024 becomes, in 2026, uncomfortably close to the profiling boundaries of Annex III if the dataset absorbs protected attributes, or if a new use-case is bolted on top. Companies whose risk-tier classification lives only in the annex of a board memo have no mechanism to notice when a system quietly crosses a line. This is the second misalignment: frameworks that lack an operating cadence for re-classification.
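An operating cadence for re-classification can be stated as a simple rule: review on a fixed interval, and immediately whenever a material change could move the system across the Annex III boundary. The sketch below is an assumption-laden illustration, not a legal standard; the 180-day interval and the trigger conditions are ours, chosen only to show that re-classification must be a mechanism, not a one-off workshop output.

```python
from datetime import date, timedelta

# Assumed cadence, not a regulatory requirement: pick an interval your
# board can defend and your teams can actually run.
REVIEW_INTERVAL = timedelta(days=180)

def needs_reclassification(last_reviewed: date,
                           new_use_case_added: bool,
                           dataset_gained_protected_attributes: bool,
                           today: date) -> bool:
    """Re-classify on a fixed cadence, or immediately on any material
    change that could move the system across the Annex III boundary."""
    if new_use_case_added or dataset_gained_protected_attributes:
        return True
    return today - last_reviewed > REVIEW_INTERVAL

# The marketing-personalisation example from the text: the dataset has
# absorbed protected attributes, so the 2024 classification is stale
# regardless of when it was last reviewed.
stale = needs_reclassification(date(2024, 11, 1), False, True, date(2026, 4, 1))
```

The specific triggers will differ by company; what cannot differ is that some function like this exists and someone owns running it.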

03 · Prohibited practices pushed to vendor terms of service

Article 5 prohibits specific practices — real-time biometric identification in public spaces (with narrow exceptions), emotion inference in employment and education, social scoring, and certain cognitive-manipulation techniques. Most Polish frameworks quote Article 5 verbatim and move on.

What they do not do is audit the vendor stack. Off-the-shelf HR platforms with "sentiment analysis" modules, retail analytics systems with biometric footfall tracking, insurance scoring tools built on opaque feature sets — a non-trivial number of these touch Article 5 on first inspection. A framework that delegates the prohibited-practice boundary to the vendor's own terms of service has, in practice, delegated it to no one. This is the third misalignment, and it is the one most likely to generate the first round of public enforcement actions.
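A vendor-stack screen does not need to be elaborate to be real. The sketch below is illustrative only: the capability labels and vendor entries are invented, and real screening requires legal judgment, not string matching. It shows the shape of the exercise, which is that the prohibited-practice boundary runs over your actual vendor stack rather than over each vendor's terms of service.

```python
# Invented capability labels for illustration; a real screen would work
# from contract reviews and technical due diligence, not self-declared tags.
ARTICLE_5_FLAGS = {
    "emotion inference",         # prohibited in employment and education
    "biometric identification",  # prohibited in public spaces, narrow exceptions
    "social scoring",
}

def screen_vendor_stack(stack: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per vendor, the declared capabilities that touch Article 5
    and therefore need a documented internal position, not a ToS citation."""
    return {vendor: caps & ARTICLE_5_FLAGS
            for vendor, caps in stack.items()
            if caps & ARTICLE_5_FLAGS}

# Hypothetical stack echoing the examples in the text.
hits = screen_vendor_stack({
    "hr-platform":      {"sentiment analysis", "emotion inference"},
    "retail-analytics": {"biometric identification", "footfall counting"},
    "crm":              {"lead scoring"},
})
# The CRM raises nothing; the HR and retail tools each need a review.
```

Whoever owns this screen owns the boundary. If no one runs it, the boundary has been delegated to no one.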

Why now

August 2026 is when the high-risk obligations under Chapter III of the Act come fully into force. By that date, a Polish company operating a high-risk system must have a functioning risk management system (Art. 9), data governance (Art. 10), technical documentation (Art. 11), record-keeping (Art. 12), transparency for deployers and users (Art. 13), human oversight (Art. 14), and accuracy, robustness, and cybersecurity (Art. 15) — each of which requires evidence, not assertions.

A framework adopted in early 2025 and never refreshed cannot produce that evidence on demand. The records need to already exist by then, not merely begin to be kept. This is the deadline pressure most boards have not internalised: the audit window opens in four months, and the evidence trail has to reach backwards from that date.

What the board should do in the next ninety days

Three moves, in this order.

  1. Commission a framework-versus-reality delta. Not another legal memo. A short engagement that maps your adopted framework against the currently enforced Act and specific UKNF guidance, identifies every gap at article level, and ranks each by enforcement probability and cost to close.
  2. Designate a named accountable officer inside management. Not a committee. A person whose job description includes the classification refresh cadence and who signs every material AI procurement decision.
  3. Start producing the evidence the Act requires, now, even for systems that will not be in-scope until 2027. Record-keeping cultures do not switch on. They accumulate. The company that starts in April 2026 is in a different class of readiness in August 2026 from the one that starts in July.
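The third move can start with something as unglamorous as an append-only, timestamped log. The sketch below is a minimal illustration under our own assumptions: the Act specifies what must be evidenced under Articles 9 to 15, not this file format, and the event names are invented. What matters is that every entry carries a timestamp written at the time of the event, so that by August 2026 the trail demonstrably reaches backwards.

```python
import json
from datetime import datetime, timezone

def append_evidence(log_path: str, article: str, event: str) -> dict:
    """Append one timestamped evidence entry, one JSON object per line.
    Illustrative format; the Act prescribes the substance, not the file."""
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "article": article,   # e.g. "Art. 12" record-keeping, "Art. 14" oversight
        "event": event,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical entry: a human-oversight review recorded as it happens,
# not reconstructed later for an auditor.
entry = append_evidence("ai_evidence.jsonl", "Art. 14",
                        "Human-oversight review of claims-triage output")
```

The format is disposable; the habit is not. A log like this, started in April, is evidence. The same log, started in July, is an admission.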

Everything else — the strategy, the use-case portfolio, the model choices, the vendor rationalisation — is downstream of these three. Boards that act on them now will have a defensible position in twelve months. Boards that do not will rediscover, the hard way, that a policy sitting in a drawer is not a policy at all.

Author

Michał Skwarczyński is the founder of Dago Advisory. Dago Advisory is the senior-led AI-readiness consulting firm for the Polish mid-market, grounded in the quarterly Dago Index.
