November 4, 2025

Inside the Black Box: Risk and uncertainty for AI in EU law

What is intelligent about AI?  

In 1955, McCarthy and his colleagues, in their proposal to invent AI, argued that human error can result in exciting new possibilities. Even a person who is ‘almost asleep, slightly drunk, or feverish’ can say things that ‘almost make sense’ (1955). ‘Perhaps’, they reasoned, ‘the mechanism of the brain is such that a slight error in reasoning introduces randomness in just the right way.’

Over one summer, these aspirational scientists and engineers set out to invent a computer-programmed machine that could demonstrate this same type of ‘random’ intelligence. Their original AI hypothesis was funded by the Rockefeller Foundation, and while they were not successful that summer, the idea gained traction.

While the unpredictability of the human mind is the natural domain of phenomenologists and psychologists, it is not yet clear what the prospects are for counselling a machine that demonstrates aberrant intelligence because of its AI augmentations. The deleterious impact such aberrations will have on society is quickly coming into focus. Recently, Deloitte provided consultancy research to a government that contained many errors, including incorrect references produced by LLMs.

It is not easy to write legible compliance aids that help organisations comprehend technology law, because liability and responsibility, when it comes to AI, require different ways of thinking. Who is, and who will be, responsible for making sure high-risk categories stay within their boundaries after deployment? AI risk categories are, after all, dependent not only on how the technology is used by humans, but also on how the technology itself behaves - including in unpredictable ways. AI is predictably unpredictable, but pre-existing conceptualisations about what computers and machines are 'meant to do' are not as sophisticated as, for example, human moral codes. In other words: how can an unpredictable technology be regulated to protect fundamental rights? To ask these questions well, the assumptions from the earliest days of AI must be revealed, reviewed, and resolved.

Is EU AI regulation happening inside a black box?

The EU’s AI Act (Regulation (EU) 2024/1689) differs from other jurisdictions’ AI regulation frameworks across the world (see our Observatory). The AI Act’s success is reliant on, as well as founded in, human behavioural scrutiny within conditions of uncertainty. This will happen at all stages of the AI lifecycle: design, deployment, and post-market surveillance. Whether the AI Act can successfully protect users’ fundamental rights, while also providing companies with understandable implementation guidelines, is becoming increasingly uncertain. To identify these issues, the most important problems arising for AI and machinery regulation, and for standards setting, are outlined here.

AI regulation, and the policymaking surrounding regulatory development, appear to be occurring within a ‘black box’, where corporate lobbying is known but not always transparent. A black box is the most securely bounded component of any machinery, designed to be accessed only after a machine has crashed. Though AI does seem to be ‘crashing’, or at least the hype may be dying down, we must look within the metaphorical black box of regulation now.

We can do this by taking the AI Act’s approved text from the Official Journal of the European Union seriously; by considering the overarching tensions in compliance for providers and deployers; by predicting the risks and hazards arising for users; and, overall, by couching the landscape in historical, legal and political economy debates. This will help open up dialogue around these processes, and will also provide better grounding for sandbox methodologies, where testing must happen before deployment. Red lines must be drawn now to prevent unacceptable risks becoming normalised into law and corporate practices.

In 2025, who is responsible for, and who is impacted by, AI?

Two sets of actors prevail in the AI Act: the ‘providers’ of AI and the ‘deployers’ of AI, defined in Art. 3(3) and Art. 3(4) respectively. Liability and accountability are, ideally, part of the rationale for naming the roles of these actors. These actors’ behaviours operate at specific points in the AI lifecycle - whether the actor is a person, a group of persons, or an organisation. However, the stress on these categories of actors has begun to supersede focus on the other people impacted by AI - users who do not play a role in developing or deploying it. These AI ‘users’ are far less visible in the Act than providers and deployers.

The term ‘user’ does appear in the Act, such as in Article 71(4), which dictates that the database for high-risk category AI must be ‘user-friendly’, and in a note in Annex IV(g), which indicates that a description of the product ‘user-interface’ is part of the technical documentation provided to a deployer, where a user is any person.

But the real users of AI are consumers, citizens, and workers. People are, and will be, increasingly affected by AI integration, and vulnerable persons’ wellbeing and livelihoods already face increased risks. These rising concerns are reflected in the AI Act’s ‘ex ante’ epistemology, where regulation oscillates around a user’s ‘intended purpose’ and the ‘foreseeable misuse’ of AI (Recital 65, Article 9). But purpose, use, and misuse in fact depend on the programmed, the unprogrammed, and the evolving functionality of AI systems, tools and applications, whether the component is installed in a machine or present in software.

AI is unpredictable, which creates new risks and hazards for users

Regulation is usually orientated towards predictable uses of, and intended purposes for, machines and tools, where accidents, hazards and risks can be predicted - these are standard topics in health and safety law and research. However, AI’s autonomous behaviours, and the inferences that lead to its predictions and decisions, can make liabilities and accountabilities particularly difficult to build into AI regulation. It is challenging to develop clear guidelines and standards that support company compliance, both within the CEN-CENELEC framework and within the European Commission’s legislative processes for AI systems and machinery (from the author’s first-hand knowledge). AI’s unpredictability, once such an attractive selling point for the technology, and the ensuing hazards and risks of this uncertainty, have become the bane of regulatory processes.

The final agreed definition of AI in the Act is as follows:

Article 3(1): ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The official definition makes it clear that an AI system demonstrates levels of autonomous behaviour, may adapt after deployment, and makes inferences in pursuit of explicit as well as implicit objectives. On this basis, AI ‘behaviours’ are not expected always to be predictable. For example, an AI-based recommender system could show you items that are unlike your own tastes and preferences, invading privacy in a variety of unwanted ways. An observer, whether an employer or a CCTV camera, cannot necessarily know whether an AI’s behaviour is a mutation that has emerged through the technology’s self-evolving properties, or whether aberrant AI behaviour has been caused by misuse of the technology - perhaps discovered only after a user is hurt and the damage reported.

The AI Act is well known for its focus on risk categorisation, defining risk as ‘the probability of an occurrence of harm and the severity of that harm’ (Art. 3(2)). Probability, ideally, is a quantifiable measure, but risk is impossible to quantify when hazards are unpredictable and arise in unknown future environments. The challenge is to regulate against risks and hazards without normalising, or ‘building in’, an expected, even celebrated, set of AI risks and norms to this effect.
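As a rough illustration of why this quantification breaks down, consider the conventional formula underlying that definition: risk as the product of probability and severity. The sketch below is a minimal, hypothetical example - the hazard names, probabilities, and severity scores are illustrative assumptions, not values drawn from the Act or any standard - and it simply shows that the arithmetic only works when a probability can actually be assigned, and offers nothing for hazards whose likelihood after deployment is unknown.

```python
# Minimal sketch of the conventional risk formula: risk = probability x severity.
# All hazard names, probabilities and severity scores are illustrative assumptions,
# not values taken from the AI Act or any harmonised standard.

hazards = {
    "documented_failure_mode": {"probability": 0.02, "severity": 4},
    "misuse_observed_in_testing": {"probability": 0.10, "severity": 2},
    # For behaviour that emerges only after deployment, no probability can be
    # assigned in advance - the quantification has no input to work with.
    "post_deployment_drift": {"probability": None, "severity": 5},
}

for name, hazard in hazards.items():
    if hazard["probability"] is None:
        print(f"{name}: risk cannot be quantified (unknown probability)")
    else:
        print(f"{name}: risk score = {hazard['probability'] * hazard['severity']:.2f}")
```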

The Janus face of unpredictability  

Companies will also need to reconsider how they publicly and legally declare that their AI system has unpredictable properties. Declaring unpredictability may be attractive in marketing campaigns, for example where the providers of the LLM Claude tell us that there has never been a better time for its invention, given society’s emerging and endemic risks to planetary survival.

However, when AI is categorised as high-risk, the provider company is subject to increased compliance requirements. AI providers must indicate a system’s intended purpose in order to place the product on the market, and must reasonably consider ‘foreseeable misuse’ to cross-check the specific risk classification. The logic proceeds that for AI to thrive, the occurrence of hazardous and unpredictable machinic ‘behaviour’ should not only be permitted but itself protected, so that innovations can proceed. Risks are, in fact, expected to arise.

This is where ‘strategic ambiguity’ in writing the standards and accompanying guidelines for technology comes in. Andrej Savin argues that European policymakers outsource legal uncertainty to corporations, and that law is, therefore, strategically ambiguous. Mügge argues that compromises are inevitable in AI regulation. Normalising strategic ambiguity into AI law is risky, however, which brings us to the fundamental question.

How much risk and uncertainty is tolerable?

All AI products with high-risk capabilities must go through conformity assessment procedures and impact assessments, and must be tested in a sandbox or be subject to similar activities; but these processes also generate a multitude of questions. Will simulated environments for testing include human participants and data? Will sandboxes accurately reflect real-life conditions? What protections exist for human subjects in these risky environments? Will sandboxes rely on synthetic data, pre-trained data, or real-life data, and how will this data be sufficiently protected, given the weakness of most of the world’s data and privacy laws?

These vital questions must now be asked: how much uncertainty, and how much risk, is acceptable for an AI system to be provided to an organisation or person, and deployed by that organisation or person? Some argue that any risk is too much risk. After all, a machine or computer that seemed ‘almost asleep, slightly drunk, or slightly feverish’ would be considered unfit for purpose and returned to the manufacturer, hopefully under warranty.

So, where do we draw the line on built-in technological unpredictability? How much uncertainty, if any, should be tolerated in the law-making and compliance space when it comes to AI, and how can policymakers best approach this new normal?

From Professor Moore:

My views in this piece are mine and mine alone. I am the appointed European Trade Union Confederation (ETUC) representative, working within the CEN-CENELEC JTC21 committee as ‘AI expert’, alongside notified bodies, EU government representatives, industry representatives, and other Annex III members, where we are working on the harmonised Quality Management Systems standard for the AI Act and the Guidelines for the Machinery Regulation.

