The Proposed AI Act from the EU: An Engineer’s Exposition

Amit Kumar Mishra
7 min readDec 1, 2021

In April 2021, the EU proposed a regulatory framework for artificial intelligence (AI) related projects and products. To my knowledge, this is the first regulation of its kind in the world. A few other countries have taken steps in this direction, but none go as deep as the EU’s proposal. Since then, quite a few agencies and organizations have offered their interpretations of the proposed regulation. For a legal perspective, readers can refer to the report from Stanford’s Law Faculty; similarly, McKinsey’s report covers the implications of the proposed regulation for business. In this brief article, I shall expound on the EU’s AI regulation from a practitioner’s point of view, focusing on aspects that might interest fellow AI engineers and innovators.

We shall look at three aspects, viz. Why, How and What! We shall quickly discuss why we need such regulations. Then, we shall discuss how the EU plans to execute the regulations. Lastly, we shall look into what is inside the proposed regulation.

(PS: I have seen a few sceptical reports trying to point out flaws in the new regulation. In my view, these concerns are unfounded. Firstly, some regulation is better than no regulation. Secondly, as we shall discuss later in this article, the EU regulation is highly sensitive towards creating a just and ethical AI milieu. Lastly, it is commendable that the EU regulation has endeavored to remain highly agile, so any concerns that emerge in the future could be incorporated quickly.)

Why do we need AI-regulation?

The domain of AI has been evolving and expanding at a rapid rate in the past few years. As per a report from Gartner, the number of organizations implementing AI in some form or other increased by 270% over just four years! Most of these applications affect “natural persons” in some way or other. AI and machine-learning algorithms, in limited form, will be part of many products. These algorithms are new, not fully explainable (yet) and have been shown to carry biases which are not well measured. AI is often perceived as a sci-fi thing that will replace natural persons; in reality, it is semi-autonomous systems empowered by AI algorithms that will become more and more ubiquitous. Hence, a set of guidelines is necessary to make sure that these systems do not amplify unfair biases.

The above is the point of view (POV) of regulatory authorities. There is also a strong POV from industries that would welcome such regulations. Of late, there has been a lot of concern amongst potential customers and users of AI-empowered products. At times this creates a state of fear which is (mostly) an over-reaction. A clear AI regulation will help minimize the concerns of customers and end-users. It will also empower innovators and industries to work in a less uncertain milieu, with a much clearer road-map. This, in fact, would foster AI-related innovation.

The EU’s AI regulation strives to follow what they call the “EU Approach: Human Centric; Sustainable, Secure, Inclusive & Trustworthy (HSSIT)” AI systems.

How to regulate?

The how is usually more complicated than the why! The regulatory documents give an excellent background and also discuss how the currently proposed regulation was arrived at. I shall focus on two major aspects, viz. how the current approach was chosen and how they plan to enforce it.

AI regulation can be framed at any level of strictness. One can have an extremely prescriptive regulation where every AI-related project must go through detailed examination by regulatory bodies. At the other end of the spectrum, one can have a regulation that relies on the due diligence performed by industries and innovators. The EU regulation follows a “Horizontal EU legislative instrument following a proportionate risk-based approach + codes of conduct for non-high-risk AI systems”. This means, firstly, that the regulation will act as a guiding umbrella-framework for all the member states, who in turn will need to define their respective national AI regulations. Secondly, AI systems will be divided into classes based on their potential level of concern and will have to undergo proportionate regulatory control.

In terms of enforcement, the onus is left to each member state, which will need to define its own AI regulation based on the EU-level regulation. Each member state will also need to form a new national supervisory authority (which would sit under the EU AI Board).

The strength of a regulation can often be judged by the punishment that follows from non-compliance! In the EU AI regulation, the proposed fines are quite severe.

  • For non-compliance with data rules, a firm can be fined up to 30 million Euros or 6% of its annual turnover (whichever is higher).
  • For other non-compliance, a firm can be fined up to 20 million Euros or 4% of its annual turnover (whichever is higher).
  • And for supplying incorrect or misleading information, a firm can be fined up to 10 million Euros or 2% of its annual turnover (whichever is higher).
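The fine structure above is easy to express mechanically. The sketch below is purely illustrative (the function name and tier labels are my own choices, not from the regulation); it assumes, per the proposal’s penalty articles, that the applicable ceiling is the higher of the flat amount and the turnover percentage.

```python
def max_fine_eur(violation: str, annual_turnover_eur: float) -> float:
    """Return the fine ceiling for a given violation tier.

    Tiers mirror the proposal's three bands; the ceiling is the
    higher of the flat cap and the turnover percentage.
    """
    tiers = {
        "data": (30_000_000, 0.06),             # data-rule non-compliance
        "other": (20_000_000, 0.04),            # other non-compliance
        "misleading_info": (10_000_000, 0.02),  # incorrect or misleading info
    }
    flat_cap, pct = tiers[violation]
    return max(flat_cap, pct * annual_turnover_eur)

# For a firm with 1 billion EUR turnover, 6% (60M) exceeds the 30M flat cap.
print(max_fine_eur("data", 1_000_000_000))  # 60000000.0
```

Note that for large firms the percentage dominates, which is presumably the point: the flat caps are floors for the ceiling, so to speak, keeping the penalty meaningful regardless of company size.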

What is in the regulation?

Let us get to the gist of things now! We shall discuss some of the interesting parts of the regulation. As mentioned before, I shall only discuss those parts which seem interesting to me as an engineer and innovator.

Before we go into the details, let me reiterate that the regulation proposes a “Horizontal EU legislative instrument following a proportionate risk-based approach + codes of conduct for non-high-risk AI systems”. It categorizes AI applications into four levels of risk.

  1. Unacceptable-risk AI applications (e.g. social scoring);
  2. High-risk AI applications (e.g. safety and health AI/bots);
  3. Limited-risk AI applications (e.g. chatbots that can bias people);
  4. Minimal-risk AI applications (e.g. (very few) games).
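To make the tiering concrete, here is a small sketch modelling the four levels as an enum. The example mapping is entirely my own illustration; the authoritative classification lives in the regulation’s annexes, not in any lookup table like this.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = 1  # prohibited outright (e.g. social scoring)
    HIGH = 2          # must satisfy the Title III obligations
    LIMITED = 3       # transparency obligations (e.g. chatbots)
    MINIMAL = 4       # voluntary codes of conduct only

# Illustrative examples only; the real classification is in the Act's annexes.
EXAMPLES = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "medical_triage_bot": RiskLevel.HIGH,
    "customer_service_chatbot": RiskLevel.LIMITED,
    "video_game_ai": RiskLevel.MINIMAL,
}

print(EXAMPLES["customer_service_chatbot"].name)  # LIMITED
```

The design choice worth noting is that obligations attach to the tier, not the technology: the same underlying model could land in different tiers depending on where it is deployed.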

The following are some of the interesting titles in the regulatory documents. Readers who want it straight from the horse’s mouth are advised to go through them in the main document! I shall present my take on some of these later.

  • Title I discusses the definition of AI.
  • Title II discusses the list of prohibited AI practices.
  • Title III discusses the high-risk AI classification.
  • Title IV discusses the transparency “obligations” for certain AI applications.
  • Title V discusses the planned action that would support Innovation in the domain of AI.
  • Title VII details the proposal for an EU-wide database of high-risk AI systems.
  • Title VIII discusses regulations on post-market monitoring of products.

What is AI? (Title I)

AI is a difficult term to define, and different persons and industries interpret it differently. However, once it is in a regulation, this definition will become the official one! The regulation defines AI techniques and approaches as one or more of the following.

  • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
  • Statistical approaches, Bayesian estimation, search and optimization methods.

As a practicing AI engineer and researcher, I am happy with this definition. It is inclusive and holistic. Another interesting thing I noted is that the regulation refers to humans as “natural persons” :)

What is prohibited? (Title II)

This is an important title that defines prohibited AI applications. The following are some of the interesting prohibited applications. Readers can refer to the regulation for the complete list.

  • AI deployments that can distort a person’s behavior or harm (or risk harming) them physically or psychologically
  • Exploiting the vulnerabilities of natural persons
  • Social scoring systems
  • Real-time remote biometric ID systems, unless used for identifying proven criminals or victims (with legal authorization)

It can be noted that real-time remote biometric ID systems are taken very seriously. If a biometric ID system is not real-time or not remote, then it falls into the high-risk category. It can also be noted that, if any member state wants, it can frame further regulations to categorize a particular real-time remote biometric ID system as a high-risk AI application.

What are high-risk AI systems? (Title III)

The following are some of the domains and applications where the incorporation of AI systems would be considered high-risk. (Readers can refer to the regulation for the complete list.)

  • Biometric identification and categorization of natural persons
  • Management and operation of critical infrastructure
  • Access to education and vocational training
  • Employment, workers management and access to self-employment
  • Access to and enjoyment of essential private services and public services
  • Use in law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

If your project touches any of the above, be warned 😈! As you may guess, most AI applications will fall into this category!!

What to do for high-risk AI systems? (Title III)

Title III prescribes the following steps if a product or application falls into the high-risk category.

  1. Establish a risk-management system
  2. Establish data-governance and management practices
  3. Draw up thorough technical documentation (before the system is put on the market)
  4. Record-keeping: record all events (logs) for traceability
  5. Transparency to users: provide detailed and clear documents about the intended purpose, the level of accuracy, robustness & cybersecurity, and foreseeable circumstances of misuse!
  6. Human oversight: provide a facility for supervision by natural persons (if needed)
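For engineers, the record-keeping obligation (step 4) is perhaps the most immediately actionable. The sketch below is one minimal way it might look in practice; everything here (function name, field names, hashing of inputs) is my own hypothetical design, not anything mandated by the regulation.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal structured audit log for the record-keeping obligation:
# each prediction event is timestamped and serialized so it can be
# reconstructed later. Field names are hypothetical choices.
logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)

def log_prediction(model_version, inputs_digest, output, operator):
    """Record one model decision as a structured, timestamped event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs_digest": inputs_digest,  # a hash, not raw data (data governance)
        "output": output,
        "operator": operator,            # supports human-oversight tracing
    }
    logger.info(json.dumps(event))
    return event
```

Logging a digest of the inputs rather than the inputs themselves is one way to keep the record-keeping and data-governance obligations from pulling in opposite directions.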

A figure in the main regulatory document shows the action-flow needed for high-risk AI applications.

Conclusion

As AI engineers and innovators, we are living through interesting times. It is really helpful to have clear regulations on the projects we can and cannot work on. The EU AI regulation is a great step in the right direction. We shall have to wait till it becomes law. It will also be interesting to see the regulations drawn up by the member states as well as by other nations.

Amit Kumar Mishra

An engineer, innovator and engineering educator, currently working as a Professor with the Department of Electrical Engineering at the University of Cape Town.