Artificial Intelligence: Draft Regulations at the European Commission

In Brussels, on 21 April 2021, the European Commission presented its draft regulation on artificial intelligence (AI).

Following several years of discussion, the Commission’s stated goal is to “turn Europe into the global hub for trustworthy artificial intelligence (AI).”

In its press release, the European Commission announced that the combination of the “first-ever legal framework on AI” and a “new Coordinated Plan with Member States” will “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.”

The draft regulation takes a risk-based approach, sorting AI systems into four categories: “unacceptable, high, limited and minimal risk”. The following examples illustrate each category.

  • Any AI classified as an “unacceptable risk” will be prohibited. This covers “AI systems that are considered a clear threat to the safety, livelihoods and rights of people.” Examples include systems that manipulate human behavior to circumvent users’ free will, and systems that enable “social scoring” by governments.
  • Any AI identified as “high risk” will have to comply with strict obligations before it can be put on the market. Examples include transport systems that could endanger the life and health of citizens; systems used in education and employment (scoring of exams, CV-sorting software…); law enforcement; robot-assisted surgery; and credit scoring that may deny loans to some citizens. All remote biometric identification systems based on AI are explicitly considered high risk. As a basic principle, their real-time use in public spaces for law enforcement purposes is banned, with only narrow exceptions (for example, when strictly necessary to search for a missing child or to identify the suspect of a serious criminal offence).
  • “Limited risk” AI systems will be subject to specific transparency obligations. When interacting with a chatbot, for example, users should be made aware that they are dealing with a machine so they can make an informed decision about whether to continue. Another example is the “deep fake”, which reproduces a person’s voice or image to make them appear to say or do things they never did; such content must be clearly labelled as artificially generated.
  • “Minimal risk” AI will be freely allowed (video games, spam filters, etc.).

The draft regulation also provides for fines of up to 6% of a company’s global annual turnover for the most serious infringements.

Regarding facial recognition, consumer protection organizations and several MEPs consider these safeguards insufficient, so a heated debate in the European Parliament can be expected.