European Regulations on Artificial Intelligence: Where is this leading us?
On 8 December, the EU member states and the European Parliament reached an agreement on European regulations to govern artificial intelligence. Heralded as a world first, the agreement is intended to establish a Europe-wide text which would come into force in 2025 at the earliest. The preparatory work was launched in 2021, before ChatGPT and other general-public applications had revealed the day-to-day impact of such technologies.
What do these future regulations include?
The regulations will include a definition of systems dependent on AI. According to the European Parliament, “The priority is to ensure that the AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems must be supervised by people rather than by automation, in order to avoid harmful results.” The regulations also intend to establish a common technological definition which can be applied to future innovations.
The guiding principle of the intended law is the systematic evaluation of risks, with different rules set according to the level of risk.
AI systems considered to pose unacceptable risks include mass biometric recognition, social scoring (as practised in China), the manipulation of behaviour, etc. Some exceptions are however already included, for example for the police in the context of the fight against terrorism or the search for victims of human trafficking.
The bill mentions a prohibition on “AI systems which manipulate human behaviour in order to bypass freedom of choice.” The notion of freedom of choice is an ancient one and has prompted much philosophical debate without any agreement ever being reached on its meaning. Some critics have pointed out the difficulty of embedding it in a regulation whose aim is to establish rules and protect the general public.
Systems considered “high risk” are those which have “a negative impact on safety or fundamental rights.” They will be subject to evaluation before release and throughout their life cycle, as more powerful versions of a system could change its level of risk.
In practical terms, an appendix to the bill cites the example of a system used “to evaluate students in educational and professional training establishments and to evaluate the participants for the tests commonly used in order to be accepted in educational establishments”, as well as AI “intended to be used to make decisions for promotions and dismissals in the context of contractual professional relations.” It also covers “AI systems intended to be used to send or establish priorities for dispatching emergency services, including emergency fire and rescue services.”
Regarding general-purpose AI systems, including generative systems such as ChatGPT, the bill introduces requirements for “transparency”. The model must be designed so as not to generate illegal content and must comply with copyright rules, and the sounds, pictures and texts it produces must disclose their artificial origin.
Low-risk systems (such as customer-facing “chat services”) must inform their users that they are interacting with an AI system.
A technological upheaval which raises concrete questions
The possible applications of AI affect all domains: health, law, education, the armed forces, etc.
The questions to ask, including as a user, are numerous. Who decides? On what criteria and by what methods? Who has access to the data, and are they adequately secured? How are the data used, and for what purpose? Is the AI system linked to a human chain of responsibility which can be held to account?
A recent example published in the journal Nature illustrates the upheavals AI may cause in day-to-day life. An applied-mathematics laboratory in Denmark used an enormous existing database covering the entire Danish population to estimate, among other things, the probability of premature death. The data include lifetime events linked to health, education, profession, income, home address and working hours, recorded daily.
Based on the assumption that life events share similarities with language, and using an AI technique similar to the ChatGPT model, the tool was able to “predict” premature death (between the ages of 35 and 65) more accurately than existing models. The benefit of such a system for insurance and health-insurance companies is obvious, and it would jeopardise the basic idea of the mutualisation of risks between individuals. Over an indeterminate time-frame, this type of use could even lead to greater prenatal selection for insurance-cost reasons.
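The core idea of the Danish study can be caricatured in a few lines of code. The sketch below is purely illustrative and uses invented event codes and synthetic labels: it treats a life history as a bag of event "words" and fits a toy logistic model to them, whereas the Danish team trained a far richer transformer-based sequence model on real registry data.

```python
import math
import random

# Hypothetical event codes; the real registry vocabulary is far larger.
EVENTS = ["diagnosis:hypertension", "job:night_shift", "income:drop",
          "education:degree", "residence:move", "job:promotion"]

def featurise(sequence):
    """Bag-of-events vector: how often each event code occurs in a life history."""
    return [sequence.count(e) for e in EVENTS]

def train(data, labels, epochs=200, lr=0.1):
    """Fit a logistic model p(premature death | events) by gradient descent."""
    w, b = [0.0] * len(EVENTS), 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            p = 1 / (1 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))
            g = p - y  # gradient of the log-loss with respect to the logit
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, sequence):
    """Estimated probability of the outcome for one event history."""
    z = b + sum(wi * xi for wi, xi in zip(w, featurise(sequence)))
    return 1 / (1 + math.exp(-z))

# Synthetic, clearly separable training histories (labels are entirely invented).
random.seed(0)
risky, benign = EVENTS[:3], EVENTS[3:]
data, labels = [], []
for _ in range(100):
    data.append(featurise(random.choices(risky, k=5))); labels.append(1)
    data.append(featurise(random.choices(benign, k=5))); labels.append(0)
w, b = train(data, labels)
```

Even this toy version makes the policy point concrete: once life histories are encoded as data, an individual risk score falls out mechanically, which is precisely the kind of output that would undermine the mutualisation of risks.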
An unstable compromise?
The compromise achieved by the European Parliament and the EU member states is the fruit of discussions between multiple players with divergent economic and political interests. In France, various opinions were expressed, calling either for greater control (for example, prohibiting facial recognition systems in public spaces) or for more flexibility. The bill introduces a so-called “regulatory sandbox”, a mechanism already used in this field. It consists of defining a framework within which companies may test innovations under relatively few constraints.
The stakes are enormous and the marketplace is currently dominated by American and Chinese players. The European approach differs through its call for regulation, whilst other nations are satisfied with voluntary codes of practice, based on general principles such as those put forward by the OECD.
In the United States, control is exercised through a recently signed presidential executive order, which is therefore more easily adapted as systems evolve and technology players make demands. It is likely that pressure for “more flexibility”, or for the repeal of “excessively stringent” rules, will increase in the years to come.
Many players point to the speed with which AI innovations are being developed and deployed. This theme was underlined by the expert Philippe Dewost during his address at the “Université de la Vie 2023” (2023 Life University, organised by Alliance Vita). It has become urgent to work towards a robust consensus that places humanity at the centre, in AI as in bioethics.