Call for a Pause in Artificial Intelligence Development: What Should We Make of It?

14/04/2023

A call has been made by the Future of Life Institute (FLI), an American non-governmental organisation, for a pause in artificial intelligence development.

Who is making the call?

An institute under influence…

This think tank for influence and lobbying, created in 2015, has chosen to evaluate transformative technologies that could potentially represent large-scale "extreme" risks for humanity. Four major risks have been studied: artificial intelligence, biotechnologies, nuclear weapons and climate change. In practice, the idea is to lobby national and international institutions (United States, European Union, UN, etc.) by disseminating information and training, by funding research and, finally, by organising events and conferences. According to the European Union Transparency Register, the (relatively limited) funding of the institute in 2021 totalled some 4 million euros, the majority of which (3.5 million) came from donations by the Musk Foundation, owned by Elon Musk. That same year, the institute issued recommendations for the AI Act, which is to introduce European regulation of AI systems.

In 2015, at its launch, the think tank began by defining its objectives for artificial intelligence. It intended to counter what it called the most common myths on the subject, such as: "We still have time before superintelligence can become a reality", "AI may become harmful", "AI may develop a form of consciousness", "AI cannot control humans", "machines do not have objectives"… In 2017, it organised a conference on the benefits of artificial intelligence, featuring a panel of leaders in the field of IT, including Ray Kurzweil (Google), Demis Hassabis (DeepMind) and Jaan Tallinn (Skype), to discuss the scenarios surrounding the advent of this "superintelligence". The conference, held in Asilomar, California, resulted in a self-proclaimed declaration of principles for artificial intelligence, known as the "Asilomar principles". All the thinking at the conference was founded on the presumption that a "superintelligence", more powerful than human intelligence, would see the light of day at some point. Yet many questions have been raised both scientifically (a Nature article in 2020, or the joint work of the French CNRS artificial intelligence research group) and philosophically: what exactly is intelligence? Is intelligence simply a set of tasks to be accomplished? What place is there for emotional, relational, bodily or even spiritual intelligence?

According to the institute, the questions essentially concern the time-scale on which this superintelligence will appear and the speed at which society will manage to adapt to its advent. The call for a pause in its development is therefore not new; it is a follow-on from the institute's previous activities.

…uncontrolled signatures

As for the signatories of the call, no check has been made on their roles or professions. At its launch in March 2023, it claimed 1,000 signatures from leaders in the world of IT (research scientists, professors, start-up CEOs…); by 3 April it had collected 3,300 signatures, and 20,000 by 11 April 2023, which is relatively few on a worldwide scale.

Why does this call come just after the appearance of ChatGPT?

The call mentions ChatGPT from its very introduction: "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". The call positions itself from the outset as a reaction to the recent developments of the "chatbot" ChatGPT (in particular its latest version, GPT-4), whose limitations and approach have already been described in a previous article. The call likens ChatGPT to an artificial general intelligence (AGI), and therefore to something close to human intelligence, even though the model is unreliable (incorrect, inconsistent or even invented answers) and its sources are neither referenced nor authenticated (as indicated by Laurence Devillers, professor of computer science applied to the social sciences at the Sorbonne, in an interview for France Inter).

"Contemporary AI systems are now becoming human-competitive at general tasks."

This statement is based on only two references, both of which overestimate the most recent developments by OpenAI, the company that created ChatGPT: the first is an article published by OpenAI itself, which is not considered a scientific publication, and the second is an article, not yet published, by research scientists at Microsoft (which has invested massively in OpenAI). Microsoft's likening of ChatGPT to a premise of artificial general intelligence (AGI) is, incidentally, part of the commercial spin of that company, one of the initial co-signatories of the call, intended to make people believe that a revolution is at hand. However, several research scientists, such as Yann LeCun, head of AI at Meta (formerly Facebook), who has not signed the call, or the French researcher Chloé Clavel, associate professor of affective computing at Telecom ParisTech, point out that this is far from a technological revolution and that the novelty lies in its accessibility to the general public.

The call appears to be part of a well-organised communication plan to promote what has recently been developed in the field and to attract ever more funds. Remember the very strong initial links between Elon Musk and OpenAI, of which he is one of the founders. The call is also part of the commercial battle being waged between the giants of IT: Google (with Bard), Microsoft (whose first chatbot, Tay, blundered with racist comments on Twitter, and which has more recently invested in OpenAI), Amazon (Alexa), Apple (Siri) and Facebook, which is developing its own chatbot. By calling for a pause, is Elon Musk attempting to catch up following the announcements made by Twitter's competitors? While supporting the call for a pause, Elon Musk has in fact just announced massive investments in AI for Twitter.

What major risks for humanity justify such an alarm signal?

The first risk identified in the call is that of generalised manipulation through the inflation of automated fake news: "Should we let machines flood our information channels with propaganda and untruth?" Sam Altman, the boss of OpenAI and creator of ChatGPT, has himself admitted to being "somewhat frightened" by his creation if it were to be used for "large-scale disinformation or cyberattacks". "Society needs time to adapt", he told ABC News in mid-March (article in Libération).

The second risk is as follows: "Should we automate away all the jobs, including the fulfilling ones?" This alarmist vision calls for stepping back and analysing more closely the impacts on employment and the use of the tool in different sectors (see our previous article on the subject). The predictions that these transformations are imminent, which are used to justify the urgency of a 6-month moratorium, completely contradict the rhythm of major economic transformations, which occur much more gradually, as indicated in this editorial in Les Echos.

The other risks mentioned are pure science fiction: "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?", "Should we risk loss of control of our civilization?". They surf on the fear of the replacement of human tasks, or even of humans themselves, without any justification. Such claims feed the myth of the creation of non-human minds more intelligent than our own.

What are the major risks not mentioned in the call?

Several far more realistic and more immediate risks are not mentioned in the call.

The first is the uncontrolled use of personal data by these new AI systems. The free version of ChatGPT reached 100 million users in just two months, faster than social networks such as TikTok, which took nine months. ChatGPT is an enormous collector of personal data, requiring an e-mail address as well as a telephone number. As already mentioned previously, ChatGPT does not comply with the provisions of the European General Data Protection Regulation (GDPR). Thus, at the end of March 2023, Italy became the first state to ban it for non-compliance with that regulation.

The same applies to the polarisation and reinforcement of opinions caused by this type of recommendation tool. Like all the social networks (YouTube, Twitter, TikTok, Instagram…), it proposes content that influences behaviour (see the documentary "The Social Dilemma"). AI algorithms are accused of playing a role in encouraging the suicide of people suffering from depression. The OECD recently reviewed cases in which recommendation algorithms influenced behaviour. The report mentions in particular the case of Molly, who committed suicide after continually viewing content on her social network. This phenomenon, which has become massive, now appears in the American national suicide statistics: since the rise of social networks in 2009, the suicide rate has increased by 70% among women aged 15 to 19 and by 151% among girls aged 10 to 14.

More specifically linked to the influence of chatbots, in March 2023 a conversational robot (Eliza) was implicated in the suicide of a Belgian family man. He had entered a spiral of depression by discussing climate change and its catastrophic consequences with the chatbot. Instead of preventing suicide, the robot simply reinforced the depressed man in his convictions. Going through the exchanges discovered after his death, his widow noted that Eliza never allowed itself to contradict her husband but, on the contrary, supported his complaints and encouraged his fears.

What are the measures proposed by the call?

  • "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
  • The first measure, a 6-month moratorium on all development and training of artificial intelligence models, is not only unjustified but, above all, impossible to implement, since AI development is conducted worldwide by private and public players of all sizes, each of which sets its own pace of development in the absence of any American, let alone international, regulation.
  • "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."
  • These proposals appear reasonable, are not new, and add nothing to the recommendations already made, in particular by the European Union, which proposes to develop AI that is trustworthy, lawful, ethical and robust, and which demands traceability and explainability as well as robustness. The European proposals go further than this call by specifying human oversight, protection of personal data, objectives of social and environmental well-being, and non-discrimination…
  • "In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: (…) robust public funding for technical AI safety research."

It should be noted that political decision-makers did not wait for this call before funding projects on trustworthy AI, in particular in France, where collaborative research projects on trustworthy AI between industry and academia are supported by the state.

In conclusion, the proposed measures are in part unrealistic, inadequate or unambitious in relation to the challenges of AI and the benefits society expects from it. Are private actors, so deeply involved in the development of AI systems, really best placed to call for regulation? More fundamentally, can private players be entrusted with the task of regulating themselves and constructing their own ethics?
