Does the Regulation of Artificial Intelligence Through the EU AI Act Go Far Enough?

Source: left/Ala z via WikiCommons (GFDL); right/Pxhere (public domain)

On 14 June, the European Parliament passed a draft law to regulate artificial intelligence, which must now be negotiated further with the EU Council and the Commission. However, the AI Act, expected to be adopted by the EU at the end of 2023, provides insufficient answers to the essential questions raised by this new technology.

This article is part of transform!’s Economics Working Group Blog Series
Foreword by Peter Fleissner:

Author: ChatGPT
Final editing: Peter Fleissner

ChatGPT (Generative Pre-trained Transformer) is a software that uses artificial intelligence to communicate with users in writing. It uses the latest digital learning technology. Trained on a large amount of text, it generates responses that sound natural and are intended to be relevant to the conversation. The chatbot was developed by California-based company OpenAI and released to the public in November 2022.

This article was created by requesting that ChatGPT abbreviate an original 15-page version of the article. This allowed for the inclusion of newer information (from more recently than September 2021) that would have been unknown to ChatGPT.

(Peter Fleissner, Vienna, 07 July 2023)     

In recent decades we have experienced a staccato of tragic events on a global scale: financial and economic crises, the sovereign debt crisis, pandemic, migration waves, subversion of democracy, the war in Ukraine, fascisation and armament. At the same time, we are witnessing increasing chaos in the commanding heights of states and in the major political parties, indecision in the European Parliament and disunity between national governments within the EU. This has led to a backlog of necessary reforms and an ever-growing need for action. Meanwhile, ever faster scientific and technological breakthroughs are having a direct impact on people’s lives both at work and in their free time, not to mention the functioning of businesses.

ChatGPT – a software with amazing features

The latest breakthrough took place a few months ago in the field of artificial intelligence (AI). Artificial intelligence was already being thought about in the USA shortly after the Second World War, but it took half a century for a pioneering achievement in this sector to become marketable. A few months ago, OpenAI was able to release the third version of the GPT (Generative Pre-Trained Transformer) language model under the name ChatGPT, an artificial intelligence with amazing capabilities that can be used (at least for now) free of charge by anyone with internet access via a computer or mobile phone. The software uses a version of the GPT3 language model developed by OpenAI (the improved fourth version is already on the market). This model, which starts from a formal hypothesis of how natural language works, is linked to a learning mechanism at the software level. First, through “self-supervised training”, it learns to predict the next word in random pieces of text (online forums, social media, newspaper or book texts from the internet, etc.), compares its prediction with the original text, and corrects it. Then, through “supervised learning”, it is made to give answers to previously posed questions and is improved with pre-fabricated answers (“fine-tuning”). In a final step called “reinforcement learning”, the answers are qualitatively evaluated and the model is further optimised. After some intermediate steps, which serve to remove unwanted pieces of text, only the best result remains.
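For readers who want a concrete picture of the “predict the next word” step, the following toy sketch illustrates the self-supervised principle in miniature: the training text itself supplies the correct answers, so no human labelling is needed. It is a simple bigram word counter in Python, not the neural-network method OpenAI actually uses; all function names here are illustrative.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """'Self-supervised' training in miniature: for every word in the
    corpus, record which word actually followed it. The text itself
    provides the correct 'next word' - no labels are required."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(model, word):
    """Predict the most frequently observed successor of `word`,
    or None if the word was never seen during training."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" followed "the" twice, "mat" once
```

A real language model replaces the frequency table with billions of learned parameters and conditions on long stretches of preceding text rather than a single word, but the training signal is the same: predict, compare with the original, correct.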

ChatGPT can be instructed to interact with the user, create and edit texts, and write program code. It recognises its own mistakes, points out other people’s mistakes and refuses to answer inappropriate requests. Automation through AI is increasingly taking over routine mental tasks and has already begun to leave its mark in a variety of sectors, from online services to clerical work in manufacturing.

In order to avoid discriminatory, racist or sexist results, or biased answers reflecting particular political ideologies, ChatGPT has built-in digital filters that prevent or modify such results. When asked directly about how and why it does this, ChatGPT replied, “As an artificial intelligence, I am not a moral actor and I have no personal beliefs or opinions. My goal is to respond to users in an objective and informative way and not express political beliefs.”

ChatGPT is the latest version of “information processing machinery” (German: “Informationsverarbeitenden Maschinerie” [IVM]), a term coined by Berlin sociologist Frank Adler1 in 1978. The IVM consists of sensor technology, the actual information processing mechanism and the actuator/effector. Technological advances in recent decades, both in hardware and programming technology, have made it possible to achieve a huge increase in computing speed, compactness and reliability. The new chatbots embody what is currently the best low-threshold access to the “general intellect” (Marx), the world’s written objectified knowledge.

The development of AI began in 1966 with ELIZA at the Massachusetts Institute of Technology (MIT). ELIZA was created by Joe Weizenbaum as an early example of a computer system designed to simulate a psychoanalytic therapist. It used simple text recognition and text generation, but no real artificial intelligence. Nevertheless, ELIZA laid the foundation for the development of modern chatbots and virtual assistants.

The development of AI also entails some intrinsic problems. The volume of investment required is enormous. Microsoft, for example, has invested $10 billion in OpenAI. The training of AI requires huge amounts of data (which may be copyrighted) and the use of people working for extremely low wages, often from the Global South. The energy consumption during training and use is considerable. Furthermore, chatbots are not error-free: they can provide false information when valid empirical data is lacking, and references are often missing.

International regulatory efforts

Since the sudden success of ChatGPT last year, experts and legislators in various countries have been working hard to develop rules and draft legislation to govern AI.

In the US, efforts to regulate AI are ongoing. Several bills have already been passed or proposed at the state and local levels. The federal government is currently holding hearings and forums to prioritise AI regulation. Politicians at the highest levels meet regularly with AI experts and researchers. The White House has already published a draft bill on AI, and the National Institute of Standards and Technology (NIST) has published a framework for trustworthy AI. The Biden administration is also planning a public assessment of generative AI systems.

China published its first ethical guidelines for AI in April 2023. These emphasise the right of people to make their own decisions about the use of AI services. The Cyberspace Administration of China (CAC) has proposed new rules for generative AI, including standards for truthfulness, non-discrimination and prevention of potentially harmful output.

Latin American countries are also addressing the challenges and opportunities that AI presents. A summit of authorities on ethical concerns was held in the Chilean capital, focusing on the protection of human rights and the development of inclusive technologies. UNESCO has already issued a recommendation on the ethics of AI, which many member states are working on adopting. Latin American civil society groups have raised concerns about the potential repressive uses of AI tools and possible privacy violations. Brazil is considering a regulatory project with an emphasis on the protection of people, especially in the healthcare sector. In Argentina, AI is to be used in provincial elections to speed up the process, and in Costa Rica a regulatory framework is being developed in collaboration with UNESCO.

Overall, these developments show that AI regulatory efforts are moving forwards globally to ensure ethical standards, minimise the potential risks, and maximise the benefits of using AI.

The EU’s proposal

In June 2023, the European Parliament became the third of the three core EU institutions, after the Council of the European Union and the European Commission, to pass a bill to regulate AI.2 This law, called the AI Act, requires transparency for high-risk AI systems, and once the legislative process is complete it will constitute the world’s first comprehensive legal regulation of AI. The bill received majority support. In the next phase, the so-called “trilogue”, EU lawmakers and member states will negotiate the final details of the law, which is expected to be passed by the end of 2023.

Substantive improvements by the Parliament

Going beyond the Commission’s proposal, the European Parliament called for a ban on biometric surveillance, emotion recognition and predictive policing by AI systems. It also called for tailor-made regulations for general-purpose AI and foundation models such as ChatGPT, and for the possibility of bringing legal complaints against AI systems. The regulation follows a risk-based approach and prohibits AI practices that could endanger or discriminate against human beings. Any AI that interacts with humans must disclose to the communicating parties, unprompted, that its output was generated by a computer system. Risk management rules will also be imposed on AI systems.

MEPs extended the Commission’s proposal to encompass certain high-risk areas: they proposed that AI systems used to influence voters in political campaigns, as well as predictive text-suggestion apps used by social media platforms with more than 45 million users (the threshold under the Digital Services Act), be classified as high-risk.

Voting behaviour of the Left group

The Left in the European Parliament – GUE/NGL (Confederal Group of the European United Left – Nordic Green Left), the smallest of the seven groups in the European Parliament – tabled a total of 16 amendments. It demanded that the high-risk areas be clearly identified and that the regulations in these areas not be watered down or qualified. In addition, the high-risk systems defined in Annex III of the AI Act should be covered. These include, for example, biometric identification and categorisation of individuals; management and operation of critical infrastructure; and systems that decide on the allocation or assessment of individuals in education, employment, human resource management, access to self-employment and, more generally, access to public services and benefits, as well as law enforcement, migration, asylum and border control, the administration of justice and democratic processes.

Had its amendments been adopted, the Left Group would probably have voted in favour of the law. As it stood, its only options were rejection or abstention. Since the Left Group agreed with the plenary’s call for a ban on facial recognition, it ultimately abstained.

Gaps in the AI Act

Although the attempt to regulate artificial intelligence across the EU is commendable and positive, the EU AI Act provides insufficient answers to the fundamental questions raised by this new technology. In capitalism, AI represents another area where dead labour prevails over living labour. This is in contrast to the common goal of the communist parties as formulated by Marx, namely, to achieve the “emancipation of labour”3.

Aside from matters of ideological principle, the EU should do more to stand up to US technology giants on a geopolitical level through the development within the EU of an independent AI on a public-service basis. Such a project should be accompanied by an EU-wide popular democratic conversation to define the limits of where AI can legally be used and what data can be used to train it.

A similar approach to new advanced technologies has already proven successful in the EU. The experience of building the international nuclear research centre CERN demonstrates the benefits of a community-based infrastructure that has produced groundbreaking scientific achievements worldwide. CERN has successfully and transparently collaborated with relevant research institutions around the world and continues to do so. Another example of a collaborative initiative is the EU Commission’s establishment of the world’s first mobile phone standard, 2G, which allowed competing companies to start producing compatible phones, leading to rapid mobile phone penetration in the EU.

In light of the growing ecological crisis, it does not make sense to neglect the energy requirements of developing, applying, and training artificial intelligence. Instead, priority should be given to research on alternative AI projects that require neither the large energy consumption nor the complex training with large amounts of data (such as LIVE AI, which needs only a small piece of software and can learn continuously).

There is an urgent need to consider the long-term cultural impact of AI. It is still unclear how chatbots will affect the learning behaviour of young people and adults, and how this will impact the innovative strength of companies. Will people’s creativity be preserved when direct and immediate interactions with the environment are replaced by responses from AI? These questions seem more significant than the, in my opinion, unfounded expectations of transhumanists regarding a dethronement of humanity and a seizure of power by AI.

Most alarming is that the EU draft ignores questions regarding the impact of AI on the world of work. Automation has led to massive job losses, especially in industrialised countries, accompanied by a deterioration of social systems. Given the warnings about potential job losses, it would be appropriate to proceed with caution, or even to halt the introduction of AI, in particularly hard-hit industries. Contrary to the utopian hopes of some Marxists in the 19th century, the later rapid “scientific and technological progress” did not mean that alienated and exploited labour was replaced by free labour and “a large quantity of disposable time”4. Under capitalism, the freely disposable time that competitive processes and ever new technologies objectively produce appears as unemployment and joblessness, which is accompanied not by freedom but by pauperisation.

In line with the neoliberal orientation of the EU, the EP draft focuses on market-based requirements that AI systems must fulfil. Areas such as human rights are often mentioned, but how compliance is to be ensured in concrete terms remains open. And finally, I emphasise my ceterum censeo: as with many other laws, the military sector and intelligence services are excluded from regulation, yet it is precisely in these areas that a high degree of transparency and control is needed.

References:

  1. Adler, Frank (1978). Zu einigen Grundmerkmalen der wissenschaftlich-technischen Revolution. Arbeitskreis wissenschaftlich-technische Intelligenz, Vienna, p. 41.
  2. Regulation of the European Parliament and of the Council (COM/2021/206 final) Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.
  3. Interview with Karl Marx, Chicago Tribune, 5 January 1879, www.marxists.org/archive/marx/bio/media/marx/79_01_05.htm.
  4. Marx, Karl. Grundrisse: Foundations of the Critique of Political Economy, Notebook VII – The Chapter on Capital. Penguin Classics. London: Penguin Books, 1993, p. 708.
