Artificial intelligence (AI) has long been one of the leading topics in the IT industry. Its technical development brings many opportunities, but also certain risks, especially those related to the great freedom and constantly growing range of AI applications. In everyday life we already encounter solutions such as Google Bard, DALL·E 2, or OpenAI's ChatGPT, presented in November 2022. But AI algorithms began to be used much earlier in various fields, for example in the broadly understood e-commerce market, which uses them to profile customers' interests and then recommend specific products to them in online stores.

What risks do lawyers see in the development of AI?

Experience shows that the development of technology also entails the development of legal regulations intended to ensure the safe use of these technologies, or at least to limit the risk of their misuse. Challenges are therefore also growing in the legal area, whose task is primarily to define a framework for the ethical and proper use of such solutions in many areas of life, as well as to predict and prevent their use in a manner contrary to the law, for example to commit crimes, conduct excessive surveillance of society, or violate human rights. The example of China can help raise awareness of these risks: the country uses a social scoring system based on access to big data and artificial intelligence, and as AI develops, its citizens are surveilled ever more efficiently through specific (government-approved) mobile applications and video monitoring systems.

Another issue underlying these legal analyses is the risk of errors in the operation of artificial intelligence that provides data used to make decisions in enterprises. This may seem abstract and distant to the average person, but once we realize that AI-based algorithms can, for example, measure the efficiency of an employee's work and influence the assessment of that work, remuneration, bonus systems, and even a decision to lay the employee off, we can better understand threats that can affect anyone. For this reason, some organizations prohibit their employees from using AI technology as part of their jobs. The European Union is the first organization in the world to try to regulate the area of artificial intelligence from a legal perspective, which will be important not only for entrepreneurs but also for employees and consumers.

Progress of work in the European Union in the field of AI

Observing the pace of legislative work on the draft EU legal act, it is clear that the topic of artificial intelligence has aroused great interest and discussion in the European Union, and the draft has undergone major changes several times.

Almost three years ago, in April 2021, the European Commission presented a proposal for a regulation on artificial intelligence, i.e. the AI Act (AIA). For obvious reasons, the proposal did not cover the generative and foundation artificial intelligence systems that are most popular today; they were added only during the legislative work, influenced by, among other things, the growing interest in OpenAI's ChatGPT and Google's Bard.

Then, on June 14, 2023, the European Parliament adopted its position on the so-called AI Act, which is intended to regulate the creation and use of software based on artificial intelligence. This is the first legal act of its kind on a global scale. After lively discussions on the draft within the so-called trilogue between the Council, the European Parliament, and the European Commission, the draft regulation was adopted unanimously on February 2, 2024.

The next stage in the procedure for adopting the AI Act will be a vote in the European Parliament committees, expected during the session on February 13, 2024; the regulation may then be finally adopted by the European Parliament during a session in March or April 2024. Only then will the final version of the AI Act be known. For now, one can only read the preliminary versions of the draft regulation, which leaked onto the Internet in the second half of January 2024.

The regulation will enter into force 20 days after its publication in the Official Journal of the EU. The bans on prohibited practices will apply 6 months after publication, and the obligations regarding AI models one year after publication. The remaining provisions will apply only after 2 years, with one exception: for the classification of AI systems that must undergo a conformity assessment as high-risk systems under other provisions of EU law, an additional year is provided.

It should be emphasized that this legal act takes the form of an EU regulation. In practice, this means that, unlike the more frequently used EU directives, the regulation is directly applicable and does not require implementation into national law by individual EU Member States. Its entry into force is certain, and (in contrast to indirectly applicable directives) Member States cannot delay it at the stage of adapting their national rules, as is often the case. The regulation will therefore apply in the same way in every EU country. An example of a similar, already well-known solution is the famous EU roaming regulation, which entered into force on June 15, 2017, and turned out to be a revolution on the telecommunications services market for both customers and telecommunications operators.

Interestingly, however, subsequent legal regulations concerning, for example, liability for AI are to take the form of directives. This means they will contain basic legal guidelines from the EU legislator, which each Member State will then be able to expand on in its national regulations when implementing the directives into national law; their actual application will therefore likely be delayed.

Areas regulated by the EU Artificial Intelligence Regulation

As already mentioned above, the final version of the regulation will probably be known only in March or April 2024, but based on what has already appeared online, several important issues addressed by this legal act can be indicated.

The most important thing for every citizen of the European Union is certainly the list of unacceptable practices in the field of artificial intelligence, which include:

  • cognitive-behavioral manipulation of people using subliminal techniques or exploiting the vulnerabilities of a specific sensitive group (e.g. due to age, disability, or mental disorders), as well as other techniques intended to deliberately manipulate a person;
  • social scoring of citizens, i.e. the classification of natural persons based on their behavior, socioeconomic status, or personal characteristics;
  • biometric categorization systems that categorize natural persons according to sensitive or protected attributes and characteristics, or based on inferences about those attributes and characteristics;
  • biometric identification in publicly accessible places, except for its use by law enforcement agencies or entities acting on their behalf for law enforcement purposes;
  • emotion recognition used in law enforcement, border management, the workplace, or educational institutions;
  • creating or expanding facial recognition databases through mass scraping of biometric data from social media or CCTV footage;
  • predicting the risk that citizens will commit a crime.

The regulation also addresses issues such as:

  • the obligation to inform users that they are dealing with AI-generated content, which is very important given the increasingly frequent appearance online of videos that illegally use the image of famous people to persuade viewers to take advantage of a given offer (this most often concerns financial fraud);
  • four categories of risk for AI-based systems, with the obligation to implement security measures appropriate to each category;
  • protection of persons whose rights have been violated by a decision made using AI, through the possibility of filing a complaint with the appropriate state authority and then appealing against its decision in court;
  • financial penalties of up to EUR 35 million or 7% of total annual global turnover (including for violating the prohibitions on the use of unauthorized AI systems).

The European Union is a leader in the legal regulation of artificial intelligence, and this is certainly a step in the right direction. Nevertheless, the leaders in developing AI-based technologies are the United States and China; technological development is progressing rapidly, and the law unfortunately cannot keep up with it. Given that some of these provisions will apply only up to 2 years after the adoption of the AI Act, the EU regulation may no longer match AI technologies created after its adoption.

On the other hand, this situation is also an opportunity for Polish entrepreneurs, who, observing technological progress and legal regulation in this area, are beginning to notice these opportunities. It is also increasingly clear that AI needs strong human oversight, which may alleviate fears of job losses in positions where human work could be replaced by AI.

Written by: Izabela Wilczkowska

Footnotes: 

  1. Documentary film titled “China is taking over the world”
  2. https://www.linkedin.com/posts/dr-laura-caroli-0a96a8a_ai-act-consolidated-version-activity-7155181240751374336-B3Ym/
