
AI Act & EU

Bahar Şahin

A robot and human interaction illustration

The range of areas in which artificial intelligence is used grows by the day, and it remains one of the fields most open to innovation. At the same time, the ethical concerns surrounding artificial intelligence are widely debated. The European Union's Artificial Intelligence Act, which grows out of this debate, is the first regulation of its kind.


Following the European Parliament's political consensus on the Artificial Intelligence Act ("AI Act") reached on 21 April, the act is expected to be put to a vote at the next plenary session.


Artificial Intelligence Act Overview


Given how technical and heavily debated the subject is, the Commission naturally sought expert opinions while preparing the act: more than 300 companies and industry organizations were consulted, and a total of 1,215 stakeholders contributed to the act.


The Commission developed several policy options for regulating artificial intelligence. The options assessed in its impact analysis were, in essence, the following:

  1. Establishing EU legislation enabling a voluntary labelling scheme

  2. Adopting a sector-specific approach

  3. Establishing horizontal EU legislation based on a proportionate, risk-based approach

  4. Establishing horizontal EU legislation based on a proportionate, risk-based approach, combined with voluntary codes of conduct for non-high-risk artificial intelligence systems

  5. Establishing EU legislation imposing mandatory rules on all artificial intelligence systems regardless of the risk they pose

After gathering these opinions and completing its impact analysis, the Commission concluded that the most beneficial option for both industry and individuals would be horizontal EU legislation based on a proportionate, risk-based approach, combined with voluntary codes of conduct for non-high-risk artificial intelligence systems.


Under the preferred policy, it is expected that individuals will place greater trust in artificial intelligence, that companies will gain legal predictability, and that EU member states will be able to act jointly within the common market without concern. Indeed, as the market has grown, earlier decisions against artificial intelligence applications such as data scraping and facial recognition software have created uncertainty for startups and significantly affected compliance processes. On the other hand, it is worth noting that the act can be a beneficial development for the decision-making and business development processes of startups seeking to enter the European market.


Offering a different perspective, the Commission stated that the draft law would ensure that artificial intelligence developers and users take preventive yet effective measures, that by targeting high-risk artificial intelligence systems it would prevent violations of fundamental rights and freedoms and threats to the safety of individuals, and that it would establish an effective enforcement procedure against such violations.


The Commission's impact analysis also accounts for the compliance costs that companies and users will face.


Artificial Intelligence and Fundamental Rights


The European Union takes the view that artificial intelligence ventures can affect certain fundamental rights. The ethical aspects of artificial intelligence applications have recently been discussed intensively, and even though these applications are often ones used in everyday life, it is considered that they may result in violations of some fundamental rights.


In terms of fundamental rights, for example, the Commission stated that the act would improve the protection of rights enshrined in the Charter of Fundamental Rights of the European Union, such as human dignity (Article 1), respect for private life and the protection of personal data (Articles 7 and 8), non-discrimination (Article 21), and equality between women and men (Article 23).


The act is also intended to prevent chilling effects on the freedom of expression (Article 11) and the freedom of assembly and association (Article 12), and to protect the right to an effective remedy and to a fair trial, the presumption of innocence and the right of defence (Articles 47 and 48), as well as the general right to good administration.


In addition to the effects listed above, the Commission stated that the act would, as far as possible, positively affect the rights of certain specific groups. Examples include workers' rights to fair and just working conditions (Article 31), a high level of consumer protection (Article 38), the rights of the child (Article 24), and the integration of persons with disabilities (Article 26). It is also noted that the right to a high level of environmental protection and the improvement of the quality of the environment (Article 37), whose importance has become increasingly clear in recent years, will be relevant when its relation to the health and safety of individuals is considered.

Although the act restricts the freedom of the arts and sciences and the freedom to conduct a business to some extent, it has been stated that it will benefit the sector overall.


Prohibited Artificial Intelligence Systems


Article 5 of the act lists the prohibited artificial intelligence practices. The first is artificial intelligence applications that use subliminal techniques beyond a person's consciousness to distort that person's behaviour in a way that causes or may cause physical or psychological harm to that person or to another person.


Secondly, artificial intelligence systems that exploit the vulnerabilities of certain groups due to their age, gender, or mental or physical disability in order to materially distort the behaviour of a person belonging to that group, causing physical or psychological harm to that person or to others, are prohibited. For example, it is in the interest of people's health, and that of their relatives, that a chatbot cannot encourage a person with a psychological disorder to harm themselves or those around them.


Thirdly, artificial intelligence systems placed on the common market by or on behalf of public authorities that score natural persons based on their social behaviour or personality traits, where those scores lead to detrimental, disproportionate or unjustified treatment of individuals, are prohibited. China's social credit system is a well-known example.


Finally, the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited. As an exception to this rule, the use of such systems is permitted for three main purposes, which are as follows:


  1. Targeted searches for specific potential victims of crime, including missing children

  2. Preventing a specific, substantial and imminent threat to the life or physical safety of natural persons, or a terrorist attack

  3. Finding or locating a suspect, or executing the sentence of a convicted person, under the criminal laws of EU member states


High-Risk Artificial Intelligence


The draft does not directly define high-risk artificial intelligence. Instead, a system placed on the market is considered high risk if it satisfies two conditions, even if it does not fall within the groups expressly listed in the legislation. In this context, we believe that the chosen policy strategy is restrictive for the market but healthier in terms of fundamental rights.


The conditions for high risk set out in Article 6 of the AI Act are as follows:

  1. The AI system is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonization legislation listed in Annex II of the draft

  2. The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment before being placed on the market or put into service under the EU harmonization legislation listed in Annex II

At the same time, it has been stated that the artificial intelligence systems listed in Annex III of the draft will also be considered high risk. These systems are, briefly, the following (a simplified sketch of this classification logic is given after the list):

  1. Biometric identification and categorization of natural persons

  2. Management and operation of critical infrastructure, such as natural gas and electricity supply

  3. Education and vocational training, such as systems that assess students' abilities and knowledge

  4. Employment, management of workers and access to self-employment

  5. Assessment of access to and enjoyment of essential private and public services

  6. Certain uses in the context of law enforcement

  7. Migration, asylum and border control management

  8. Assisting judicial authorities in researching and interpreting facts and the law in concrete cases
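
Taken together, the two Article 6 conditions and the Annex III list form a small decision procedure. The snippet below is a minimal, purely illustrative sketch of that logic in Python; the names used (AISystem, classify_risk, the field names and area labels) are our own assumptions for illustration and are not terms defined in the draft.

```python
# Illustrative sketch of the risk-classification logic described above.
# All names here are hypothetical; the draft defines no such API.
from dataclasses import dataclass
from typing import Optional

# Annex III areas as summarized in the list above (simplified labels).
ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "essential_services_access",
    "law_enforcement",
    "migration_asylum_border_control",
    "administration_of_justice",
}

@dataclass
class AISystem:
    is_prohibited_practice: bool           # falls under Article 5 (e.g. subliminal manipulation)
    is_safety_component_or_product: bool   # Article 6(1)(a): safety component of / itself a product under Annex II legislation
    needs_third_party_assessment: bool     # Article 6(1)(b): third-party conformity assessment required
    annex_iii_area: Optional[str] = None   # one of ANNEX_III_AREAS, if applicable

def classify_risk(system: AISystem) -> str:
    """Return a rough risk tier following the structure of the draft."""
    if system.is_prohibited_practice:
        return "prohibited"
    # Article 6(1): both conditions must hold for this route to high risk.
    if system.is_safety_component_or_product and system.needs_third_party_assessment:
        return "high-risk"
    # Annex III: systems used in the listed areas are also high risk.
    if system.annex_iii_area in ANNEX_III_AREAS:
        return "high-risk"
    return "non-high-risk"

# Example: a CV-screening tool used in recruitment (Annex III, employment).
print(classify_risk(AISystem(False, False, False, "employment_and_worker_management")))  # high-risk
```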


Content of the Artificial Intelligence Act


The draft Artificial Intelligence Act first sets out the definitions needed for the sector. After the definitions, it addresses the prohibited artificial intelligence practices, as explained in detail above.


The third title classifies high-risk artificial intelligence systems. Alongside the classification, it sets out requirements for these systems on data and data governance, documentation and record keeping, transparency and the obligation to inform users, human oversight, robustness, accuracy and security. It also contains rules on the obligations of other stakeholders and on the creation of a new AI board.


The fourth title regulates transparency obligations for certain artificial intelligence systems. Transparency is required for applications that interact with people, that detect people's emotions or categorize them on the basis of biometric data, or that create or manipulate content, such as deepfakes.


The fifth title contains measures supporting innovation. It aims to establish mechanisms that reduce the regulatory burden on SMEs and startups and allow EU member states to adapt the legislation to developing technology.


The sixth, seventh and eighth titles contain provisions on governance and implementation. They cover, in order, the "European Artificial Intelligence Board" and its organization, the designation of national competent authorities, and the EU database for stand-alone high-risk artificial intelligence systems. Arrangements have been made so that the Board can act quickly against possible violations.


The ninth title covers the codes of conduct that can be adopted voluntarily for non-high-risk artificial intelligence systems. Through these codes, providers of non-high-risk AI systems can voluntarily apply the requirements that are mandatory for high-risk AI systems; the codes may also contain voluntary commitments on environmental sustainability, accessibility for people with disabilities, stakeholder involvement in the design and development of AI systems, and the diversity of development teams.


Potential Impacts on Startups


First of all, the act affects many startups in the ecosystem, since the rules, frameworks and codes of conduct to be created can reach solutions such as edtech and insurtech. Compliance is therefore of great importance for startups. Taking actions such as protecting personal data; documenting, analyzing, cleaning and keeping records of data; carrying out conformity assessments; and fulfilling the related responsibilities will be beneficial.


It would also be beneficial to follow and contribute to the "regulatory sandbox" schemes that can be created so that the law does not overburden SMEs and entrepreneurs. Startups, developers, consultants and investors all have a role in this process.


For the processes of building and supervising artificial intelligence systems, establishing a human oversight procedure becomes important, in line with the Commission's recommendation. Although the act is seen as limiting innovation, the impact analysis prepared with stakeholder input shows that it aims to balance the competing interests as far as possible in order to prevent human rights violations. In this context, EU member states are expected to establish predictable, proportionate and clear obligations.


At the same time, startups that want to enter the European market should take the cost of compliance into account. Although an average cost estimate has been made, the actual cost may vary according to the structure of the business. For artificial intelligence startups in the middle of an investment round, this cost will become all the more important if the act becomes law.


At the end of the day, predictable rules are being created for unpredictable artificial intelligence systems. While the goal is to keep the legislation aligned with the development of these systems, it is clear how important it is for lawmakers to cooperate with artificial intelligence experts.

 

This article was prepared on the basis of the EU Artificial Intelligence Act and its annexes.


