EU Legal Framework

Topics: Legal Framework; EU

ℹ️ EU Artificial Intelligence Act

The European Union is leading the way with the EU Artificial Intelligence Act (AI Act), first proposed on 21 April 2021. The Act is the first-ever comprehensive legal framework setting out harmonised rules for the development, deployment and use of Artificial Intelligence in the European Union.

This is a significant piece of legislation which will apply to providers, users, importers and distributors of AI systems, ‘horizontally’ (ie across all sectors) and will have a wide territorial reach (eg it can apply to non-EU organisations that supply AI systems into the EU).

TIMELINE

  • The discussion has been ongoing since 2018: the European Commission (the “Commission”) published its original draft in 2021, followed by the position of the Council of the European Union (the “Council”) in 2022 and the position of the European Parliament (the “Parliament”) in 2023. {See here for a detailed timeline of the EU Artificial Intelligence Act (AI Act) progress so far.}

  • On 14 June 2023 the Parliament approved, after negotiation, its version of the draft EU Artificial Intelligence Act (the “EU AI Act”).

  • On 8 December 2023 the representatives of the European Parliament, EU Member States and European Commission reached a provisional agreement on the EU AI Act.

  • On 2 February 2024 EU Member States voted unanimously to approve the AI Act.

  • 21 May 2024 – The Council formally adopted the EU AI Act.

  • Scheduled to take effect in mid-2024, with a grace period in place (generally two years from entry into force).

  • June–July 2024 – The AI Act will be published in the Official Journal of the European Union. This serves as the formal notification of the new law.

  • After entry into force, the AI Act will apply by the following deadlines (a short date-arithmetic sketch follows this list):

    • 20 days later – The AI Act will “enter into force” 20 days after it has been published in the Official Journal.

      • From this date, the following milestones will follow according to Article 113:

        • 6 months later – Chapter I and Chapter II (prohibitions on unacceptable risk AI) will apply.

        • 12 months later – Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties) and Article 78 (confidentiality) will apply, with the exception of Article 101 (fines for GPAI providers).

        • 24 months later – The remainder of the AI Act will apply, except:

        • 36 months later – Article 6(1) and the corresponding obligations in the Regulation will apply.

        • Codes of practice must be ready 9 months after entry into force according to Article 56.
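For illustration only, the sketch below maps these offsets onto concrete calendar dates from an assumed Official Journal publication date (12 July 2024 is used purely as an assumption for this example). The authoritative application dates are those fixed in the text of the Act itself, which this naive month arithmetic may miss by a day.

```python
# Illustrative sketch only: mapping the Article 113 offsets onto calendar
# dates from an assumed Official Journal publication date. The Act itself
# fixes the authoritative application dates, which naive month arithmetic
# can miss by a day.
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months."""
    idx = d.month - 1 + months
    return d.replace(year=d.year + idx // 12, month=idx % 12 + 1)

publication = date(2024, 7, 12)                    # assumed publication date
entry_into_force = publication + timedelta(days=20)

milestones = {
    "Prohibitions (Chapters I and II)": 6,
    "Codes of practice ready (Art. 56)": 9,
    "GPAI models, governance, penalties": 12,
    "Remainder of the Act": 24,
    "Article 6(1) obligations": 36,
}

print(f"Entry into force: {entry_into_force}")
for label, offset in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"  +{offset:>2} months  {add_months(entry_into_force, offset)}  {label}")
```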

Governance

How will the AI Act be implemented?

  • The AI Office will be established within the Commission to monitor the effective implementation of the Act and compliance by GPAI model providers.
  • Downstream providers can lodge a complaint with the AI Office regarding an upstream provider’s infringement.
  • The AI Office may conduct evaluations of the GPAI model to:
    • assess compliance where the information gathered under its powers to request information is insufficient.
    • investigate systemic risks, particularly following a qualified report from the scientific panel of independent experts.

‘Brussels effect’: As the first legislative proposal of its kind in the world, the AI Act can set a global standard for AI regulation in other legal systems, just as the GDPR has done, thus allowing the European approach to influence generative AI regulation globally.

A ‘risk-based’ approach

The main approach to regulation comprises three structural dimensions:

  1. the classification of AI systems according to risk level;
  2. the classification according to the specific role of actors; and
  3. the classification according to sectoral differences.

The risk levels have so far been at the centre of attention. There is generally a distinction between AI systems which pose:

  • Unacceptable risk –> Prohibited AI practices: AI systems with an unacceptable level of risk to people’s safety are prohibited, as are intrusive and discriminatory uses of AI, such as: “real-time” remote biometric identification systems in publicly accessible spaces; predictive policing systems (based on profiling, location or past criminal behaviour); emotion recognition systems in law enforcement, border management, the workplace and educational institutions; and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and the right to privacy). The higher the risk, the stricter the rules.

  • High risk: AI systems that pose a significant risk of harm to people’s health, safety, fundamental rights or the environment. AI systems used to influence voters and the outcome of elections, and recommender systems used by very large online platforms (VLOPs, those with over 45 million users), were added to the high-risk list.

  • Limited risk: providers of foundation models would be required to comply with a series of transparency measures:

    • Demonstrate through appropriate design, testing and analysis that reasonably foreseeable risks have been properly identified and mitigated;

    • Only incorporate datasets that are subject to appropriate data governance measures for foundation models, including in regard to the suitability of data sources and possible biases;

    • Prepare extensive technical documentation and intelligible instructions for use that allow downstream providers to comply with their respective obligations;

    • Establish a quality management system to ensure and document compliance with the obligations above;

    • Register the foundation model in an EU database to be maintained by the Commission.

    • Disclose that content was AI-generated and ensure that the system has safeguards against the generation of content in breach of EU law.

    • Publish a sufficiently detailed summary of the use of training data protected under copyright law.

  • Low risk: to boost AI innovation and support SMEs, the Act provides exemptions for research activities and for AI components supplied under open-source licenses, as well as regulatory sandboxes (controlled real-life environments established by public authorities) in which AI can be tested before it is deployed. (See also the TDM exception section below.)

However, the AI Act as originally proposed did not address copyright: its risk-based approach is (quite surprisingly) entirely agnostic to intellectual property rights. Tensions between providers of generative AI and copyright holders nevertheless led the European Parliament to include some limited considerations regarding the copyright aspects of machine learning.

The most important point is Amendment 399 to Art. 28b para. 4, lett. (c), which imposes on providers of foundation models the obligation to train, design and develop their models in compliance with Union and national legislation on copyright before making their service available on the market.

The transparency provision (see above) and this amendment seem to help rightsholders make better use of the opt-out option within the TDM exception (refer to the section below). However, meeting the transparency requirement for data used in foundation models seems challenging for several reasons: the fragmentation of copyright legislation across jurisdictions, the lack of a standardised documentation process, insufficient ownership and provenance metadata, and technical obstacles, since algorithms can be trained on a wide array of sources, making it very difficult to trace their origins accurately. Finally, widespread use of the opt-out option might result in a significant reduction in the datasets available for training AI models, which would in turn affect the quality of AI-produced outputs by introducing more biases into AI models and systems; a vicious cycle of concerns.

With the adoption of the Copyright in the Digital Single Market (CDSM) Directive in 2019, the European Union established a harmonised regulatory framework for copyright legislation.

Articles 3 & 4 | TDM

Especially for cultural heritage institutions and research, Articles 3 and 4 of the CDSM Directive introduced copyright exceptions for text and data mining (TDM), which authorize the types of reproductions made in the context of machine learning and the training of ML models on publicly available copyrighted works.

Article 3 enables the free use of lawfully available copyrighted works for text and data mining (TDM) for the purpose of “scientific research” (i.e., natural and human sciences) undertaken by “research organisations” (e.g., universities, libraries, research institutes) and “cultural heritage institutions” (e.g., publicly accessible libraries or museums, archives, film or audio heritage institutions).

Article 4 permits text and data mining of lawfully accessible works by anyone, unless the use of the works has been “expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online” (essentially an opt-out).

This EU framework (EU AI Act & TDM exceptions of the CDSM) respects the rights of creators to exclude their works from ML training data. However, there is a lack of clarity on how the opt-out from Article 4 of the CDSM Directive can be used:

- There is currently no standard legal route by which rightsholders can appropriately and expressly reserve their rights.

- It is also unclear how opt-outs from ML training based on a machine-readable reservation of rights will work in practice, as there are currently no generally recognized standards or protocols for expressing such a reservation.

- One suggested approach is to use robots.txt, a standard used to limit access by web crawlers and search engines; an express opt-out could also easily be included in commercial agreements relating to content. However, it is unclear whether and how opt-outs expressed through these tools will be respected by ML model developers (see the sketch after this list).
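As a purely illustrative sketch of the robots.txt approach: the snippet below shows how a rightsholder might disallow an ML-training crawler while leaving ordinary indexing untouched, and how a developer making best efforts could check the reservation before fetching. The user-agent tokens “ExampleMLBot” and “GenericSearchBot” are invented for this example; as noted above, no recognized standard tokens or protocols exist yet.

```python
# Illustrative sketch only: a robots.txt-style TDM opt-out and a
# best-efforts check by an ML crawler. "ExampleMLBot" is a hypothetical
# user-agent token; no recognized standard exists yet.
from urllib.robotparser import RobotFileParser

# A robots.txt that reserves rights against an ML-training crawler while
# still allowing all other crawlers.
robots_txt = """\
User-agent: ExampleMLBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A developer honouring the reservation would check before fetching any
# page for training data.
for agent in ("ExampleMLBot", "GenericSearchBot"):
    ok = parser.can_fetch(agent, "https://example.com/article")
    print(f"{agent}: fetch allowed = {ok}")
```

Whether real-world crawlers would honour such directives remains, as the text notes, an open question.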

There is a growing recognition among stakeholders of the need for an intervention that would provide guidance both to creators and other rightholders seeking means to express an opt-out from ML training (the ‘supply side’), and to ML developers on respecting opt-out requests and on what constitutes best efforts to comply with their obligations.

Article 14 protection for works in the public domain {under development}

GDPR {under development}

In addition, GDPR rules may already apply to personal data fed into AI models. Compliance with the EU AI Act will build on established GDPR practices.