

Critical Deadlines for Companies Utilizing Artificial Intelligence 

The adoption of the Artificial Intelligence Act marks a significant milestone for businesses that develop, market, or utilize artificial intelligence (“AI”). This regulatory framework imposes stringent deadlines, requiring entities to meet specific obligations concerning designated AI systems. It is imperative for companies integrating AI into their operations, or whose products and services are AI-driven, to closely monitor these deadlines.

Overview of the Artificial Intelligence Act: 

Official Title: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending various EU regulations and directives (hereinafter referred to as the “AI Act”). 
Accessibility: The full text of the AI Act is available at EUR-Lex. 
Publication in the Official Journal of the EU: 12 July 2024 
Entry into Force: 1 August 2024, 20 days post-publication 
General Application Date: 2 August 2026, two years following its entry into force, with specified exceptions delineated below. 

The AI Act’s implementation will be phased, initially targeting the most high-risk AI practices, with subsequent application to lower-risk systems. Certain provisions, however, will come into force at later dates. 
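
For readers who prefer a quick reference, the phased timeline can also be restated programmatically. The short Python sketch below is purely illustrative and carries no legal weight; it simply lists the application dates discussed in this article.

  from datetime import date, timedelta

  # Key dates of the AI Act's phased application, as described in this article.
  # Illustrative only; the dates themselves are fixed in the Act.
  publication = date(2024, 7, 12)                      # Official Journal publication
  entry_into_force = publication + timedelta(days=20)  # 1 August 2024

  phased_application = {
      date(2025, 2, 2): "Chapter II (prohibited AI practices)",
      date(2025, 8, 2): "Chapter III Section 4, Chapters V, VII, XII and Article 78",
      date(2026, 8, 2): "General application of the AI Act",
      date(2027, 8, 2): "High-risk AI systems that are safety components of products",
  }

  print(f"Entry into force: {entry_into_force:%d %B %Y}")
  for deadline, scope in sorted(phased_application.items()):
      print(f"{deadline:%d %B %Y}: {scope}")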

Key Deadlines: 

2 February 2025: 

Chapter II of the AI Act, which governs prohibited AI practices, will become effective. From this date forward, AI may no longer be used for the purposes, or in the systems, that the AI Act designates as prohibited. This marks the AI Act’s initial application phase, focused on the prohibition of specific AI usages. 

2 August 2025 (One year prior to general application): 

Several critical sections of the AI Act will begin to apply: 

  • Chapter III, Section 4: This section sets out member states’ obligations to establish notifying authorities and notified bodies, detailing their roles and the notification procedure for bodies that assess the conformity of high-risk AI systems. Member states must initiate procedures for designating and establishing these authorities by this date, as they will oversee future conformity assessments of high-risk AI systems. 
  • Chapter V: This chapter outlines the obligations concerning general-purpose AI models, including large language models. From this date, providers and authorized representatives of general-purpose AI models must comply with the obligations stipulated in the AI Act concerning these models. 
  • Chapter VII: This chapter establishes administrative bodies at both the EU and member state levels, defining their competencies. These bodies will begin to be established and operational from this date. 
  • Chapter XII: This chapter requires member states to establish rules regarding penalties and other enforcement measures for violations of the AI Act. From this date, Slovenia must define penalties for non-compliance, allowing for the imposition of fines for breaches. However, the provisions concerning fines for general-purpose AI model providers, which may be imposed by the Commission, will only take effect from the general application date of the AI Act (i.e., from 2 August 2026, and not from 2 August 2025). 
  • Article 78 of the AI Act stipulates that the Commission, market surveillance authorities, notified bodies, and all other natural or legal persons involved in the application of the AI Act must respect the confidentiality of information and data obtained in the performance of their tasks and activities. These confidentiality obligations therefore bind the authorities even before the general date of application of the AI Act. 

2 August 2027 (One year after general application): 

Provisions pertaining to high-risk AI systems used as safety components of a product, or where the AI system is itself a product subject to a conformity assessment, will begin to apply. Such systems must be brought into compliance with the AI Act by this date, one year after the general application deadline. 

Exceptions and Extended Deadlines: 

The AI Act also specifies exceptions regarding the application timeline for certain AI models placed on the market prior to specific dates. Notably: 

  • Providers of general-purpose AI models placed on the market before 2 August 2025 must ensure compliance with all relevant obligations by 2 August 2027. This gives providers additional time to align existing models with the AI Act. 
  • AI systems that are components of large-scale IT systems established by the legal acts referenced in the AI Act, and that were placed on the market or put into service before 2 August 2027, must achieve full compliance by 31 December 2030. 
  • The AI Act applies to operators of high-risk AI systems placed on the market or put into service before 2 August 2026 only if those systems undergo significant changes to their design after that date. Nevertheless, providers and deployers of high-risk AI systems intended for use by public authorities must take the necessary steps to comply with the AI Act’s requirements by 2 August 2030. 
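
The transitional deadlines listed above can likewise be collected into a simple reference list. The following Python sketch is illustrative only and paraphrases the exceptions described in this article; the AI Act’s transitional provisions remain authoritative.

  from datetime import date

  # Transitional compliance deadlines for systems and models already on the market,
  # simplified from the exceptions summarized above.
  transitional_deadlines = [
      ("General-purpose AI models placed on the market before 2 August 2025",
       date(2027, 8, 2)),
      ("AI components of large-scale IT systems placed on the market before 2 August 2027",
       date(2030, 12, 31)),
      ("High-risk AI systems for public authorities placed on the market before 2 August 2026",
       date(2030, 8, 2)),
  ]

  for cohort, deadline in transitional_deadlines:
      print(f"{cohort}: compliant by {deadline:%d %B %Y}")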

Key Milestones for Corporate Compliance with the AI Act: 

Entities engaged in AI must, by 2 February 2025, conduct a thorough analysis to determine whether any of their AI uses constitute a prohibited practice and, if so, discontinue those practices. Providers and authorized representatives of general-purpose AI models must begin complying with the relevant obligations under the AI Act by 2 August 2025 at the latest. All other obligations concerning AI systems must be fulfilled from 2 August 2026 onwards. It is also critical to assess whether an AI system or model benefits from any of the exceptions that extend the application of the AI Act beyond the general date. 

Managing Associate