The AI Act finally comes into force in Europe
The AI (or Artificial Intelligence) Act is a major step in global legislation and strengthens the European Union’s leading role in the regulation of artificial intelligence (AI).
It was published in the Official Journal of the European Union on July 12th 2024 and has been in force since August 1st 2024, making it the first international legislation designed to regulate this kind of technology and promote its safe and responsible use.
Objectives of the AI Act
The new regulation governing artificial intelligence aims to create a clear and harmonised legislative framework at European level, while ensuring that citizens’ fundamental rights are respected and encouraging the development of reliable AI systems. The regulation aims to deal with various issues, including:
- building trust in AI;
- ensuring transparency and responsibility;
- creating new economic and social opportunities;
- incentivising investment and technological innovation in Europe.
One of the AI Act’s main objectives is to establish shared rules to regulate artificial intelligence and encourage cooperation between member states and businesses operating in that sector. The aim is to maintain the EU’s competitiveness at global level, while safeguarding its citizens’ rights and privacy.
Classification of AI systems
The AI Act adopts a risk-based approach to regulating artificial intelligence systems, by subdividing them into various categories according to their potential impact on citizens and companies. The three main categories of risk are:
- Limited risk: AI systems which do not pose a significant risk to fundamental rights and are subject to limited transparency requirements;
- High risk: AI systems used in critical contexts such as biometric recognition, recruitment, student assessment and the management of essential services. These are subject to stricter requirements, including the need to conform to high standards in order to access the EU market. Suppliers of high risk systems will have until 2nd August 2026 to comply with the legislation;
- Banned systems: AI systems considered unacceptable due to the risk they pose to people’s rights, such as cognitive or behavioural manipulation, social scoring or real-time biometric surveillance in public spaces. From 2nd February 2025, these will be banned and subject to sanctions.
Implementation timescale and compliance support
The AI Act’s application will happen gradually over the coming years. Many of its rules will become applicable 24 months after it comes into force, giving organisations and developers the necessary time to prepare.
Organisations that use high risk systems will have until 2026 to comply, while suppliers of general-purpose AI models (GPAI) must comply by 2nd August 2025. Certain rules for high risk systems already on the market will come into force only in the case of significant modifications to their design.
In order to facilitate the transition, the European Commission has created a dedicated AI Office, which will be the agency tasked with monitoring the application of the regulation and supporting member states. It will also be responsible for drawing up codes of conduct and promoting the AI Pact.
Italy and its National AI Strategy
In parallel with the publication of the AI Act, Italy has launched its National Strategy for Artificial Intelligence 2024-26, aimed at making the country a leader in technological innovation through a secure and inclusive approach.
The strategic plan was developed by a panel of experts, supported by the Agency for Digital Italy (AgID), and it aims to promote the development of transparent, reliable AI solutions.
The Italian strategy focuses on four key areas: research, the public sector, companies and training, with the objective of strengthening national competencies and incentivising economic growth through the use of AI.
Translated by Joanne Beckwith