The First Law Regulating AI

For the first time anywhere in the world, a comprehensive law dedicated to artificial intelligence has been approved by the European Parliament. It will enter into force gradually, becoming fully applicable by mid-2026, once the remaining approval steps involving EU member states are complete.

The AI Act divides AI systems into four categories based on their potential risk: minimal risk, limited risk, high risk, and unacceptable risk.

For AI systems classified as high risk, strict requirements apply in areas such as transparency, data quality and accuracy, human oversight, and security.

AI applications that pose unacceptable risks are banned outright. Transparency requirements ensure that users can recognize content produced by an AI system (deepfake images, for example). The law also seeks to impose strict control over the quality and reliability of the data used to train high-risk AI systems.

Member states will need to establish mechanisms to monitor artificial intelligence systems available on the market and ensure their compliance with the law. In case of violation of the law, large fines can be imposed on companies. This law will have a major impact on the development and use of artificial intelligence technology and could serve as a model for similar regulations around the world.

Key Bans

Artificial intelligence applications that will be directly banned:

  • Using subliminal techniques to manipulate or deceive people
  • Exploiting people’s vulnerabilities arising from their age, disability, or socio-economic situation
  • Using biometric data to detect sensitive characteristics such as race or sexual orientation
  • Classifying people for social scoring purposes
  • Trying to predict whether a person will commit a crime
  • Expanding facial recognition databases by indiscriminately scraping facial images from the internet or CCTV footage
  • Inferring people’s emotions in workplaces and educational institutions

The use of real-time, remote biometric identification by law enforcement in publicly accessible spaces is also prohibited. However, there are important exceptions: searching for victims of kidnapping or sex trafficking and other missing persons, locating persons suspected of serious crimes, and responding to a specific and imminent threat of a terrorist attack.


Many AI systems can fall into the high-risk category. Providers of such systems will need to ensure that they operate under human oversight, offer appropriate levels of accuracy and security, are comprehensively documented, and, in essence, work as promised. High-risk systems include those that serve as safety components of products; perform biometric identification or emotion recognition that is not outright prohibited; are used in recruitment processes; assess eligibility for public services or visas; are used in legal proceedings; or form part of critical infrastructure.

The law speaks of general-purpose AI (GPAI) models rather than "foundation models", the term by which systems such as GPT-4 and Gemini are often known. Institutions and companies developing such models will have to provide up-to-date technical documentation to the European Commission’s new AI Office and to national regulators.

Developers of GPAI models will generally have to publish a "sufficiently detailed summary" of the content used in training and will have to explicitly comply with EU copyright law. However, there are exceptions for GPAI models that have been fully open-sourced, such as Meta’s Llama models, unless they are very powerful.