The press service of the European Commission reported that the European Union has enacted a law on artificial intelligence, described as the first regulation of its kind in the world.
The European Artificial Intelligence Act is intended to guarantee safety and respect for citizens’ fundamental rights as artificial intelligence technologies are developed.
Among other things, the law mandates that users be clearly informed when they are interacting with a chatbot rather than a live person, and that AI-generated content be labelled as such.
High-risk AI systems will be subject to additional rules and will require human oversight of their “decisions.” This includes the use of the technology in recruitment, loan assessment, and similar areas.
AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Such high-risk AI systems include for example AI systems used for recruitment, or to assess whether somebody is entitled to get a loan, or to run autonomous robots.
The European Commission
The law prohibits government agencies and companies from using AI systems that threaten fundamental rights, including systems that can manipulate behavior or enable the creation of a “social rating.”
Certain uses, such as emotion recognition systems in the workplace, systems that categorize individuals, and real-time biometric identification for law enforcement purposes, are prohibited, subject to certain exceptions.
Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance encouraging dangerous behaviour of minors, systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing.
The European Commission
The law stipulates that by August 2, 2025, each Member State must designate a national authority to oversee compliance with the legislation.
At the EU level, oversight will rest with the AI Office, supported by three advisory bodies: the European Artificial Intelligence Board, a panel of independent experts that can flag risks it identifies, and an advisory forum drawing on a wide range of stakeholders.
Companies that ignore the law’s requirements can face fines of up to 7% of their global annual turnover, and those that submit inaccurate information can face fines of up to 1.5%.
Most of the law’s rules will apply from August 2, 2026. The ban on AI systems posing unacceptable risk takes effect after six months, and the rules for so-called general-purpose AI models after 12 months.
The European Parliament passed the law on AI in March 2024.
In July 2024, the regulatory authorities of the United States, the European Union, and the United Kingdom signed a joint statement on ensuring effective competition in the field of artificial intelligence (AI), which sets out principles for consumer protection.