Artificial intelligence (AI) is a rapidly evolving technology with the potential to deliver significant benefits to society and the economy across a wide range of industrial sectors: it can improve processes, optimize operations, and enable efficient predictions. Yet, for all its promise, many of its elements and techniques can also have negative consequences for society.
In response, the European Union has proposed legislation specifically aimed at ensuring the ethical use of AI and safeguarding fundamental rights.
I. The Regulatory Context
The past decade has witnessed the rapid technological advancement of artificial intelligence, particularly with the emergence of generative AI: deep learning models that use techniques such as natural language processing (NLP) to create content comparable to that produced by humans. Key examples include ChatGPT, Midjourney, and Bard.
In this context, challenges emerged alongside the development of AI systems, notably bias, a tendency to produce results in favour of or against a person, object, or position, and opacity, where AI systems become too complex for humans to understand. These challenges demonstrated the need to regulate AI through mechanisms that make algorithms more transparent, safe, ethical, and trustworthy, while preserving the smooth functioning of the internal market.
II. Analysis of the Regulation Proposal
II.I. Objectives of the AI Regulation Proposal
Among the main objectives of the AI Regulation Proposal are:
- Ensuring that AI systems placed on the market are safe and respect fundamental rights.
- Providing legal certainty to facilitate investments and innovation in the field.
- Making the supervision of AI systems more efficient.
- Facilitating the development of a single market for AI systems to prevent market fragmentation.
II.II. Scope of application of the AI Regulation Proposal
Regarding the scope of application, the Proposal classifies AI systems according to the risk they pose to fundamental rights. AI systems thus fall into four categories: unacceptable risk (and therefore prohibited), high risk, limited risk, and minimal risk.
AI systems representing an unacceptable risk are prohibited, as they pose a serious threat to health, safety, or other fundamental rights. These include subliminal, manipulative, or exploitative systems that cause harm; government social scoring systems; and real-time biometric identification systems in public spaces.
High-risk AI systems are allowed, provided they meet certain requirements, primarily related to transparency and risk management. Obligations include conformity assessments, which function as algorithm impact assessments; maintenance of a risk management system; governance; rigorous testing; and the maintenance of technical documentation and records.
AI systems representing a limited risk are subject to lighter requirements, also related to transparency. These systems must alert users that they are interacting with a machine, so that users are aware of the automated nature of the interaction and can make informed decisions about how to proceed. They must also disclose whether they use technologies such as emotion recognition or biometric categorisation, and notify users when image, audio, or video content has been created or manipulated by AI to represent something false. Finally, AI systems representing minimal risk face no additional obligations under the Proposal.
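The tiered structure described above lends itself to a simple compliance checklist. The sketch below is purely illustrative and not part of the Proposal: the tier names follow the four categories in the text, and the obligation strings are hypothetical shorthand for the requirements just discussed.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories of the AI Regulation Proposal."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Illustrative mapping from each tier to the obligations described above.
# These labels are shorthand, not the Proposal's legal wording.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "conformity assessment",
        "risk management system",
        "governance",
        "rigorous testing",
        "technical documentation and record-keeping",
    ],
    RiskTier.LIMITED: [
        "disclose machine interaction to users",
        "disclose emotion recognition / biometric categorisation",
        "label AI-generated or manipulated content",
    ],
    RiskTier.MINIMAL: [],  # no additional obligations
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative compliance checklist for a given risk tier."""
    return OBLIGATIONS[tier]
```

An organization triaging its AI portfolio could start from a table like this, attaching one checklist per system before deeper legal analysis.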
Furthermore, it is important to note that the regulation would have effect beyond the borders of the European Union: any AI system whose output is used within EU territory would be subject to it, regardless of where its provider is established. Likewise, individuals and companies placing an AI system on the European market, or using one within EU territory, would be subject to the regulation.
III. Next Steps
The AI Regulation Proposal is a global regulatory milestone. Even though it has not yet been approved, organizations should begin adapting now so that they can offer their products and services in compliance with the new rules that will emerge.
AI systems will undoubtedly continue to be developed; to do so lawfully, however, organizations will need to implement risk management systems, conformity assessments, and an internal governance system for AI.
Several bodies, such as the International Organization for Standardization (ISO) and the U.S. National Institute of Standards and Technology (NIST), have already begun developing standards for such systems, which can serve as guidelines for defining organizational practices.
Regulating artificial intelligence in Europe is a multifaceted effort that balances the drive for innovation against the need to protect individual rights, privacy, and ethical principles. It recognizes the transformative potential of AI while addressing the risks the technology poses to society. Although approval may take some time, the regulation should already be taken into account when planning for the future of AI.