The impact of the EU’s AI Act on responsible and ethical AI development


Iñaki Peeters

AI Solutions Analyst

Artificial intelligence (AI) has rapidly transformed our world, seamlessly integrating into various aspects of our lives, from healthcare and transportation to finance and entertainment. This transformative power, however, comes with inherent risks that need to be carefully managed to ensure the responsible and ethical development and deployment of AI systems. Recognizing this need, the European Union (EU) has recently adopted the Artificial Intelligence Act (AIA), a landmark piece of legislation that aims to establish a comprehensive regulatory framework for trustworthy AI.

The AI Act: A World-First Regulatory Initiative

The AI Act stands as the world's first comprehensive AI regulation, making Europe the first major jurisdiction to set a benchmark for others to follow. Its key objective is to promote trustworthy AI while safeguarding the fundamental rights and values enshrined in the EU Charter of Fundamental Rights. The Act recognizes that AI can bring immense benefits to society, but it also acknowledges the potential risks associated with certain AI systems, such as discrimination, manipulation, and harm to human safety.


Navigating the Risk-Based Approach

Central to the AI Act's regulatory framework is a risk-based approach, classifying AI systems into four categories based on their potential harm:

1. Unacceptable Risk: AI systems that pose an unacceptable threat to human safety, security, or fundamental rights are prohibited outright. This includes systems that enable indiscriminate mass surveillance, predict criminal behavior based solely on personal traits, or manipulate people's behavior through subliminal or exploitative techniques.

2. High Risk: AI systems that pose a high risk to health, safety, or fundamental rights are subject to stringent requirements, including conformity assessments, audits, and reporting mechanisms. Examples include AI-powered medical diagnosis, autonomous vehicles, and AI-based hiring tools.

3. Limited Risk: AI systems that pose a limited risk must comply with transparency obligations, so that users know they are interacting with or viewing AI-generated output and can make informed decisions. This includes chatbots and systems that generate or manipulate content.

4. Minimal Risk: AI systems that pose minimal risk are not subject to specific requirements but are still encouraged to adhere to good practices and ethical principles. This includes systems used for spam filtering, traffic light control, and language translation.
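As a loose sketch (and emphatically not legal guidance), the tiered logic above can be modeled as data. The use cases and their tier assignments below are illustrative assumptions for this article, not the Act's actual legal tests, which depend on its annexes and a case-by-case assessment:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified paraphrase of the AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations: conformity assessment, documentation, monitoring"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations; voluntary codes of conduct"

# Illustrative mapping only -- real classification requires legal analysis.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI-assisted medical diagnosis": RiskTier.HIGH,
    "CV-screening hiring tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the (simplified) regulatory consequence for a use case."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in EXAMPLE_CLASSIFICATION:
    print(obligations_for(case))
```

The point of the sketch is that obligations attach to the tier, not the technology: the same underlying model can land in different tiers depending on its intended use.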


Impact on AI Service Companies

The AI Act presents both challenges and opportunities for AI service companies. On the one hand, it introduces compliance obligations: companies must assess the risk level of their AI systems, conduct conformity assessments, and implement robust documentation and monitoring procedures. This added complexity may increase development costs and raises concerns about the potential impact on innovation.
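To make those compliance steps concrete, here is a hypothetical checklist sketch for a high-risk system. The item names are our simplified paraphrases of the Act's obligations, not its legal terminology:

```python
from dataclasses import dataclass

@dataclass
class ComplianceChecklist:
    """Hypothetical AI Act checklist for a high-risk system (simplified)."""
    risk_tier_assessed: bool = False
    conformity_assessment_done: bool = False
    technical_documentation_ready: bool = False
    post_market_monitoring_in_place: bool = False

    def outstanding(self) -> list[str]:
        """List the checklist items that are not yet completed."""
        return [name for name, done in vars(self).items() if not done]

cl = ComplianceChecklist(risk_tier_assessed=True)
print(cl.outstanding())
```

In practice each item would expand into its own workstream, but even a simple structure like this helps teams track where they stand on the road to compliance.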

On the other hand, the AIA also creates a more transparent and accountable AI ecosystem, fostering trust among users and regulators. We believe this will open new market opportunities for AI service companies that can demonstrate their commitment to responsible AI development and use. The AIA also encourages the creation of regulatory sandboxes and real-world testing environments, providing opportunities for AI service companies to experiment with innovative solutions in a controlled setting.


Industry-Wide Implications

The AI Act's impact extends beyond individual AI service companies, shaping the broader AI landscape. The risk-based approach encourages the development of AI systems that are more explainable, transparent, and robust against biases. This can lead to the creation of AI solutions that are more trustworthy and socially beneficial. The AI Act also promotes a more collaborative approach to AI development, encouraging stakeholder engagement and open dialogue. This can foster a more informed and responsible AI ecosystem, reducing the risk of unintended consequences and promoting ethical AI practices.


A Global Push for AI for Good

The AIA's impact extends far beyond EU compliance: as the first comprehensive framework of its kind, it is likely to influence AI standards and legislation in other jurisdictions, giving companies that align with it early a head start wherever similar rules are adopted.

Moreover, the AIA brings researchers, developers, users, and regulators together to shape the future of AI through open dialogue, helping ensure that AI is developed and deployed ethically across borders.



The AI Act marks a significant step towards regulating AI responsibly and ethically. While it introduces additional compliance requirements for AI service companies, it also creates opportunities for innovation and collaboration, and it can help shape a future where AI benefits society while fundamental rights and values are safeguarded. As AI continues to transform industries and daily life, the AIA provides a crucial framework for ensuring that the technology is developed and deployed with trust, transparency, and responsibility. Faktion fully endorses the AI Act and, to support our customers, has developed an AI Act Assessment methodology as part of our strategic services, helping customers assess the Act's impact on their business and build a compliance roadmap.

Get in touch!
