EU AI Act: tips and insights

With the EU AI Act, the European Union has introduced comprehensive regulation for artificial intelligence (AI). The Act entered into force on August 1, 2024, and its first obligations, including the bans on prohibited practices, have applied since February 2, 2025. The aim is to ensure the safe and ethical use of AI systems while promoting innovation. For small and medium-sized enterprises (SMEs), this means engaging with the new requirements now at the latest in order to ensure compliance and secure competitive advantages.

 

Why the EU AI Act was introduced

With the increasing use of AI across industries, the EU saw the need to create uniform standards that both minimize risks and maximize opportunities. The EU AI Act classifies AI systems according to their risk potential and sets out corresponding requirements. This is intended not only to strengthen consumer protection but also to increase trust in AI technologies.

 

Step-by-step guide for SMEs

   1. Take stock of the AI systems in use

Create a detailed list of all AI applications in your company. Include both internally developed and purchased systems, as well as individual AI components embedded in other software.
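
To make such an inventory concrete, here is a minimal sketch of how a register could be structured in code. The fields, vendor name, and example entries are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the company's AI inventory (illustrative fields)."""
    name: str
    purpose: str                      # what the system is used for
    origin: str                       # "in-house", "purchased", or "embedded component"
    vendor: str | None = None         # supplier, if purchased
    data_sources: list[str] = field(default_factory=list)
    risk_category: str = "unclassified"  # filled in during step 2

inventory = [
    AISystemRecord(
        name="CV screening tool",
        purpose="pre-sorting job applications",
        origin="purchased",
        vendor="ExampleVendor GmbH",   # hypothetical vendor
        data_sources=["applicant CVs"],
    ),
    AISystemRecord(
        name="Website chatbot",
        purpose="answering customer questions",
        origin="embedded component",
    ),
]

for record in inventory:
    print(f"{record.name}: {record.origin}, risk = {record.risk_category}")
```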

   2. Carry out a risk assessment

Analyze each AI system with regard to its risk profile. The EU AI Act distinguishes between unacceptable, high, limited, and minimal risk (see the overview of risk categories at the end of this article). For example, AI systems in the healthcare sector or in personnel recruitment are often considered high-risk.
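
The sketch below shows one way the four risk tiers could be represented when triaging the inventory from step 1. The keyword-based rules are a deliberately simplified illustration and no substitute for a legal assessment against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict obligations apply
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely unregulated

# Simplified triage keywords (assumed for illustration); real classification
# follows the Act's annexes and requires legal review.
HIGH_RISK_KEYWORDS = {"recruitment", "credit", "health", "critical infrastructure"}

def triage(purpose: str) -> RiskTier:
    """First-pass classification based on the system's stated purpose."""
    text = purpose.lower()
    if "social scoring" in text:
        return RiskTier.UNACCEPTABLE
    if any(keyword in text for keyword in HIGH_RISK_KEYWORDS):
        return RiskTier.HIGH
    if "chatbot" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("automated applicant selection for recruitment"))  # RiskTier.HIGH
print(triage("movie recommendation engine"))                    # RiskTier.MINIMAL
```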

   3. Implement technical and organizational measures

High-risk systems require specific measures; the following apply to almost all of them:

Transparency: Ensure that the functioning of the AI system is comprehensible.

Data quality: Use high-quality and representative data sets to avoid bias (a minimal check is sketched after this list).

Documentation: Create comprehensive technical documentation that describes the development and use of the system.
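
As one concrete example of a data-quality measure, this sketch flags underrepresented groups in a training set. The threshold, group labels, and example data are assumptions chosen for illustration; real checks would cover far more than group shares.

```python
from collections import Counter

def check_representation(labels: list[str], min_share: float = 0.10) -> list[str]:
    """Return groups whose share of the data set falls below min_share.

    A crude proxy for "representative data"; genuine data governance would
    also cover label quality, timeliness, and relevance.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < min_share]

# Hypothetical age distribution in training data for an applicant-screening model
age_groups = ["18-30"] * 70 + ["31-50"] * 25 + ["51+"] * 5

underrepresented = check_representation(age_groups)
print(underrepresented)  # ['51+'] -> investigate before training
```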

   4. Conformity assessment and certification

Have your high-risk AI systems undergo a conformity assessment. Depending on the system, this can be carried out internally or by an external notified body. Allow sufficient time for this and prepare all the necessary documents in advance.

   5. Employee training

Make your employees aware of the requirements of the EU AI Act. Training should cover how AI applications may be used, which data is suitable, and what the company's internal policies require. It is also highly advisable to raise awareness of the ethical and legal aspects of working with AI.

 

Practical tips for implementation

External advice: If necessary, bring in experts to support you with risk assessment and certification.

The big picture: Don’t just look at the technical systems and data quality; take the time to scrutinize your areas of application and the potential impact of AI in light of your company’s values.

Continuous monitoring: Regularly monitor the performance of your AI systems and adapt them to new requirements if necessary (a minimal monitoring sketch follows below).
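
One way continuous monitoring could look in practice: compare live accuracy against a fixed baseline and raise an alert when it degrades. The metric, threshold, and alerting mechanism are illustrative assumptions.

```python
def monitor_accuracy(predictions: list[int], actuals: list[int],
                     baseline: float = 0.90, tolerance: float = 0.05) -> None:
    """Alert when live accuracy drops noticeably below the baseline."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    accuracy = correct / len(actuals)
    if accuracy < baseline - tolerance:
        # In production this might page a team or open a ticket;
        # here we simply print a warning.
        print(f"ALERT: accuracy {accuracy:.2%} below baseline {baseline:.0%}")
    else:
        print(f"OK: accuracy {accuracy:.2%}")

# Hypothetical week of predictions vs. ground truth
monitor_accuracy(predictions=[1, 0, 1, 1, 0, 1, 0, 0],
                 actuals=[1, 1, 1, 0, 0, 1, 0, 1])
```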

 

Discussion of the EU AI Act

The EU AI Regulation is undoubtedly a significant step towards the safe and ethically responsible use of AI. Clear guidelines are particularly necessary in sensitive areas such as medicine, justice or HR. However, while the AI Act is intended to create more trust, it also brings new challenges – especially for SMEs.

While large companies maintain compliance departments, smaller companies often lack the expertise and resources to implement the requirements correctly. A lack of practical support could lead companies to avoid AI instead of using it responsibly. One possible solution would be a graduated model that eases the burden on smaller companies through pragmatic requirements, so as not to stifle their capacity to innovate.

An internationally coordinated strategy would also be needed to prevent companies from relocating AI development to countries with less stringent regulations, a shift that would put Germany at a competitive disadvantage.

Another critical issue is the dynamic nature of AI. The EU AI Act categorizes systems according to a rigid risk model – but AI is constantly evolving. A technology that is considered safe today could be risky tomorrow. An adaptive model with regular reassessment of the categories would make sense here.

The question of liability also remains open: Who is responsible if an (external) AI solution makes mistakes? Clear regulations are needed here to avoid unnecessary uncertainty for companies; for example, a defined division of liability between manufacturers and users, combined with mandatory transparency standards for external providers.

Finally, issues such as the environmental impact of energy-intensive AI models, or retraining opportunities for workers whose professions are displaced or newly created by AI, have hardly been addressed. The EU AI Act would need to be extended here to ensure the social and environmental sustainability of AI technologies.

 

Prospects: The future of AI in the EU and worldwide

While the EU is taking a pioneering role in AI regulation with the AI Act, other countries are pursuing more agile and experimental approaches. The USA is relying more heavily on self-regulation, while China is developing its own standards. Going forward, it will be crucial to promote international cooperation and establish common standards in order to make the global use of AI safe and efficient.

 

Conclusion

The EU AI Act presents SMEs with new challenges, but it also offers the opportunity to strengthen the trust of customers and partners through transparent AI applications that respect the needs of individuals. A proactive approach to the requirements and careful implementation of the measures are the key to success in an increasingly AI-driven world.


Risk categories

Unacceptable risk:
These AI systems are prohibited because they pose a threat to safety or fundamental rights, e.g. social scoring (such as government rating systems for citizens) or real-time facial recognition in public spaces (with a few exceptions, such as counter-terrorism).

High risk:
AI systems that have a significant impact on safety or fundamental rights are subject to strict requirements, e.g. AI in personnel recruitment (automated applicant selection) or critical infrastructure (such as AI-controlled power grids). The requirements: transparency, risk assessment, human oversight, and technical documentation.

Limited risk:
AI systems with moderate risk are subject to certain transparency obligations, but less stringent requirements, e.g. chatbots must inform users that they are interacting with an AI.

Minimal risk:
These AI applications are largely unregulated, e.g. recommendation systems for music or movies.