  • Author: Maarten van Sadelhoff
  • Publish date: June 10, 2025
Use the AI Act as a springboard towards sustainable, responsible AI

 

In professional settings, AI is widely used, yet the rules of the game remain largely unknown. This is striking, especially considering that the new European AI Act has been in force since 1 August 2024.

The regulation introduces stricter requirements for safety, transparency, and responsible AI use. Those who invest now in knowledge and governance are building both trust and innovation capacity. The key is to make AI less abstract and to actively involve employees in its implementation.

The AI Act: A Framework for Trustworthy AI

The AI Act aims to make the use of AI in Europe safer and more reliable. The regulation is being phased in across EU member states. Since 2 February 2025, AI systems deemed to pose an unacceptable risk have been banned. These include systems that manipulate vulnerable groups (such as children) or rank citizens covertly based on social scoring—practices considered a direct threat to fundamental rights. 

 

In addition, organisations using AI systems must ensure that their employees are sufficiently AI-literate. From August 2025, rules for general-purpose AI models will come into effect, and EU member states must appoint supervisory authorities. The full AI Act will apply from 2 August 2026. 


High-Risk Applications

A core feature of the legislation is its risk-based classification of AI systems. In the financial sector, for example, applications such as credit scoring and customer profiling fall under the high-risk category. These uses are subject to stringent requirements. Organisations must demonstrate how AI supports decision-making, manage data responsibly, and establish clear governance structures. Employees must also be equipped with the knowledge to identify and mitigate AI-related risks.

Limited Awareness of the AI Act

Such AI literacy is far from a luxury. Research by Conclusion shows that awareness of the AI Act is still lacking. A striking 70% of finance professionals are barely or not at all familiar with the legislation. Only 5% report being fully informed, and just 6% actively follow developments. Still, there is a silver lining: despite limited knowledge, concrete steps are being taken.

 

Key actions already underway: 

  • 40% are developing an AI policy 
  • 33% are collaborating with legal experts to ensure compliance 
  • 29% are organising training to improve AI literacy among staff 

Fines and Reputational Damage

Some organisations are on the right track. Those that remain passive, however, should be aware that the AI Act is not optional. The risks of non-compliance are significant:

 

  • Fines and sanctions: As with the GDPR, penalties for non-compliance can be substantial, reaching up to €35 million or 7% of global annual turnover for the most serious violations
  • Reputational damage: An AI system that produces discriminatory or unreliable outcomes can severely harm a company’s image 
  • Loss of customer trust: Consumers and businesses increasingly expect responsible data and technology practices. Falling short can lead to customer attrition 
  • Operational risks: Poorly trained AI can result in flawed risk assessments or missed fraud signals, leading to direct financial losses 

Inaction is not an option. 

 


A Practical Approach

For many organisations, working responsibly with AI still feels abstract. A good starting point is to develop a clear AI policy that defines risks, governance, and responsibilities. Link this to training so employees understand the applicable requirements and how to apply AI responsibly. Ensure that security and data quality are integral to AI projects. This requires cross-functional collaboration: IT, compliance, legal, and business units must be involved from the outset. Only then can a solid foundation be laid for safe and effective AI deployment.

 

The AI Act as a Catalyst for Sustainable AI

The AI Act is not a brake; it is a springboard. Organisations that integrate the rules intelligently are laying the groundwork for innovation. Use the AI Act as an opportunity to raise awareness, strengthen governance, and prepare your teams for the future.

 


Research report

AI in Finance

AI offers clear opportunities to strengthen the financial sector: processes can be made faster, more efficient, and more customer-oriented. Finance professionals recognise the potential of AI but are balancing opportunity and hesitation: they fear the risks of AI as well as missing out on innovation opportunities. Our research highlights that successful implementation of AI in finance requires more than just technological innovation. Curious to learn more?

Read more about this subject

  • APG takes next step with generative AI
  • The role of technology in complying with the AI Act
  • LLMs are the new fire. JEPA has the potential of new electricity

More information?
Get in touch with us


Adil Bohoudi

AI strategist