In professional settings, AI is widely used, yet the rules of the game remain largely unknown. This is striking, especially considering that the new European AI Act has been in force since 1 August 2024.
The regulation introduces stricter requirements for safety, transparency, and responsible AI use. Those who invest now in knowledge and governance are building both trust and innovation capacity. The key is to make AI less abstract and to actively involve employees in its implementation.
The AI Act aims to make the use of AI in Europe safer and more reliable. The regulation is being phased in across EU member states. Since 2 February 2025, AI systems deemed to pose an unacceptable risk have been banned. These include systems that manipulate vulnerable groups (such as children) or rank citizens covertly based on social scoring—practices considered a direct threat to fundamental rights.
In addition, organisations using AI systems must ensure that their employees are sufficiently AI-literate. From August 2025, rules for general-purpose AI models will come into effect, and EU member states must appoint supervisory authorities. The full AI Act will apply from 2 August 2026.
A core feature of the legislation is its risk-based classification of AI systems. In the financial sector, for example, applications such as credit scoring and customer profiling fall under the high-risk category. These uses are subject to stringent requirements. Organisations must demonstrate how AI supports decision-making, manage data responsibly, and establish clear governance structures. Employees must also be equipped with the knowledge to identify and mitigate AI-related risks.
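To make the risk-based classification more concrete, the tiers can be sketched as a simple lookup. This is an illustrative, simplified example only, not a legal classification tool; the use-case-to-tier mapping below is a hypothetical assumption based on commonly cited examples, and any real classification must follow the regulation's own criteria.

```python
# Illustrative sketch of the AI Act's risk-based tiers.
# The mapping below is a simplified, hypothetical example,
# NOT a legal classification.

RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Example mapping of use cases to tiers (assumed for illustration).
EXAMPLE_CLASSIFICATION = {
    "social_scoring": "unacceptable",  # banned since 2 February 2025
    "credit_scoring": "high",          # stringent requirements apply
    "chatbot": "limited",              # transparency obligations
    "spam_filter": "minimal",          # no specific obligations
}

def risk_tier(use_case: str) -> str:
    """Return the illustrative risk tier for a use case."""
    return EXAMPLE_CLASSIFICATION.get(use_case, "unclassified")
```

In practice, the point of such an inventory is organisational rather than technical: it forces teams to list every AI use case and assign each one an owner and a tier before requirements are mapped to it.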
This is far from a luxury. Research by Conclusion shows that awareness of the AI Act is still lacking. A striking 70% of finance professionals are barely or not at all familiar with the legislation. Only 5% report being fully informed, and just 6% actively follow developments. Still, there is a silver lining: despite limited knowledge, concrete steps are being taken.
Fines and reputational damage
Some organisations are on the right track. Those that remain passive, however, should be aware that the AI Act is not optional. The risks of non-compliance are significant: fines can reach €35 million or 7% of global annual turnover for the most serious violations, and the reputational damage from a publicised breach can be just as costly. Inaction is not an option.
For many organisations, working responsibly with AI still feels abstract. A good starting point is to develop a clear AI policy that defines risks, governance, and responsibilities. Link this to training so employees understand the applicable requirements and how to apply AI responsibly. Ensure that security and data quality are integral to AI projects. This requires cross-functional collaboration: IT, compliance, legal, and business units must be involved from the outset. Only then can a solid foundation be laid for safe and effective AI deployment.
The AI Act is not a brake; it's a springboard. Organisations that integrate the rules intelligently lay the groundwork for innovation. Use the AI Act as an opportunity to raise awareness, strengthen governance, and prepare your teams for the future.
Research report
AI in Finance
AI offers clear opportunities to strengthen the financial sector: processes can become faster, more efficient, and more customer-oriented. Finance professionals recognise AI's potential but are caught between opportunity and hesitation: they fear the risks of AI as much as they fear missing out on innovation. Our research shows that successful implementation of AI in finance requires more than technological innovation alone.