Understanding the EU AI Act: Key Rules, Risks, and Opportunities
- aminetahour8
- Oct 13
Updated: Oct 17

Artificial Intelligence has existed as a field of research since 1950, when Alan Turing first posed the question of whether machines could think (Lawrence Livermore National Laboratory, n.d.). Since late 2022, public interest in artificial intelligence has surged. The number of AI-related jobs, patents, and innovations continues to climb, and Europe today counts more professionals in AI-related roles than even the United States (Leitner et al., 2024). At the same time, the potential economic impact of AI is staggering. Estimates suggest that generative AI alone could add between USD 2.6 trillion and 4.4 trillion to global output each year, with banking expected to be among the sectors that benefit most (Kamalnath et al., 2023). Yet, as with any disruptive technology, rapid adoption brings not only opportunities but also risks - including those that may threaten financial stability.
Recognizing this, international standard-setters and supervisory authorities have intensified their focus on AI’s implications for the financial system. The European Union has taken a leading role with the AI Act, designed to build public trust: most AI applications pose little or no risk, but some can create significant risks that call for careful governance (European Commission, 2025). In this article, we explore what the AI Act means for organisations: its core requirements, the operational difficulties of implementation, its economic consequences, and how Morrison Finance can help organisations prepare for this new regulatory landscape.
The AI Act at a Glance

The AI Act is the first comprehensive regulatory framework for artificial intelligence, designed to mitigate AI’s risks while establishing Europe as a global leader in the field and building trust among Europeans in what AI provides (European Commission, 2025). To that end, the European Commission has adopted a risk-based approach, in which the intensity of legal intervention is calibrated to the level of risk an AI system poses to users. Drawing on Madiega (2024), the four categories of risk defined under the AI Act are as follows (a compact, purely illustrative sketch of this tiering follows the list):
Unacceptable risk: These AI systems are banned because they threaten people’s rights and dignity. They include technologies that manipulate behaviour, exploit vulnerabilities such as age or poverty, or classify people by sensitive traits like race, religion, or sexual orientation. They also cover untargeted scraping of facial images to build facial-recognition databases, emotion recognition in workplaces or schools (except for safety or medical use), predictive policing based only on profiling, and real-time biometric surveillance in public spaces, except in rare cases like finding missing persons or preventing serious threats.
High risk: These are AI systems with significant implications for health, safety, or fundamental rights. For example, AI used in medical devices, recruitment, or education, where outcomes can directly affect individuals’ lives and opportunities. To address these risks, providers must comply with strict obligations such as conformity assessments, data-quality controls, cybersecurity safeguards, and, in some cases, fundamental rights impact assessments. The principle is clear: AI systems that influence critical decisions must demonstrate reliability, fairness, and accountability before entering the market and remain subject to continuous monitoring thereafter.
Transparency risk: This refers to AI systems that may mislead or impersonate, even if they are not classified as high-risk. To safeguard trust, users must always be informed when interacting with chatbots, when content has been artificially generated or altered (such as deepfakes), and when AI is deployed in the workplace. Providers producing large volumes of synthetic content are required to embed reliable markers, like watermarks, to ensure outputs can be detected as AI-generated.
Minimal risk: The AI Act imposes no new rules on AI systems considered minimal or no risk, which make up most current applications in the EU. Examples include everyday tools like AI-powered video games and spam filters.
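For readers who think in code, the tiering above can be summarised as a simple lookup structure. The sketch below is purely illustrative: the tier names, example systems, and one-line treatments paraphrase the list above and are not legal terminology or official tooling.

```python
# Purely illustrative: the AI Act's four risk tiers as a lookup structure.
# Tier names, examples, and one-line treatments paraphrase the article's
# summary; they are not legal text or official tooling.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "real-time biometric surveillance in public"],
        "treatment": "banned outright, with narrow exceptions",
    },
    "high": {
        "examples": ["medical devices", "recruitment", "education"],
        "treatment": "conformity assessment, data-quality and cybersecurity controls, ongoing monitoring",
    },
    "transparency": {
        "examples": ["chatbots", "deepfakes", "synthetic media"],
        "treatment": "disclosure duties and machine-readable markers",
    },
    "minimal": {
        "examples": ["spam filters", "AI in video games"],
        "treatment": "no new obligations",
    },
}

def treatment_for(tier: str) -> str:
    """Look up the regulatory treatment sketched for a given tier."""
    return RISK_TIERS[tier]["treatment"]

print(treatment_for("high"))
```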

In addition, the AI Act lays down rules for general-purpose AI (GPAI) models. These are AI systems trained on massive datasets that can handle many different tasks and be built into a broad spectrum of applications (Madiega, 2024):
General-purpose AI models must safeguard transparency by keeping technical documentation up to date and sharing information with the businesses and institutions that build on their systems. Providers also need to put in place a policy to respect EU copyright law and publish a summary of the training data used to build their models. Open-source models are exempt from some of these requirements, as they are generally seen as supporting research and innovation.
Some general-purpose AI models are so powerful that they may create systemic risks for society and the economy. When the cumulative compute used to train a model exceeds 10^25 floating-point operations (FLOPs), it is presumed to fall into this category (a rough sense of what that threshold means in practice is sketched after this list). In such cases, providers must notify the European Commission, carry out ongoing risk assessments, ensure strong cybersecurity protections, and report or correct any serious incidents that occur.
GPAI providers can show compliance by following codes of practice approved by the European Commission or other EU-wide rules. Using these, or recognised harmonised standards, gives them a presumption of conformity. If providers of systemic-risk models do not follow an approved code, they must prove they have equivalent safeguards.
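To give a feel for the compute presumption, the following back-of-the-envelope sketch estimates training compute with the common "6 × parameters × training tokens" heuristic from the scaling-laws literature. Only the 10^25 FLOP threshold comes from the Act; the heuristic and the example model size are our own assumptions for illustration.

```python
# Back-of-the-envelope only: the 1e25 FLOP presumption threshold is in
# the AI Act; the 6 * parameters * tokens estimate is a common heuristic
# from the scaling-laws literature, and the example model is invented.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold under the Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the 6 * N * D heuristic."""
    return 6 * n_params * n_tokens

# Example: a hypothetical 70-billion-parameter model, 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"estimated compute: {flops:.1e} FLOPs")                        # ~6.3e+24
print("presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```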
Timeline of the AI Act

The AI Act entered into force on 1 August 2024, but its provisions will apply gradually over the coming years (Kosinski & Scapicchio, n.d.). Key dates include:
2 February 2025 - bans on prohibited AI practices begin to apply.
2 August 2025 - obligations for general-purpose AI take effect.
2 August 2026 - rules governing high-risk AI systems come into force.
2 August 2027 - rules apply to AI systems that are built into products already covered by existing EU safety laws, such as cars, medical devices, or machinery.

Impact of the AI Act on Stakeholders
Penalties for Non-Compliance with the Act

As explained by IBM’s Kosinski and Scapicchio (n.d.), non-compliance with the AI Act can be costly. The regulation sets out several tiers of financial penalties, depending on the severity of the breach (a short sketch after this list makes the arithmetic concrete):
Severe violations, such as using prohibited AI practices, could result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.
Other breaches, including failures to meet the requirements for high-risk AI systems, can lead to fines of up to €15 million or 3% of global annual turnover.
Providing false or misleading information to authorities can incur fines of up to €7.5 million or 1% of global annual turnover.
For start-ups and SMEs, the Act introduces lower fine levels to account for their smaller scale.
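To make the "whichever is higher" mechanics concrete, here is a minimal sketch. The caps come from the tiers listed above; the function and tier labels are illustrative rather than official, and note that for SMEs the Act applies the lower of the two figures instead.

```python
# Illustrative only: tier amounts are the caps summarised above; the
# function and tier labels are not official tooling. For SMEs the Act
# applies the LOWER of the two figures rather than the higher.

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # banned AI uses
    "high_risk_breach": (15_000_000, 0.03),       # high-risk obligations
    "misleading_information": (7_500_000, 0.01),  # false info to authorities
}

def max_fine_eur(global_turnover_eur: float, tier: str) -> float:
    """Maximum fine: the higher of a fixed cap or a share of turnover."""
    fixed_cap, pct = FINE_TIERS[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# Example: EUR 2 billion turnover, prohibited practice:
# max(35M, 7% of 2B) = EUR 140 million.
print(max_fine_eur(2_000_000_000, "prohibited_practice"))  # 140000000.0
```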
Different Perspectives on the Implications of the Act

This analysis is mainly grounded in the findings of Cabrera et al. (2025). Some experts believe that although the AI Act promotes safety and ethics, its strict and complex rules may limit innovation and make it harder for startups and small businesses to compete. Marianne Tordeux Bitker, representing France Digitale, shares this concern, warning that the AI Act’s heavy obligations could create new regulatory barriers and give an advantage to competitors from the United States and China. That said, the main concern is not with the law itself but with how its risk-based approach might limit innovation and slow the development of new AI systems.
On the other hand, some see the AI Act as an opportunity rather than a constraint. Legal and industry voices, such as Tainá Aguiar Junquilho and DigitalEurope, argue that by creating legal certainty and clear rules, the regulation can, in fact, stimulate innovation and strengthen the European AI market. In other words, “despite the uncertainties, there is hope that the regulation will bring the necessary balance” (Cabrera et al., 2025, p. 234). EY’s managing partner for Europe, the Middle East, India and Africa, Julie Linn Teigland, stated in this regard: “It is vital that the EU harnesses the dynamism of the private sector, which will be the driving force behind the future of AI. Getting this right will be important for making Europe more competitive and attractive to investors” (Euronews, 2024).
In short, while opinions on the AI Act remain divided, many now agree that the priority should shift from debate to ensuring its effective implementation and enforcement across the EU.
Long-term Value & Opportunities Under the Act

According to the European Commission’s 2021 impact assessment, certifying a single AI system through the EU-type examination procedure was estimated to cost €16,800 - €23,000, or around 10 - 14% of total development costs.
Implementing a Quality Management System (QMS) to ensure compliance was projected to require €193,000 - €330,000 upfront, plus roughly €71,400 per year in maintenance. These costs can often be distributed across several AI products, thus reducing the long-term financial burden (Renda et al., 2021). Although these figures date back to the early stages of the legislative process, they remain useful in illustrating the expected scale of compliance costs envisioned under the AI Act.
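As a purely arithmetic illustration of how those one-off costs amortise, the sketch below spreads a shared QMS across a hypothetical portfolio of five high-risk products over five years. The cost figures are the impact-assessment estimates quoted above; the portfolio size and time horizon are our own assumptions.

```python
# Arithmetic illustration only: cost figures are the 2021 impact-
# assessment estimates quoted above; the five-product portfolio and
# five-year horizon are invented for the example.

QMS_UPFRONT_EUR = (193_000, 330_000)     # one-off QMS setup, low/high
QMS_ANNUAL_EUR = 71_400                  # yearly QMS maintenance
CERT_PER_SYSTEM_EUR = (16_800, 23_000)   # EU-type examination, low/high

def per_product_cost_eur(n_products: int, years: int) -> tuple[float, float]:
    """Low/high compliance cost per product, sharing one QMS."""
    shared_low = QMS_UPFRONT_EUR[0] + QMS_ANNUAL_EUR * years
    shared_high = QMS_UPFRONT_EUR[1] + QMS_ANNUAL_EUR * years
    return (shared_low / n_products + CERT_PER_SYSTEM_EUR[0],
            shared_high / n_products + CERT_PER_SYSTEM_EUR[1])

low, high = per_product_cost_eur(n_products=5, years=5)
print(f"EUR {low:,.0f} - {high:,.0f} per product")  # EUR 126,800 - 160,400
```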
While upfront compliance does incur costs, it functions as a prudent investment: the Act establishes standardised validation, boosts operational efficiency, and builds trust with investors and customers, ultimately yielding a competitive advantage (Informatica, 2025).
Beyond Europe, global trends in investment and public attitudes toward AI regulation offer useful context for comparison. In 2023, AI venture capital investment reached only about $8 billion in the EU, compared with $68 billion in the United States and $15 billion in China, according to Csernatoni (2025). This contrast shows how different governance approaches lead to different outcomes. In Europe, the focus on regulation brings higher upfront compliance costs but aims to build long-term trust and market stability. By contrast, the United States has no comparable federal AI law, which keeps short-term costs lower for companies but creates greater uncertainty in the long run.
What’s more, a global survey by KPMG (2025) reports that around 70% of people worldwide state that AI should be regulated, whereas only 17% think regulation is unnecessary and 13% remain undecided. Support for AI regulation is strong across countries, from 57% in the UAE to 86% in Finland, which shows broad agreement that oversight is needed to prevent societal risks.
Conclusion

Throughout this insight, we explored the EU AI Act’s key principles, risks, and implications - from its legal framework and enforcement timeline to its economic impact, stakeholder perspectives, and long-term opportunities. In that regard, Morrison Finance assists companies in embracing AI and automation in their financial processes in a manner that facilitates efficiency, accountability, and trust, which are values central to the AI Act’s mission. Ultimately, the real implications of the AI Act will only become clear once its provisions take full effect across industries in the years ahead.
References
Cabrera, B. M., Luiz, L. E., & Teixeira, J. P. (2025). The Artificial Intelligence Act: Insights regarding its application and implications. Procedia Computer Science, 256, 230–237. https://doi.org/10.1016/j.procs.2025.02.116
Csernatoni, R. (2025, May 20). The EU’s AI power play: Between deregulation and innovation. Carnegie Endowment for International Peace. https://carnegieeurope.eu/2025/05/20/eu-s-ai-power-play-between-deregulation-and-innovation
Euronews. (2024, March 16). EU AI Act reaction: Tech experts say the world’s first AI law is ‘historic but bittersweet’. https://www.euronews.com/next/2024/03/16/eu-ai-act-reaction-tech-experts-say-the-worlds-first-ai-law-is-historic-but-bittersweet
European Commission. (2025, August 1). AI Act | Shaping Europe’s digital future [Web page]. Digital Strategy. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Informatica. (2025, July 30). Building trusted data and AI governance in a regulated world. https://www.informatica.com/blogs/building-trusted-data-and-ai-governance-in-a-regulated-world.html
Kamalnath, V., Lerner, L., Moon, J., Sari, G., Sohoni, V., & Zhang, S. (2023, December 5). Capturing the full value of generative AI in banking. McKinsey & Company. https://www.mckinsey.com/industries/financial-services/our-insights/capturing-the-full-value-of-generative-ai-in-banking
Kosinski, M., & Scapicchio, M. (n.d.). What is the EU AI Act? IBM Think. https://www.ibm.com/think/topics/eu-ai-act
KPMG. (2025). Trust and attitudes toward artificial intelligence: A global study. https://kpmg.com/kpmg-us/content/dam/kpmg/pdf/2025/trust-attitudes-artificial-intelligence-global-report.pdf
Lawrence Livermore National Laboratory. (n.d.). The birth of Artificial Intelligence (AI) research. Science & Technology. https://st.llnl.gov/news/look-back/birth-artificial-intelligence-ai-research
Leitner, G., Singh, J., van der Kraaij, A., & Zsámboki, B. (2024, May 15). The rise of artificial intelligence: benefits and risks for financial stability. European Central Bank. https://www.ecb.europa.eu/press/financial-stability-publications/fsr/special/html/ecb.fsrart202405_02~58c3ce5246.en.html
Madiega, T. (2024). Artificial intelligence act (EU Legislation in Progress briefing). European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
Renda, A., Arroyo, J., Fanni, R., Laurer, M., Sipiczki, A., Yeung, T., Maridis, G., Fernandes, M., Endrodi, G., Milio, S., Devenyi, V., Georgiev, S., & de Pierrefeu, G. (2021). Study to support an impact assessment of regulatory requirements for artificial intelligence in Europe (Final report D5). European Commission, DG CONNECT. https://artificialintelligenceact.eu/wp-content/uploads/2022/06/AIA-COM-Impact-Assessment-3-21-April.pdf


