Generating returns with Generative AI?

Balancing risks and opportunities for EU-based firms under the EU AI Act

How Artificial Intelligence has changed our world

The advent of Generative AI is poised to revolutionise industries from finance to healthcare, education, arts, and culture. Over the past few years, the use of big data and AI has increased significantly, including in the financial industry. In response, Germany’s financial regulator, BaFin, issued supervisory principles in June 2021 on utilising algorithms in decision-making processes in financial services firms. The intention was to guide the industry towards a responsible use of big data and AI and to give BaFin-supervised entities orientation on controlling the associated risks. Furthermore, BaFin was looking to engage its stakeholders, most importantly the EU Commission, which had committed to drafting guidelines by 2024, together with the European Supervisory Authorities (ESMA, EBA and EIOPA), on supervisory expectations for the financial industry’s use of big data and AI. Additionally, the EU had already proposed its first EU-wide framework in April 2021, aiming to establish clear and favourable conditions for the development and use of AI technology.

Significant developments have taken place since then. Most importantly, that early proposal was the blueprint for the EU AI Act, which the EU Parliament passed on March 13, 2024, a major milestone on the journey of putting AI into a solid regulatory context. The AI Act’s intention is to create legal certainty whilst addressing AI risks following a risk-based approach: the higher the risks, the more obligations need to be fulfilled. But we will come to that in more detail. So, first of all, what happened in the meantime to make this major piece of legislation necessary? Lots and lots of AI research and development sharpened the minds of the machines, culminating in a big bang in 2023, which some have proclaimed the “Year of AI”. 2023 was when ChatGPT became the digital buddy who rhymes, thinks, summarises, creates, reformulates, codes and proofreads (not only for typos but also for grammar and writing style, even adapting to that of famous speakers), and is able to do many other things in seconds, freeing hours and hours of our own thinking capacity.

The barriers to using AI tools have dropped significantly, making them broadly accessible to literally anyone with internet access. What we might call the democratisation of access to AI may be another central element of the dawn of a new industrial era. It will be disruptive to all economic sectors. This broad entry of Generative AI into the market comes with many opportunities while, at the same time, bringing along several risks that we will have to deal with on our journey to the unvisited places Generative AI may take us. Notably, the World Economic Forum’s Global Risks Report underscores the significance of AI-related risks: adverse outcomes of AI technologies rank among the top 10 long-term risks for every surveyed stakeholder group, from the public and private sectors to civil society, academia, and international organisations.

Understanding Generative AI and its implications

Generative AI models can perform a wide range of tasks that traditionally require creativity and a human understanding of natural language. They learn patterns from existing data during training and are capable of generating new content such as text, images, or music that follows those patterns. Due to their versatility and high-quality results, they offer opportunities for digitalisation that may add a new element of quality to processes and working steps that were previously purely human. This also applies to asset managers and other financial services providers who, like many other firms, are looking to integrate Generative AI into their processes or simply “outsource” some of the unpopular working steps to the machines, allowing them to focus on the things that matter most. Think of analysing customer data, transaction history or other information to help identify potential red flags for AML purposes as part of customer due diligence, as sketched below.
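To make this use case a little more concrete, here is a minimal, hypothetical sketch of such a screening step in Python. Everything in it (the Transaction fields, the thresholds, the placeholder country codes) is an illustrative assumption rather than a reference to any real AML system; in a GenAI-assisted workflow, a language model would typically come in afterwards, for instance to draft the case narrative for a human analyst.

```python
from dataclasses import dataclass

# Hypothetical transaction record; the field names are illustrative assumptions.
@dataclass
class Transaction:
    customer_id: str
    amount_eur: float
    country: str          # counterparty jurisdiction (placeholder ISO code)
    is_cash: bool

# Illustrative thresholds only; real AML rules follow a firm's own risk policy.
HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder codes, not real jurisdictions
CASH_THRESHOLD_EUR = 10_000.0

def red_flags(tx: Transaction) -> list[str]:
    """Rule-based pre-screen; a generative model could then draft the
    case narrative for a human analyst to review."""
    flags = []
    if tx.is_cash and tx.amount_eur >= CASH_THRESHOLD_EUR:
        flags.append("large cash transaction")
    if tx.country in HIGH_RISK_COUNTRIES:
        flags.append("high-risk jurisdiction")
    return flags

print(red_flags(Transaction("C-001", 12_500.0, "XX", True)))
# -> ['large cash transaction', 'high-risk jurisdiction']
```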

There is good reason for firms to consider making use of Generative AI. The latest research suggests that it could add between USD 2.6 trillion and USD 4.4 trillion in value to the global economy per year. These figures sound high and abstract, but to give an idea of the sheer size: the UK’s entire 2021 GDP was around USD 3.1 trillion. As much as 70% of the work activities that make up an average employee’s working day could be absorbed by Generative AI. The ability to process unstructured data or, to use a less abstract term, to understand natural language is the groundbreaking aspect here. Firms, workers, and entire industries will have to adapt to this in one way or another. The challenge is that the Generative AI era is only just beginning. Instead of seeing the big picture, we are only getting glimpses of what is happening and of the position it will put us in as individuals, as organisations, and as a society. Job descriptions and role profiles will have to transition, and firms will have to start positioning themselves, even though this requires striking the right balance between strategic foresight and crystal-ball gazing.

The EU AI Act: A framework for responsible AI development

Coming back to the law. The EU AI Act is the first of its kind worldwide, and the EU can certainly consider itself an early mover on an industry topic that will significantly shape our economy, our society, our democracy, our world as we have known it so far. Critics would describe the Act, like other pieces of EU legislation, as overly prescriptive and thereby potentially stifling innovation whilst building on unclear definitions, though, to be fair, it is tough to describe a moving target. As with any regulation, there might be unintended consequences leading to companies, from big industry players to start-ups, exiting the stage. On the other hand, it is noteworthy that the Act represents the first comprehensive piece of AI legislation and sets the standard for one of the world’s largest economic areas. Users of AI systems will benefit from greater transparency of AI decision-making processes as well as enhanced accountability, since developers and users of AI systems are held responsible for the consequences of their actions, reducing potential harm. Legal certainty arising from a clear regulatory framework may therefore boost innovation by encouraging further development of AI systems within clear boundaries. Start-ups and SMEs from the AI sector will be able to develop and train AI models prior to releasing them to the wider public. This so-called “sandbox” approach requires national authorities to provide testing environments that are able to simulate real-world conditions.

The Act itself addresses providers (those developing AI systems), users (also called deployers), importers, distributors and manufacturers from EU-based businesses active in AI technology. It is important to note that the AI Act’s extraterritorial scope extends to non-EU firms that develop or deploy AI systems or models, which will have to comply with the Act’s provisions when entering the EU market. The central concept of the Act is to differentiate AI tools according to the inherent risks they pose to users and society at large. The Act distinguishes between four categories:

Unacceptable Risk (Art. 5)
Regulation: AI systems that may pose threats to people will be banned.
Example: Untargeted facial recognition through CCTV cameras or emotion recognition systems in the workplace or in education; social scoring systems that classify people based on behaviour, socio-economic status or personal characteristics.

High Risk (Art. 6)
Regulation: Conformity assessment, i.e. providers will have to comply with a range of obligations including risk assessments, governance, documentation and public registration. Deployers have a more limited set of obligations, such as technical and organisational measures ensuring that providers’ restrictions are adhered to.
Example: Transportation systems, safety, employment, access to education, border control, prosecution and justice systems, medical care such as robo-assisted surgery.

Limited Risk (Art. 52)
Regulation: Permitted but subject to transparency obligations.
Example: Chatbots like ChatGPT, Gemini or Grok, as well as deepfakes.

Minimal Risk (Art. 69)
Regulation: Permitted and therefore out of scope of the Act.
Example: AI-enabled video games, email spam filters.
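For firms mapping their own tools to these tiers, the risk-based logic can be captured in a few lines. The following Python sketch simply mirrors the table above; the enum and the example mapping are our own illustrative labels, not terminology from the Act itself.

```python
from enum import Enum

class AIRiskTier(Enum):
    """The four tiers of the Act's risk-based approach, as per the table above."""
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "conformity assessment and related obligations (Art. 6)"
    LIMITED = "permitted, subject to transparency obligations (Art. 52)"
    MINIMAL = "permitted, out of scope of the Act (Art. 69)"

# Illustrative mapping of example systems to tiers, mirroring the table above.
EXAMPLES = {
    "social scoring system": AIRiskTier.UNACCEPTABLE,
    "robo-assisted surgery": AIRiskTier.HIGH,
    "customer-facing chatbot": AIRiskTier.LIMITED,
    "email spam filter": AIRiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```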

EU AI Act: What’s next and how firms should prepare

The text of the AI Act that finally passed both the EU Parliament and the Council is expected to be published in the Official Journal in the second half of 2024 and will be fully applicable 24 months after entry into force, i.e. not before around mid-2026. Financial services firms and asset managers will most likely be affected by obligations arising from AI tools in the Limited Risk category. To prepare for the EU AI Act, firms should take proactive steps and start establishing governance and monitoring measures to ensure compliance with the Act, by being transparent about the use of AI in decision-making processes and disclosing AI-generated content. This will impact compliance and risk management strategies, and firms need to bear in mind that finding the right wording to label AI-generated content is challenging enough. Firms would be well advised to start compiling an inventory of the AI systems and models they use, classify those systems by risk level and role, raise awareness, assign responsibilities, and establish ongoing governance processes. Even if AI is not currently used in some firms, this will most likely change in the future. It is therefore essential to start preparing now.
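As a starting point for such an inventory, the following sketch shows what one entry could look like. The schema is purely an illustrative assumption (the Act does not prescribe one), and the tool name, vendor and review cadence are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a firm's AI-system inventory; the schema is an
    illustrative suggestion, not one prescribed by the Act."""
    name: str
    vendor: str
    role: str                # e.g. "provider" or "deployer" under the Act
    risk_tier: str           # e.g. "limited" for most GenAI chat tools
    owner: str               # accountable person or function
    use_case: str
    last_review: date
    mitigations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="DraftAssist",        # hypothetical tool name
        vendor="ExampleVendor",    # hypothetical vendor
        role="deployer",
        risk_tier="limited",
        owner="Compliance",
        use_case="drafting client communications",
        last_review=date(2024, 6, 1),
        mitigations=["AI-content labelling", "human review before dispatch"],
    ),
]

# Simple governance check: flag records not reviewed in the last 12 months.
overdue = [r.name for r in inventory if (date.today() - r.last_review).days > 365]
print("Reviews overdue:", overdue)
```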

As concluding remarks, we can say that we are at a crucial point in time where AI technology is at the forefront of delivering a wide array of economic and societal benefits to sectors as diverse as banking and asset management, legal affairs, mobility, agriculture and the public sector. At the same time, AI technologies may have implications for rights protected under the EU Charter of Fundamental Rights. The ‘human-centric’ approach that policymakers have emphasised throughout the legislative process leading to the AI Act clears up legal uncertainties by distinguishing the risks of AI use cases while safeguarding fundamental rights: non-discrimination, freedom of expression, human dignity, and personal data protection and privacy. Let us be optimistic that this regulatory framework will be flexible enough to help us get the most out of AI technology while being stringent1 enough to mitigate the risks it poses to our society.

There is a lot more to say and argue about AI (such as its considerable energy consumption), so this article will be followed by others covering additional aspects of this complex topic.

DISCLAIMER: Please note that Generative AI tools may have been used to review parts of this text and enhance the writing style 😉


Footnotes

1 The EU AI Act foresees a number of potential penalties:
Non-compliance with the prohibition of AI practices: up to 35 million EUR or 7% of worldwide annual turnover (for undertakings);
Non-compliance of AI systems with any of the provisions related to operators or notified bodies: up to 15 million EUR or 3% of worldwide annual turnover (for undertakings);
Supply of incorrect, incomplete or misleading information to notified bodies or national authorities in response to a request: up to 7.5 million EUR or 1% of worldwide annual turnover (for undertakings).
