How the EU AI Act Will Transform Compliance in the Financial Sector: What You Need to Know

The Impact of the AI Act on the Financial Sector

The recently approved EU AI Act, a comprehensive piece of horizontal legislation, will soon come into force and will significantly affect various sectors, including the financial industry. The Act addresses AI tools across sectors generally but pays specific attention to financial use cases, such as credit-scoring models and risk assessment tools in the insurance sector.

High-Risk AI Systems in Finance

AI systems evaluating credit scores or creditworthiness of individuals are expected to be classified as high-risk due to their significant influence on access to financial resources. Similarly, AI systems used for life and health insurance risk assessment could also be deemed high-risk because of potential serious consequences, such as financial exclusion and discrimination.

Interestingly, the AI Act suggests that AI systems deployed for detecting fraud in financial services should not be considered high-risk. To avoid overlap with existing financial regulations, the Act references these regulations for compliance with certain high-risk AI system requirements. Financial institutions are presumed to meet some of these requirements—like AI documentation, risk management processes, monitoring obligations, and log maintenance—by complying with existing stringent internal governance and risk management regulations.
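To make the log-maintenance and documentation obligations above more concrete, here is a minimal, purely illustrative sketch of how a financial institution might wrap a credit-scoring model so that every prediction is recorded with its inputs, output, model version, and timestamp. This is not an official AI Act specification; all names (`CreditScoringModel`, `score_with_audit_log`) and the toy scoring rule are hypothetical.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative only: audit-logging wrapper around a hypothetical
# credit-scoring model, sketching the kind of record-keeping
# (inputs, output, model version, timestamp) that high-risk AI
# systems are expected to maintain.

logging.basicConfig(level=logging.INFO)
audit_logger = logging.getLogger("ai_act_audit")


@dataclass
class ScoreRecord:
    timestamp: str
    model_version: str
    applicant_features: dict
    score: float


class CreditScoringModel:
    """Stand-in for a real model; scores applicants on a 0-1 scale."""
    version = "1.0.0"

    def predict(self, features: dict) -> float:
        # Toy scoring rule, for illustration only.
        income_factor = min(features.get("annual_income", 0) / 100_000, 1.0)
        debt_factor = min(features.get("debt_ratio", 1.0), 1.0)
        return round(0.7 * income_factor + 0.3 * (1 - debt_factor), 4)


def score_with_audit_log(model: CreditScoringModel, features: dict) -> ScoreRecord:
    """Score an applicant and emit a structured, auditable log record."""
    score = model.predict(features)
    record = ScoreRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model.version,
        applicant_features=features,
        score=score,
    )
    audit_logger.info(json.dumps(asdict(record)))
    return record


record = score_with_audit_log(
    CreditScoringModel(),
    {"annual_income": 60_000, "debt_ratio": 0.4},
)
```

In practice the log sink would be a tamper-evident store rather than standard logging, but the structure of the record, which model version produced which decision from which inputs and when, is the part that matters for the presumption of conformity discussed above.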

Compliance and Regulatory Oversight

However, the AI Act remains the primary legislative source that financial institutions must follow to ensure compliance when using high-risk AI systems, such as those for credit scoring or specific insurance practices, especially when providing services to individuals or retail clients. The list of high-risk AI systems is dynamic and will be updated continuously, making ongoing monitoring essential.

Given the ongoing debate on regulating general-purpose AI systems, financial institutions should closely monitor developments in this area. These models could revolutionize content generation and other applications within the fast-paced finance sector.

With the upcoming Digital Operational Resilience Act (DORA), financial institutions need to consider how its requirements interact with those from the AI Act. DORA focuses on ICT risk governance and management, including third-party risk management. As financial institutions increasingly rely on third-party ICT services for AI solutions due to limited internal capabilities, security challenges and governance issues related to internal controls, data management, and data protection will become more prominent.

Supervision and Implementation

In terms of supervision, the AI Act tasks financial supervisory authorities with overseeing compliance with its requirements, including ex-post market surveillance. The existing financial supervisory bodies in each Member State will integrate AI Act oversight into their current practices. The European Central Bank (ECB) will continue its prudential supervisory functions related to credit institutions’ risk management processes and internal controls, with National Competent Authorities (NCAs) reporting relevant findings to the ECB.

The AI Act also calls for establishing the European Artificial Intelligence Office, a new EU-level body comprising Member State representatives. This office will facilitate effective and harmonized implementation of the AI Act, promote the interests of the AI ecosystem, and carry out advisory tasks such as issuing opinions and recommendations.

At Yields, our model risk management software is designed to help financial institutions navigate these new regulatory landscapes. Our solutions ensure compliance with AI Act requirements by providing robust documentation, risk management processes, and monitoring capabilities. By partnering with us, financial institutions can confidently deploy high-risk AI systems, secure in the knowledge that they meet all necessary regulations.