The FRIA Guide: Making Fundamental Rights Impact Assessments Work for Financial Institutions

A niche obligation with broad implications
Among the many provisions of the EU AI Act, the requirement to conduct a Fundamental Rights Impact Assessment (FRIA) stands out for its specificity. Unlike most obligations in the Regulation, which fall primarily on providers, it applies directly to certain deployers.
For financial institutions, this is particularly relevant.
Where AI systems are used for credit scoring or for risk assessment and pricing in life and health insurance, a FRIA may become mandatory.
Despite this, awareness remains limited. Many firms are familiar with data protection impact assessments under the GDPR, but fewer have considered how fundamental rights assessments differ in scope and purpose.
The legal basis: Article 27
The obligation is set out in Article 27 of the Regulation. In essence, it requires certain deployers of high-risk AI systems to assess the impact of those systems on fundamental rights before putting them into use.
While the exact wording is detailed, the underlying logic is straightforward:
systems that materially affect individuals’ lives must be evaluated not only for technical performance, but for their broader societal effects.
Who must conduct a FRIA?
The requirement applies to deployers of high-risk AI systems in specific contexts, including:
Evaluating the creditworthiness of natural persons or establishing their credit score
Risk assessment and pricing in relation to natural persons in life and health insurance
Deployment by bodies governed by public law or by private entities providing public services
These are precisely the areas where fintech firms are most active.
The rationale is consistent with the broader framework of the AI Act:
systems that influence economic participation and opportunity carry heightened risks to fundamental rights such as non-discrimination and equal treatment.
What does a FRIA involve?
A FRIA is not a purely formal exercise. It requires a structured assessment of how an AI system may affect individuals and groups.
In practical terms, this includes the following steps (a minimal documentation sketch in code follows the list):
1. Defining the system and its purpose
What decisions does the AI system support or automate?
Who is affected by those decisions?
2. Identifying affected groups
Customers, applicants, or policyholders
Potentially vulnerable groups
Indirectly affected individuals
3. Assessing risks to fundamental rights
Relevant rights include:
Non-discrimination
Data protection and privacy
Access to essential services
Transparency and due process
The AI Act emphasises the protection of these rights as a core objective of the Regulation:
The Regulation aims to ensure “a high level of protection of […] fundamental rights” (Recital 1)
4. Evaluating mitigation measures
Are there safeguards against bias or unfair outcomes?
Is human oversight meaningful and effective?
Can decisions be explained and challenged?
5. Documenting and updating the assessment
A FRIA is not static. It must be:
Documented clearly
Updated when the system changes
Integrated into ongoing governance processes
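
As a rough illustration, the sketch below expresses the five steps as a minimal documentation record in Python. The class and field names (`FriaRecord`, `Risk`, and so on) are illustrative assumptions: Article 27 prescribes the content of the assessment, not any particular data format.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Risk:
    right: str          # e.g. "non-discrimination"
    description: str    # how the system may affect this right
    mitigation: str     # safeguard or control addressing the risk


@dataclass
class FriaRecord:
    # Step 1: the system and its purpose
    system_name: str
    purpose: str        # decisions the system supports or automates
    # Step 2: affected groups
    affected_groups: list[str] = field(default_factory=list)
    # Steps 3 and 4: risks to fundamental rights and their mitigations
    risks: list[Risk] = field(default_factory=list)
    # Step 5: documentation and review metadata
    last_reviewed: date = field(default_factory=date.today)
    review_notes: str = ""


# Usage: a skeletal record for a hypothetical credit-scoring deployment.
fria = FriaRecord(
    system_name="retail-credit-scoring",
    purpose="Supports approval decisions for consumer credit applications",
    affected_groups=["applicants", "guarantors", "vulnerable borrowers"],
    risks=[Risk(
        right="non-discrimination",
        description="Proxy variables may correlate with protected attributes",
        mitigation="Periodic bias testing and human review of declines",
    )],
)
```

Keeping the assessment in a structured form like this makes the update and governance steps easier to integrate into existing review workflows.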
FRIA vs. GDPR DPIA: overlap and divergence
Many organisations will notice similarities between a FRIA and a Data Protection Impact Assessment (DPIA) under the GDPR.
There are clear overlaps:
| Aspect | DPIA (GDPR) | FRIA (AI Act) |
|---|---|---|
| Focus | Personal data risks | Fundamental rights broadly |
| Scope | Data processing | AI system impact |
| Trigger | High-risk data processing | High-risk AI use |
The key difference lies in scope.
A DPIA focuses on privacy and data protection, whereas a FRIA addresses a wider set of rights and potential harms, including:
Economic exclusion
Discriminatory outcomes
Procedural fairness
In practice, many organisations will seek to align or integrate both assessments to avoid duplication.
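
A minimal sketch of what such integration could look like, assuming hypothetical field names (neither regulation prescribes a schema): the overlapping elements are carried over from an existing DPIA, and only the FRIA-specific parts remain to be assessed.

```python
# Hypothetical sketch: seeding a FRIA draft from an existing DPIA to avoid
# duplicating the overlapping elements. All field names are assumptions.

def seed_fria_from_dpia(dpia: dict) -> dict:
    """Carry over shared content; leave FRIA-specific elements open."""
    return {
        # Overlap: both assessments describe the system/processing and purpose
        "system_description": dpia["processing_description"],
        "purpose": dpia["purpose"],
        "affected_groups": dpia["data_subjects"],
        # Divergence: the wider rights assessment still has to be done
        "fundamental_rights_risks": [],   # e.g. exclusion, discrimination
        "oversight_measures": None,       # evaluated separately (see below)
    }


dpia = {
    "processing_description": "Automated creditworthiness evaluation",
    "purpose": "Decide consumer loan applications",
    "data_subjects": ["applicants"],
}
fria_draft = seed_fria_from_dpia(dpia)
```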
The role of human oversight
A central element of the FRIA is the evaluation of human oversight.
The AI Act consistently emphasises that AI systems should remain subject to meaningful human control. A FRIA must therefore examine:
Whether humans can intervene effectively
Whether they understand the system’s outputs
Whether oversight is more than a formality
This is particularly relevant in automated decision-making contexts, such as loan approvals.
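
One way to make those questions concrete is to record them as explicit, auditable checks. The sketch below is a minimal illustration; the criteria names and the idea of returning open gaps are assumptions, not requirements of the Act.

```python
# Illustrative oversight checklist; criteria names are assumptions.
OVERSIGHT_CHECKS = {
    "can_intervene": "A human can halt or override individual decisions",
    "understands_outputs": "Reviewers are trained to interpret the outputs",
    "not_a_formality": "Overrides actually occur and are logged",
}


def oversight_gaps(answers: dict[str, bool]) -> list[str]:
    """Return the unmet criteria, to be addressed in the FRIA."""
    return [desc for key, desc in OVERSIGHT_CHECKS.items()
            if not answers.get(key, False)]


# Usage: a loan-approval workflow where review has become rubber-stamping.
print(oversight_gaps({
    "can_intervene": True,
    "understands_outputs": True,
    "not_a_formality": False,
}))  # -> ['Overrides actually occur and are logged']
```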
Practical challenges
Implementing a FRIA raises several practical difficulties:
Defining affected groups in complex financial ecosystems
Measuring abstract risks, such as discrimination or exclusion
Ensuring documentation is sufficiently robust for regulatory scrutiny
Keeping assessments up to date as models evolve
These challenges are not purely legal. They require input from data science, risk management, and product teams.
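
For the last of those challenges, a simple trigger can help: compare the model version the FRIA was assessed against with the version actually deployed, and flag the assessment as stale when they diverge. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass


@dataclass
class FriaStatus:
    assessed_model_version: str   # version the FRIA last covered


def needs_review(fria: FriaStatus, deployed_version: str) -> bool:
    """A FRIA assessed against an older model version is stale."""
    return fria.assessed_model_version != deployed_version


fria = FriaStatus(assessed_model_version="v2.3")
if needs_review(fria, deployed_version="v2.4"):
    print("Model changed since the last FRIA; schedule an update")
```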
Strategic implications for fintech
The FRIA requirement signals a broader shift in regulatory expectations.
Firms are no longer assessed solely on whether their systems function correctly, but on whether they operate fairly and responsibly in a societal context.
For fintech companies, this has several implications:
Increased scrutiny of algorithmic decision-making
Greater importance of explainability and transparency
A need for cross-functional governance structures
At the same time, firms that develop robust assessment processes may gain an advantage in regulated markets.
Conclusion
The Fundamental Rights Impact Assessment is a relatively narrow provision within the EU AI Act, but its implications are wide-ranging.
It extends compliance beyond technical and legal correctness into the domain of societal impact. For financial institutions, where AI systems shape access to essential services, this represents a significant shift.
Understanding and operationalising the FRIA requirement is therefore not simply a matter of compliance. It is a step towards aligning AI systems with the broader expectations embedded in European law.