🎯

8 Practical FRIA Scenarios

Real-world examples of AI systems that require a fundamental rights impact assessment. Each scenario includes the affected rights, its risk level, and the mitigation measures required.

Real-World FRIA Scenarios

Work through eight practical scenarios that show why a FRIA is mandatory.

👤

1. Facial Recognition in Public Spaces

AI-powered urban surveillance system

CRITICAL

Scenario Description:

A city council deploys facial recognition on traffic-light and public-space cameras to identify wanted suspects and missing persons.

Affected Rights:

  • Privacy and data protection (Art. 8, EU Charter)
  • Freedom of movement and assembly
  • Non-discrimination (racial bias risk)
  • Presumption of innocence

Mandatory FRIA Requirements:

  • Independent accuracy testing (false positive rates by ethnicity)
  • Gender and racial bias analysis
  • Public consultation and democratic debate
  • Right to information about monitoring
  • Appeal mechanism
  • Specific enabling legislation
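
The first two requirements above are measurable. As an illustration only (hypothetical evaluation data; group labels "A" and "B" are placeholders for real demographic categories), a per-group false positive breakdown can be sketched in a few lines of Python:

```python
from collections import defaultdict

def false_positive_rates(records):
    """False positive rate of a face-matching system, broken down by group.

    Each record is (group, predicted_match, is_true_match); a group's FPR is
    false alarms divided by all true non-matches in that group.
    """
    fp = defaultdict(int)         # predicted a match where there was none
    negatives = defaultdict(int)  # all true non-matches per group
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / n for g, n in negatives.items() if n > 0}

# Hypothetical evaluation log: (group, system_said_match, ground_truth_match)
data = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, False),
]
rates = false_positive_rates(data)
# Group B is falsely flagged twice as often as group A (0.50 vs 0.25):
# exactly the kind of disparity the FRIA must document and mitigate.
```

Overall accuracy can look acceptable while one group bears most of the false alarms, which is why the assessment has to report error rates per group rather than a single aggregate figure.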

Relevant AI Act Articles:

Art. 5 (Remote Biometric Identification), Art. 10 (Data and Data Governance), Art. 27 (Mandatory FRIA), Annex III, point 1 (Biometrics)

💳

2. Credit Scoring Systems

AI for credit eligibility determination

HIGH

Scenario Description:

A bank uses AI to assess loan eligibility, considering variables like age, location, profession and credit history.

Affected Rights:

  • Right to property and access to essential services
  • Non-discrimination by origin or gender
  • Right to explanation of decision
  • Freedom of movement (housing access)

Mandatory FRIA Requirements:

  • Rigorous indirect discrimination testing
  • Accuracy validation by demographic group
  • Explainability: why was the request denied?
  • Right to human review and appeal
  • Transparency about variables used
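
One common way to screen for indirect discrimination is the disparate impact ratio: compare approval rates between the worst- and best-treated groups. A minimal sketch, with hypothetical approval counts and illustrative group names:

```python
def disparate_impact_ratio(outcomes):
    """Approval-rate ratio between the worst- and best-treated groups.

    `outcomes` maps group -> (approved, total). A ratio below 0.8 is the
    classic "four-fifths" red flag for possible indirect discrimination.
    """
    rates = {g: approved / total for g, (approved, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical loan approvals per (illustrative) demographic group
ratio = disparate_impact_ratio({"group_x": (80, 100), "group_y": (50, 100)})
# 0.50 / 0.80 = 0.625 -> below the 0.8 threshold, so the FRIA must
# investigate whether the model discriminates indirectly.
```

A low ratio is not proof of discrimination on its own, but it is a trigger for the deeper analysis the FRIA requires.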

Relevant AI Act Articles:

Art. 27 (FRIA), Annex III, point 5(b) (Creditworthiness and Credit Scoring)

💼

3. Automated Recruitment

AI candidate selection and screening systems

HIGH

Scenario Description:

A large company uses AI to analyze CVs, interview videos and social media history to automatically select candidates.

Affected Rights:

  • Right to work and employment access
  • Non-discrimination by gender, age, origin
  • Privacy (social media analysis)
  • Right to equal opportunities

Mandatory FRIA Requirements:

  • Gender, age, ethnicity bias audit
  • Equal opportunities testing
  • Clear consent for social media analysis
  • Explanation of rejection provided to candidate
  • Human review for final decisions
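
The equal opportunities requirement can be checked by comparing how often *qualified* candidates from each group actually advance. A sketch under assumed data (the screening log and group labels are hypothetical):

```python
def selection_rates_among_qualified(candidates):
    """Share of qualified candidates the screener advances, per group.

    Each candidate is (group, qualified, advanced). Comparable rates across
    groups is one simple equal-opportunity check for a screening model.
    """
    stats = {}
    for group, qualified, advanced in candidates:
        if qualified:
            sel, tot = stats.get(group, (0, 0))
            stats[group] = (sel + int(advanced), tot + 1)
    return {g: sel / tot for g, (sel, tot) in stats.items()}

# Hypothetical screening log: (group, was_qualified, was_advanced)
log = [
    ("men", True, True), ("men", True, True), ("men", True, False), ("men", False, False),
    ("women", True, True), ("women", True, False), ("women", True, False), ("women", False, False),
]
rates = selection_rates_among_qualified(log)
# Qualified men advance 2/3 of the time, qualified women only 1/3:
# a gap the bias audit has to explain or fix before deployment.
```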

Relevant AI Act Articles:

Art. 27 (FRIA), Annex III, point 4 (Employment)

🏥

4. AI in Medical Diagnosis

AI systems for medical imaging analysis

CRITICAL

Scenario Description:

A hospital implements AI for cancer diagnosis from X-rays, potentially influencing treatment and prognosis.

Affected Rights:

  • Right to life and health
  • Physical and mental integrity
  • Informed consent
  • Health data privacy

Mandatory FRIA Requirements:

  • Rigorous clinical validation (sensitivity, specificity)
  • Testing for genetic/ethnic characteristic bias
  • Explicit patient informed consent
  • Medical supervision in all decisions
  • System reliability documentation
  • Error reporting and correction mechanism
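
Sensitivity and specificity, the two headline numbers of clinical validation, come straight from the confusion matrix. A minimal sketch with invented validation counts (the figures below are illustrative, not real clinical data):

```python
def clinical_metrics(tp, fn, tn, fp):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    from a diagnostic confusion matrix."""
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}

# Hypothetical validation run: 90 cancers caught, 10 missed,
# 950 healthy patients correctly cleared, 50 false alarms.
m = clinical_metrics(tp=90, fn=10, tn=950, fp=50)
# sensitivity 0.90 (how many cancers are found),
# specificity 0.95 (how many healthy patients avoid a false alarm).
# The FRIA should report both per genetic/ethnic subgroup, not just overall.
```

A high overall sensitivity can hide a much lower one for an underrepresented subgroup, which is why the bias-testing requirement asks for these metrics per group.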

Relevant AI Act Articles:

Art. 6 and Annex I (AI in Medical Devices), Art. 27 (FRIA)

🚔

5. Predictive Policing

Crime prediction and behavioral analysis systems

CRITICAL

Scenario Description:

Police use AI to predict where crimes will occur and who might be dangerous, influencing patrols and investigations.

Affected Rights:

  • Presumption of innocence
  • Non-discrimination (documented racial bias in algorithms)
  • Privacy
  • Right to fair trial

Mandatory FRIA Requirements:

  • Rigorous independent racial bias testing
  • Accuracy validation by neighborhood/community
  • Public transparency
  • Mandatory human oversight
  • Cannot be sole basis for investigation
  • Specific legislation enabling the system

Relevant AI Act Articles:

Art. 5 (Prohibited Predictive Policing Practices), Art. 27 (FRIA), Annex III, point 6 (Law Enforcement)

🤝

6. Welfare and Social Benefits Systems

AI for benefits eligibility determination

HIGH

Scenario Description:

A social security agency uses AI to detect fraud and automatically determine eligibility for unemployment, housing and social support.

Affected Rights:

  • Right to social protection and solidarity
  • Right to family life
  • Privacy
  • Good administration

Mandatory FRIA Requirements:

  • Socio-economic impact analysis
  • Indirect discrimination testing
  • Decision explainability
  • Guaranteed right to human appeal
  • Clear data collection consent

Relevant AI Act Articles:

Art. 27 (FRIA), Annex III, point 5(a) (Public Assistance Benefits and Services)

🔇

7. Automated Content Moderation

AI for harmful content detection and removal

MODERATE

Scenario Description:

A social network uses AI to automatically detect and remove offensive content, hate speech and misinformation.

Affected Rights:

  • Freedom of expression and information
  • Privacy
  • Right to transparency
  • Good administration

Mandatory FRIA Requirements:

  • Algorithm testing to prevent unjust censorship (over-removal)
  • Impact analysis on freedom of expression
  • Clear and fair appeal process
  • Transparency about moderation rules
  • Human oversight for controversial removals

Relevant AI Act Articles:

Art. 9 (Risk Management), Art. 27 (FRIA)

🚗

8. Autonomous Vehicles and Critical Infrastructure

AI in transport and energy systems

CRITICAL

Scenario Description:

A city pilots autonomous vehicles for public transport or an operator implements AI for electrical grid control.

Affected Rights:

  • Right to life and physical integrity
  • Access to essential services
  • Safety and freedom of movement
  • Privacy (movement tracking)

Mandatory FRIA Requirements:

  • Extremely rigorous safety testing
  • Failure scenario analysis
  • Human oversight for critical situations
  • Manual fallback always available
  • Insurance and liability defined
  • Continuous safety monitoring

Relevant AI Act Articles:

Art. 27 (FRIA), Annex III, point 2 (Critical Infrastructure)

Checklist: Does Your System Require FRIA?

Use this checklist to determine if your AI system is high-risk.

❓ Is it one of the 8 scenarios above?

If your system matches any of the 8 scenarios listed above, it is almost certainly high-risk and requires a FRIA.

❓ Does it affect fundamental rights?

Systems affecting rights under the EU Charter of Fundamental Rights (privacy, non-discrimination, service access, etc.) require FRIA.

❓ Does it make important decisions about people?

If your system automates decisions significantly affecting people (employment, credit, housing, health, justice), it is high-risk.

❓ Does it monitor or identify people?

A system that identifies, monitors or analyzes people (biometrics, location, behavior) is very likely high-risk.

❓ Does it control critical infrastructure?

Systems controlling energy, water, transport, communications or healthcare are high-risk.

If you answered YES to any question above: Your organization must conduct a FRIA immediately.
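
The checklist reduces to a simple any-of test. As a rough triage sketch (the parameter names are illustrative, and this is a heuristic, not legal advice):

```python
def fria_required(matches_listed_scenario, affects_charter_rights,
                  automates_significant_decisions,
                  monitors_or_identifies_people,
                  controls_critical_infrastructure):
    """Mirror the checklist above: any single 'yes' suggests the system
    is high-risk and a FRIA is needed."""
    return any([matches_listed_scenario, affects_charter_rights,
                automates_significant_decisions,
                monitors_or_identifies_people,
                controls_critical_infrastructure])

needs_fria = fria_required(False, True, False, False, False)
# -> True: affecting Charter rights alone is enough to trigger an assessment.
```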

Next Steps

📚 Learn More

Explore comprehensive documentation on FRIA and the Artificial Intelligence Act.

Resources →

🚀 Start Now

Contact us to discuss your specific system and start a FRIA.

Request Assessment →

Recognize Your Scenario? Contact Us

Send a brief message and we will respond within 24 hours.