Real-world examples of AI systems that require a Fundamental Rights Impact Assessment (FRIA) under the EU AI Act. Each scenario identifies the affected rights, the risk level, and possible mitigation measures.
The eight practical scenarios below demonstrate why a FRIA is mandatory.
AI-powered urban surveillance system
A city council implements facial recognition at traffic lights and public cameras to identify criminals and missing persons.
Relevant provisions: Art. 27 (Mandatory FRIA); Annex III, point 1 (Biometrics); Arts. 8–15 (Requirements for high-risk AI systems)
AI for credit eligibility determination
A bank uses AI to assess loan eligibility, considering variables like age, location, profession and credit history.
Relevant provisions: Annex III, point 5 (Access to essential private and public services); Art. 27 (FRIA)
AI candidate selection and screening systems
A large company uses AI to analyze CVs, interview videos and social media history to automatically select candidates.
Relevant provisions: Annex III, point 4 (Employment and workers' management); Art. 27 (FRIA)
AI systems for medical imaging analysis
A hospital implements AI for cancer diagnosis from X-rays, potentially influencing treatment and prognosis.
Relevant provisions: Annex III, point 5 (Access to essential services, including healthcare); Art. 27 (FRIA)
Crime prediction and behavioral analysis systems
Police use AI to predict where crimes will occur and who might be dangerous, influencing patrols and investigations.
Relevant provisions: Annex III, point 6 (Law enforcement); Art. 27 (FRIA)
AI for benefits eligibility determination
A social security agency uses AI to detect fraud and automatically determine eligibility for unemployment, housing and social support.
Relevant provisions: Annex III, point 5 (Access to essential public services and benefits); Art. 27 (FRIA)
AI for harmful content detection and removal
A social network uses AI to automatically detect and remove offensive content, hate speech and misinformation.
Relevant provisions: Arts. 8–15 (Requirements for high-risk AI systems); Art. 27 (FRIA)
AI in transport and energy systems
A city pilots autonomous vehicles for public transport or an operator implements AI for electrical grid control.
Relevant provisions: Annex III, point 2 (Critical infrastructure); Art. 27 (FRIA)
Use this checklist to determine whether your AI system is high-risk.
Does your system match any of the 8 scenarios listed above? If so, it falls under Annex III: it is high-risk and requires a FRIA.
Does your system affect rights protected by the EU Charter of Fundamental Rights (privacy, non-discrimination, access to services, etc.)? Such systems require a FRIA.
Does your system automate decisions that significantly affect people's lives (employment, credit, housing, health, justice)? If so, it is high-risk.
Does your system identify, monitor or analyze people (biometrics, location, behavior)? Such systems are typically high-risk.
Does your system control critical infrastructure (energy, water, transport, communications or healthcare)? If so, it is high-risk.
If you answered YES to any question above, your organization must conduct a FRIA before deployment.
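The checklist above can be sketched as a small triage function. This is a minimal illustration, not a legal tool: the class and field names are my own shorthand for the five checklist questions, not terminology from the AI Act.

```python
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    """One YES/NO answer per checklist question (illustrative field names)."""
    matches_annex_iii_scenario: bool      # matches one of the 8 scenarios above
    affects_charter_rights: bool          # privacy, non-discrimination, service access...
    automates_significant_decisions: bool # employment, credit, housing, health, justice
    monitors_or_identifies_people: bool   # biometrics, location, behavior
    controls_critical_infrastructure: bool  # energy, water, transport, comms, healthcare


def fria_required(profile: AISystemProfile) -> bool:
    """A FRIA is required if any checklist question is answered YES."""
    return any((
        profile.matches_annex_iii_scenario,
        profile.affects_charter_rights,
        profile.automates_significant_decisions,
        profile.monitors_or_identifies_people,
        profile.controls_critical_infrastructure,
    ))


# Example: the credit-eligibility scenario (bank loan scoring)
credit_ai = AISystemProfile(
    matches_annex_iii_scenario=True,
    affects_charter_rights=True,
    automates_significant_decisions=True,
    monitors_or_identifies_people=False,
    controls_critical_infrastructure=False,
)
print(fria_required(credit_ai))  # True
```

A single YES is enough to trigger the obligation, which is why the function uses `any()` rather than requiring all criteria to be met.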
Explore comprehensive documentation on FRIA and the Artificial Intelligence Act.