AI Fairness & Bias Assessment
Calculate demographic parity, equalized odds, equal opportunity, and disparate impact. The EU AI Act mandates fairness audits for high-risk AI; IBM AIF360 and Microsoft Fairlearn are the standard open-source toolkits.
Why This ML Metric Matters
Why: Fairness audits are required for high-risk AI under EU AI Act. Demographic parity, equalized odds, and disparate impact detect bias across protected groups.
How: Demographic parity violation = |rate_A - rate_B|, where rate is each group's positive prediction rate. Equalized odds compares TPR and FPR across groups. Disparate impact = min(rate_A, rate_B) / max(rate_A, rate_B); the four-fifths rule requires ≥ 0.8 (see the Fairlearn sketch below).
- Four-fifths rule ≥ 0.8
- EU AI Act 2024
- AIF360, Fairlearn
- Hardt 2016
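The same three quantities can be cross-checked in a few lines with Fairlearn. A minimal sketch, assuming a recent Fairlearn release; the toy arrays are illustrative only:

```python
import numpy as np
from fairlearn.metrics import (
    demographic_parity_difference,  # |rate_A - rate_B|
    demographic_parity_ratio,       # min/max rate ratio = disparate impact
    equalized_odds_difference,      # larger of the TPR and FPR gaps
)

# Toy data: 1 = positive prediction, two protected groups "A" and "B".
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
print(demographic_parity_ratio(y_true, y_pred, sensitive_features=group))
print(equalized_odds_difference(y_true, y_pred, sensitive_features=group))
```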
AI Fairness & Bias Calculator
Enter confusion-matrix counts (TP, FP, TN, FN) for two protected groups, A and B; the calculator computes fairness metrics for Group A vs. B and a disparate impact and fairness radar. Preloaded quick examples are available.
⚠️ For educational and informational purposes only. Verify with a qualified professional.
🤖 AI & ML Facts
EU AI Act 2024 mandates fairness audits for high-risk AI systems
— EU AI Act
IBM AIF360 and Microsoft Fairlearn are industry-standard fairness toolkits
— AIF360/Fairlearn
Hardt et al. 2016 defined equalized odds and equal opportunity
— Hardt 2016
Four-fifths rule: disparate impact ratio ≥ 0.8 typically passes legal scrutiny
— EEOC
📋 Key Takeaways
- Fairness is multi-dimensional — no single metric captures all aspects
- Demographic parity: equal positive prediction rates across groups
- Equalized odds: equal TPR and FPR across groups (Hardt et al. 2016)
- Equal opportunity: equal TPR only (qualified individuals treated equally)
- Four-fifths rule: disparate impact ratio ≥ 0.8 is often considered acceptable
- Context matters: hiring vs. lending vs. criminal justice require different metrics
- Fairness constraints can reduce overall accuracy — tradeoffs exist
📖 How It Works
1. Group Metrics
For each protected group (A, B), compute TP, FP, TN, FN. Then derive TPR, FPR, precision, accuracy, and positive prediction rate.
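A minimal sketch of this step in Python; the counts are illustrative, not real audit data:

```python
# Derive per-group rates from raw confusion-matrix counts (step 1).
def group_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    n = tp + fp + tn + fn
    return {
        "tpr": tp / (tp + fn),        # true positive rate (recall)
        "fpr": fp / (fp + tn),        # false positive rate
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / n,
        "rate": (tp + fp) / n,        # positive prediction rate
    }

# Illustrative counts for Group A: TP=40, FP=10, TN=40, FN=10
print(group_rates(40, 10, 40, 10))
# {'tpr': 0.8, 'fpr': 0.2, 'precision': 0.8, 'accuracy': 0.8, 'rate': 0.5}
```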
2. Demographic Parity
P(Ŷ=1|group) = (TP+FP)/(TP+FP+TN+FN). Parity holds when this rate is equal across groups; the absolute difference between groups measures the violation.
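Continuing with illustrative counts for both groups (Group B's numbers are likewise assumed):

```python
# Demographic parity (step 2): compare positive prediction rates.
rate_a = (40 + 10) / (40 + 10 + 40 + 10)  # Group A: TP=40, FP=10, TN=40, FN=10 -> 0.50
rate_b = (20 + 10) / (20 + 10 + 55 + 15)  # Group B: TP=20, FP=10, TN=55, FN=15 -> 0.30
violation = abs(rate_a - rate_b)          # 0.20 -> parity does not hold
print(rate_a, rate_b, violation)
```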
3. Equalized Odds
TPR = TP/(TP+FN), FPR = FP/(FP+TN). Equalized odds requires both TPR and FPR to be equal across groups (Hardt 2016).
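Using the same illustrative counts:

```python
# Equalized odds (step 3): both TPR and FPR must match across groups.
tpr_a, fpr_a = 40 / (40 + 10), 10 / (10 + 40)  # Group A: 0.800, 0.200
tpr_b, fpr_b = 20 / (20 + 15), 10 / (10 + 55)  # Group B: ~0.571, ~0.154
tpr_gap = abs(tpr_a - tpr_b)                   # ~0.229
fpr_gap = abs(fpr_a - fpr_b)                   # ~0.046
# Equal opportunity relaxes this requirement to the TPR gap alone.
print(tpr_gap, fpr_gap)
```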
4. Disparate Impact
The ratio of positive prediction rates: min(rate_A, rate_B) / max(rate_A, rate_B). A ratio ≥ 0.8 passes the four-fifths rule.
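And the final step on the same illustrative numbers:

```python
# Disparate impact (step 4): ratio of positive prediction rates.
rate_a, rate_b = 0.50, 0.30                     # from step 2
di = min(rate_a, rate_b) / max(rate_a, rate_b)  # 0.60
print(f"disparate impact = {di:.2f}, "
      f"four-fifths rule: {'pass' if di >= 0.8 else 'fail'}")
# -> disparate impact = 0.60, four-fifths rule: fail
```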
🎯 Expert Tips
No single fairness metric
Report multiple metrics. Demographic parity, equalized odds, and disparate impact can conflict — choose by context.
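One way to follow this tip is Fairlearn's MetricFrame, which tabulates several metrics per group at once. A sketch, assuming recent Fairlearn and scikit-learn releases; the arrays are illustrative:

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score
from fairlearn.metrics import MetricFrame, false_positive_rate, selection_rate

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# One row per group, one column per metric.
mf = MetricFrame(
    metrics={"tpr": recall_score, "fpr": false_positive_rate,
             "selection_rate": selection_rate, "accuracy": accuracy_score},
    y_true=y_true, y_pred=y_pred, sensitive_features=group,
)
print(mf.by_group)      # per-group values
print(mf.difference())  # worst-case gap per metric
```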
Check intersectionality
Bias can compound across race, gender, age. Stratify by multiple attributes when possible.
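A minimal pandas sketch of intersectional stratification; the column names and data are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "race":   ["B", "B", "W", "W", "B", "W", "B", "W"],
    "gender": ["F", "M", "F", "M", "F", "F", "M", "M"],
    "pred":   [0, 1, 1, 1, 0, 1, 1, 1],
})

# Positive prediction rate for each (race, gender) subgroup.
# Here each marginal group selects at 0.5 or above, yet the
# (B, F) subgroup sits at 0.0: bias compounds at the intersection.
print(df.groupby(["race", "gender"])["pred"].mean())
```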
Context determines metric
Hiring: disparate impact. Criminal justice: equalized odds. Medical: equal opportunity. Match metric to use case.
Fairness vs. accuracy tradeoff
Enforcing fairness constraints (e.g., demographic parity) can reduce overall accuracy. Document and justify tradeoffs.
⚖️ This vs. AIF360 vs. Fairlearn vs. Manual
| Tool | Demographic Parity | Equalized Odds | Disparate Impact | Mitigation |
|---|---|---|---|---|
| This Calculator | ✓ | ✓ | ✓ | No — assessment only |
| IBM AIF360 | ✓ | ✓ | ✓ | Yes — 10+ algorithms |
| Microsoft Fairlearn | ✓ | ✓ | ✓ | Yes — reduction, post-processing |
| Manual (Excel) | ✓ | ✓ | ✓ | No |
| Google What-If Tool | ✓ | ✓ | ✓ | Visualization + analysis |
❓ Frequently Asked Questions
What is demographic parity?
Demographic parity requires equal positive prediction rates across protected groups: P(Ŷ=1|A=0) = P(Ŷ=1|A=1). Violation = |rate_A - rate_B|.
When should I use equalized odds vs. equal opportunity?
Equalized odds: when both TPR and FPR matter (e.g., criminal risk). Equal opportunity: when only TPR matters for qualified individuals (e.g., hiring).
What is the four-fifths rule?
A disparate impact ratio ≥ 0.8. If the selection rate for a protected group is less than 80% of the rate for the group with the highest selection rate, this may indicate adverse impact under EEOC guidelines.
Can fairness metrics conflict?
Yes. Satisfying demographic parity can violate equalized odds and vice versa. Impossibility results show no single classifier can satisfy all fairness definitions simultaneously.
What are legal requirements for AI fairness?
The EU AI Act (2024) mandates fairness assessments for high-risk AI. In the US, the EEOC applies disparate impact analysis, and sector-specific rules (credit, hiring) impose further requirements.
How do I mitigate bias?
Pre-processing (reweighting, resampling), in-processing (fairness constraints in training), post-processing (threshold adjustment per group). Use AIF360 or Fairlearn.
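As one concrete post-processing example, Fairlearn's ThresholdOptimizer learns group-specific thresholds on top of an already-fitted model. A sketch on synthetic data, assuming recent Fairlearn and scikit-learn APIs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.postprocessing import ThresholdOptimizer

# Synthetic data with a group-correlated signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
group = rng.choice(["A", "B"], size=200)
y = (X[:, 0] + (group == "A") * 0.5 + rng.normal(scale=0.5, size=200) > 0).astype(int)

base = LogisticRegression().fit(X, y)
mitigator = ThresholdOptimizer(
    estimator=base, constraints="demographic_parity",
    prefit=True, predict_method="predict_proba",
)
mitigator.fit(X, y, sensitive_features=group)
y_fair = mitigator.predict(X, sensitive_features=group, random_state=0)

for g in ["A", "B"]:
    print(g, y_fair[group == g].mean())  # selection rates should now be close
```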
What is intersectionality in fairness?
Bias can compound across multiple attributes (e.g., Black women). Stratify by multiple protected attributes to detect intersectional bias.
Why does fairness reduce accuracy?
Enforcing parity constraints can prevent the model from using group-specific optimal thresholds. Tradeoff between fairness and utility is often unavoidable.
⚠️ Disclaimer: This calculator provides fairness metrics for educational and preliminary assessment. For production audits, use verified libraries (IBM AIF360, Microsoft Fairlearn) and consult legal/compliance experts. Fairness definitions can conflict; no single metric suffices. EU AI Act and sector-specific regulations may impose additional requirements. Results do not constitute legal advice.