⚖️ AI Fairness & Bias Assessment

Calculate demographic parity, equalized odds, equal opportunity, and disparate impact. The EU AI Act mandates fairness audits for high-risk AI; cross-check results with IBM AIF360 or Microsoft Fairlearn.

Concept Fundamentals

  • Demographic Parity (group fairness): P(Ŷ=1) equal across protected attribute A
  • Disparate Impact (EEOC threshold): positive-rate ratio ≥ 0.8 (4/5 rule)
  • Equalized Odds (conditional fairness): TPR and FPR parity across groups
  • Application (Responsible AI): bias audits & compliance

Use the calculator below to compute fairness metrics.

Why This ML Metric Matters

Why: Fairness audits are required for high-risk AI under the EU AI Act. Demographic parity, equalized odds, and disparate impact detect bias across protected groups.

How: Demographic parity difference = |rate_A - rate_B|. Equalized odds compares both TPR and FPR differences across groups. Disparate impact = min/max positive-rate ratio (four-fifths rule: ≥ 0.8 passes). A minimal Python sketch of these formulas follows the list below.

  • Four-fifths rule ≥ 0.8
  • EU AI Act 2024
  • AIF360, Fairlearn
  • Hardt 2016
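
A minimal sketch of the formulas above, assuming the per-group positive-prediction rates from this page's worked example (0.50 for Group A, 0.25 for Group B):

```python
# Headline fairness metrics from per-group positive-prediction rates.
# Rates taken from the worked example on this page (Groups A and B).
rate_a, rate_b = 0.50, 0.25  # P(Y_hat = 1) per group

dp_diff = abs(rate_a - rate_b)                        # demographic parity difference
di_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # disparate impact ratio

print(f"Demographic parity difference: {dp_diff:.2f}")  # 0.25
print(f"Disparate impact ratio:        {di_ratio:.2f}")  # 0.50
print("PASS" if di_ratio >= 0.8 else "FAIL")             # FAIL under the 4/5 rule
```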
EU AI Act 2024

AI Fairness & Bias Calculator

Enter confusion-matrix counts for two protected groups to calculate demographic parity, equalized odds, equal opportunity, and disparate impact.


Confusion Matrix by Protected Group

Inputs: a protected attribute (e.g., Gender, Race, Age) and, for each of Group A and Group B, four confusion-matrix counts: true positives, false positives, true negatives, false negatives. A threshold for positive prediction can also be recorded (informational only).
fairness_audit.sh
$ fairness_audit --attr=Gender --groupA=80,20,85,15 --groupB=45,5,120,30
Fairness Verdict: FAIL
Demographic Parity   0.2500
Equal Opp. Diff      0.2421
EO TPR Diff          0.2421
EO FPR Diff          0.1505
Disparate Impact     0.5000
Group A — TPR / FPR / Prec / Acc: 84.2% / 19.0% / 80.0% / 82.5%
Group B — TPR / FPR / Prec / Acc: 60.0% / 4.0% / 90.0% / 82.5%

[Charts: Fairness Metrics — Group A vs B; Disparate Impact & Fairness Radar]

⚠️ For educational and informational purposes only. Verify with a qualified professional.

🤖 AI & ML Facts

⚖️ EU AI Act 2024 mandates fairness audits for high-risk AI systems — EU AI Act

📊 IBM AIF360 and Microsoft Fairlearn are industry-standard fairness toolkits — AIF360/Fairlearn

🎯 Hardt et al. 2016 defined equalized odds and equal opportunity — Hardt 2016

📐 Four-fifths rule: disparate impact ratio ≥ 0.8 typically passes legal scrutiny — EEOC

📋 Key Takeaways

  • Fairness is multi-dimensional — no single metric captures all aspects
  • Demographic parity: equal positive prediction rates across groups
  • Equalized odds: equal TPR and FPR across groups (Hardt et al. 2016)
  • Equal opportunity: equal TPR only (qualified individuals treated equally)
  • Four-fifths rule: disparate impact ratio ≥ 0.8 is often considered acceptable
  • Context matters: hiring vs. loan vs. criminal justice require different metrics
  • Fairness constraints can reduce overall accuracy — tradeoffs exist

💡 Did You Know

⚖️ ProPublica found COMPAS had a 66% higher false positive rate for Black defendants vs. White defendants — an equalized odds violation
👔 Amazon scrapped an AI hiring tool that showed bias against women — it was trained on 10 years of male-dominated resumes
💳 Apple Card faced controversy in 2019 when users reported higher credit limits for men than women with similar profiles
🇪🇺 EU AI Act 2024 mandates fairness audits for high-risk AI systems — bias detection is now legally required
🔬 IBM AIF360 (2018) is an open-source toolkit with 70+ fairness metrics and 10+ mitigation algorithms
80% The four-fifths (80%) rule: if the selection rate for a protected group is <80% of the majority's, it may indicate discrimination
🔀 Intersectionality: bias can compound across multiple attributes (e.g., Black women vs. White men)
📊 Microsoft Fairlearn provides fairness dashboards and mitigation for classification and regression

📖 How It Works

1. Group Metrics

For each protected group (A, B), compute TP, FP, TN, FN. Then derive TPR, FPR, precision, accuracy, and positive prediction rate.

2. Demographic Parity

P(Ŷ=1|group) = (TP+FP)/(TP+FP+TN+FN). Parity holds when this rate is equal across groups; the absolute difference between group rates measures the violation.

3. Equalized Odds

TPR = TP/(TP+FN), FPR = FP/(FP+TN). Equalized odds requires both TPR and FPR to be equal across groups (Hardt 2016).

4. Disparate Impact

Ratio of positive prediction rates: min(rate_A, rate_B) / max(rate_A, rate_B). A ratio ≥ 0.8 passes the four-fifths rule.
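
A compact Python sketch of steps 1-4, using the Group A/B confusion matrices from the example run above; it reproduces the calculator's numbers. The final verdict rule (disparate impact alone against the 4/5 threshold) is an assumption; the calculator may combine metrics differently.

```python
def group_stats(tp, fp, tn, fn):
    """Step 1: rates derived from one group's confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "rate": (tp + fp) / total,  # positive prediction rate, P(Y_hat = 1)
        "tpr": tp / (tp + fn),      # true positive rate
        "fpr": fp / (fp + tn),      # false positive rate
        "prec": tp / (tp + fp),     # precision
        "acc": (tp + tn) / total,   # accuracy
    }

a = group_stats(80, 20, 85, 15)  # Group A from the example run
b = group_stats(45, 5, 120, 30)  # Group B

dp     = abs(a["rate"] - b["rate"])                             # step 2: 0.2500
eo_tpr = abs(a["tpr"] - b["tpr"])                               # step 3: 0.2421
eo_fpr = abs(a["fpr"] - b["fpr"])                               # step 3: 0.1505
di     = min(a["rate"], b["rate"]) / max(a["rate"], b["rate"])  # step 4: 0.5000

print("FAIL" if di < 0.8 else "PASS")  # assumed verdict: four-fifths rule only
```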

🎯 Expert Tips

No single fairness metric

Report multiple metrics. Demographic parity, equalized odds, and disparate impact can conflict — choose by context.

Check intersectionality

Bias can compound across race, gender, age. Stratify by multiple attributes when possible.

Context determines metric

Hiring: disparate impact. Criminal justice: equalized odds. Medical: equal opportunity. Match metric to use case.

Fairness vs. accuracy tradeoff

Enforcing fairness constraints (e.g., demographic parity) can reduce overall accuracy. Document and justify tradeoffs.

⚖️ This vs. AIF360 vs. Fairlearn vs. Manual

Tool — Mitigation
This Calculator — No (assessment only)
IBM AIF360 — Yes (10+ algorithms)
Microsoft Fairlearn — Yes (reduction, post-processing)
Manual (Excel) — No
Google What-If Tool — Visualization + analysis

❓ Frequently Asked Questions

What is demographic parity?

Demographic parity requires equal positive prediction rates across protected groups: P(Ŷ=1|A=0) = P(Ŷ=1|A=1). Violation = |rate_A - rate_B|.

When should I use equalized odds vs. equal opportunity?

Equalized odds: when both TPR and FPR matter (e.g., criminal risk). Equal opportunity: when only TPR matters for qualified individuals (e.g., hiring).

What is the four-fifths rule?

Disparate impact ratio ≥ 0.8. If the selection rate for a protected group is less than 80% of the majority, it may indicate discrimination (EEOC guidelines).

Can fairness metrics conflict?

Yes. Satisfying demographic parity can violate equalized odds and vice versa. Impossibility results show no single classifier can satisfy all fairness definitions simultaneously.
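
To make the conflict concrete, a toy example with assumed base rates: when groups have different base rates, even a perfect classifier satisfies equalized odds (TPR = 1, FPR = 0 in both groups) yet violates demographic parity.

```python
import numpy as np

# Assumed toy labels: Group A base rate 0.50, Group B base rate 0.25.
y_a = np.array([1] * 50 + [0] * 50)  # Group A ground truth
y_b = np.array([1] * 25 + [0] * 75)  # Group B ground truth

# A perfect classifier reproduces the labels exactly in both groups.
pred_a, pred_b = y_a.copy(), y_b.copy()

def tpr(y, p): return p[y == 1].mean()  # true positive rate
def fpr(y, p): return p[y == 0].mean()  # false positive rate

print(abs(tpr(y_a, pred_a) - tpr(y_b, pred_b)))  # 0.0 -> equalized odds holds
print(abs(fpr(y_a, pred_a) - fpr(y_b, pred_b)))  # 0.0
print(abs(pred_a.mean() - pred_b.mean()))        # 0.25 -> demographic parity fails
```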

What are legal requirements for AI fairness?

EU AI Act 2024 mandates fairness assessments for high-risk AI. US EEOC uses disparate impact. Sector-specific rules (credit, hiring) apply.

How do I mitigate bias?

Pre-processing (reweighting, resampling), in-processing (fairness constraints in training), post-processing (threshold adjustment per group). Use AIF360 or Fairlearn.
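
For the post-processing route, a minimal sketch using Fairlearn's ThresholdOptimizer, which learns per-group decision thresholds; the synthetic data and logistic model here are placeholders, not a recommended setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.metrics import equalized_odds_difference

# Placeholder synthetic data: one feature, binary label, binary group.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 1))
sensitive = rng.integers(0, 2, size=400)  # protected attribute
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=400) > 0).astype(int)

clf = LogisticRegression().fit(X, y)  # any fitted probabilistic classifier

# Learn per-group thresholds that approximately equalize TPR and FPR.
mitigator = ThresholdOptimizer(
    estimator=clf,
    constraints="equalized_odds",  # or "demographic_parity"
    prefit=True,
    predict_method="predict_proba",
)
mitigator.fit(X, y, sensitive_features=sensitive)
y_fair = mitigator.predict(X, sensitive_features=sensitive)

print(equalized_odds_difference(y, y_fair, sensitive_features=sensitive))
```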

What is intersectionality in fairness?

Bias can compound across multiple attributes (e.g., Black women). Stratify by multiple protected attributes to detect intersectional bias.

Why does fairness reduce accuracy?

Enforcing parity constraints can prevent the model from using group-specific optimal thresholds. Tradeoff between fairness and utility is often unavoidable.

📊 Fairness by the Numbers

0.8 — Four-fifths Threshold
2024 — EU AI Act
10+ — Fairness Metrics
66% — COMPAS FP Gap

⚠️ Disclaimer: This calculator provides fairness metrics for educational and preliminary assessment. For production audits, use verified libraries (IBM AIF360, Microsoft Fairlearn) and consult legal/compliance experts. Fairness definitions can conflict; no single metric suffices. EU AI Act and sector-specific regulations may impose additional requirements. Results do not constitute legal advice.
