Grades 7-8 | Chapter 2: Understanding AI

AI Bias & Ethics ⚖️

The Dark Side of AI

Session 6: Training Data & Fairness

What Is Bias in AI? 🎯

Bias in AI means the system is unfair - it treats different groups of people differently. The unfairness is usually unintentional, but the impact is real.

Key Concept:

  • The Problem: AI learns from data, and data reflects human biases
  • The Result: AI can discriminate against groups of people
  • The Stakes: Bias can deny loans, jobs, freedom, healthcare

Important: This isn't about "bad AI" - it's about who creates the data and which data we choose to train on.

Case Study: Facial Recognition 👤

One of the clearest examples of AI bias comes from facial recognition accuracy.

The Problem:

  • In the 2018 "Gender Shades" study, commercial facial recognition was about 99% accurate for lighter-skinned men
  • The same systems had error rates of up to roughly 35% for darker-skinned women
  • Why? The datasets used to build and test them were overwhelmingly lighter-skinned faces (around 80% in some widely used benchmarks)
  • Impact: Darker-skinned women were misidentified far more often

Real consequence: If police use biased facial recognition, innocent people get arrested!
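
Here is a minimal sketch of what "measuring accuracy separately for each group" looks like in practice. The numbers are invented for illustration, not the real study results:

```python
# Invented example results: 1 = the system identified the face correctly, 0 = it got it wrong
results = {
    "lighter-skinned men":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # 10/10 correct
    "darker-skinned women": [1, 0, 1, 0, 0, 1, 0, 1, 0, 1],  # 5/10 correct
}

# Accuracy computed per group shows the gap...
for group, outcomes in results.items():
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{group}: {accuracy:.0%} accurate")

# ...while a single overall number hides it
all_outcomes = [o for outcomes in results.values() for o in outcomes]
print(f"overall: {sum(all_outcomes) / len(all_outcomes):.0%} accurate")
```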

Case Study: Hiring AI 💼

Amazon created an AI to screen job applicants. It had major bias.

What Happened:

  • Trained on historical hiring data (mostly men in tech)
  • AI learned to prefer men over women for technical roles
  • Female applicants automatically downranked
  • Amazon scrapped the system - the bias couldn't be reliably fixed

Lesson: If you train AI on biased historical data, it will be biased!
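
A toy sketch of how this happens, with made-up résumés and a deliberately simple scoring rule (nothing here is Amazon's actual system): if the historical hiring labels are biased, the scores learned from them are too:

```python
# Hypothetical historical data: (words on the résumé, was the person hired?)
history = [
    ({"python", "chess club"}, True),
    ({"java", "football"}, True),
    ({"python", "women's chess club"}, False),   # past bias baked into the labels
    ({"java", "women's coding club"}, False),
]

def word_scores(history):
    """Score each word by how often it appeared on résumés of people who were hired."""
    counts = {}
    for words, hired in history:
        for word in words:
            seen, hires = counts.get(word, (0, 0))
            counts[word] = (seen + 1, hires + (1 if hired else 0))
    return {word: hires / seen for word, (seen, hires) in counts.items()}

scores = word_scores(history)
print(scores["chess club"])          # 1.0
print(scores["women's chess club"])  # 0.0 -- the word itself now drags a résumé's score down
```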

Case Study: Predictive Policing 🚔

Some police departments have used AI to predict where crime will happen. In practice, this led to over-policing of minority neighborhoods.

The Problem:

  • AI trained on historical arrest data
  • Historical data showed more arrests in minority neighborhoods
  • AI predicted more crime in same neighborhoods
  • Police sent more officers there (self-fulfilling prophecy!)
  • More arrests → more data showing crime there → vicious cycle
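
A tiny simulation sketch of that loop, using invented numbers: both neighborhoods have the same real crime rate, but the one with more past recorded arrests keeps getting more patrols, which keep producing more recorded arrests:

```python
# Both neighborhoods have the same real crime rate; only the starting records differ
recorded_arrests = {"Neighborhood A": 100, "Neighborhood B": 50}
TOTAL_PATROLS = 30

for year in range(1, 4):
    total = sum(recorded_arrests.values())
    for place, past_arrests in list(recorded_arrests.items()):
        # The "prediction" is just where past arrests were recorded...
        patrols = TOTAL_PATROLS * past_arrests / total
        # ...and more patrols mean more arrests get recorded there,
        # even though real crime is identical in both places
        new_arrests = round(patrols * 2)
        recorded_arrests[place] += new_arrests
        print(f"Year {year}: {place} gets {patrols:.0f} patrols, records {new_arrests} new arrests")
```

Neighborhood A keeps getting twice the patrols every year, even though nothing about the real crime rate justifies it.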

Where Bias Comes From 📊

Sources of Bias:

  • Data bias: Training data is skewed (missing some groups)
  • Historical bias: Data reflects past discrimination
  • Sample bias: Training set isn't representative
  • Labeling bias: Humans making labels are biased
  • Measurement bias: Some groups measured differently

Usually the bias is unintentional - but intention doesn't matter, impact does!
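
One simple check that catches data bias early is just counting how many training examples each group gets. A minimal sketch with a hypothetical photo dataset:

```python
from collections import Counter

# Hypothetical labels for who appears in each training photo
training_photos = (["lighter-skinned man"] * 800 + ["lighter-skinned woman"] * 120
                   + ["darker-skinned man"] * 50 + ["darker-skinned woman"] * 30)

counts = Counter(training_photos)
total = len(training_photos)
for group, count in counts.most_common():
    print(f"{group}: {count} photos ({count / total:.0%} of the data)")
```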

More Real Examples 😟

Other Documented Cases:

  • Credit scoring: Algorithms deny loans to minorities at higher rates
  • Healthcare: Medical algorithm recommended less treatment for Black patients
  • School systems: Algorithms flagged more students of color for discipline
  • Language AI: Gender bias in word associations (nurse=female, engineer=male)

How to Fix Bias 🔧

Solutions:

  • Diverse training data: Include all demographic groups
  • Diverse teams: People from different backgrounds spot bias better
  • Regular audits: Test AI on different groups to find bias
  • Transparency: Tell people when AI is making decisions
  • Human oversight: Have humans review important AI decisions
  • Fairness metrics: Measure accuracy separately for each group
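
A sketch of what a very simple automated audit might look like (the function, groups, and the 5% gap threshold are all made up for illustration; real audits are more thorough):

```python
def audit(per_group_accuracy, max_gap=0.05):
    """Flag any group whose accuracy falls well below the best-performing group."""
    best = max(per_group_accuracy.values())
    return {group: acc for group, acc in per_group_accuracy.items()
            if best - acc > max_gap}

# Hypothetical test results for a face-matching system
per_group_accuracy = {"group A": 0.99, "group B": 0.97, "group C": 0.78}
print(audit(per_group_accuracy))  # {'group C': 0.78} -> fix before launch
```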

The Fairness Challenge ⚖️

Making AI "fair" is harder than it sounds. Different fairness definitions can conflict!

Fairness Tradeoffs:

  • Equal accuracy: Same error rate for all groups (but requires different thresholds!)
  • Equal outcomes: Same percentage approved from each group (but groups might have different qualifications!)
  • Individual fairness: Similar people treated similarly (but how do you define "similar"?)

Reality: You can't satisfy every fairness definition at once - improving one often makes another worse
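
A tiny made-up example of why: in the sketch below the model is equally accurate for both groups, yet the approval rates differ because the invented repayment rates differ. Forcing equal approval rates here would mean deliberately making wrong decisions for some applicants:

```python
# Hypothetical loan decisions: (model approved?, person actually repaid?) for two groups
decisions = {
    "group A": [(True, True), (True, True), (True, True), (False, False)],
    "group B": [(True, True), (False, False), (False, False), (False, False)],
}

for group, rows in decisions.items():
    approval_rate = sum(approved for approved, _ in rows) / len(rows)
    accuracy = sum(approved == repaid for approved, repaid in rows) / len(rows)
    print(f"{group}: approval rate {approval_rate:.0%}, accuracy {accuracy:.0%}")

# Both groups get 100% accuracy, but very different approval rates (75% vs 25%).
# Equalizing the approval rates would require approving people the model
# believes won't repay, or rejecting people it believes will.
```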

Your Role: Critical Thinking 🧠

Questions to Ask About Any AI System:

  • What was it trained on? (What data?)
  • Does that data represent all groups fairly?
  • Has anyone tested for bias?
  • Who benefits from this AI? Who might be harmed?
  • Is there human oversight?
  • Can the decision be appealed?
  • Is it transparent how it works?

The Bigger Picture: AI Justice 🌍

This Matters Because:

  • AI is making increasingly important decisions (loans, jobs, parole, education, healthcare)
  • Biased AI can perpetuate and amplify existing discrimination
  • Tech companies have a responsibility to test for bias
  • We need YOU: Future developers who understand bias and build fair AI

Hope: Awareness is growing. Companies and regulators are taking bias seriously!

What We Learned 🎓

  • AI bias means the system treats groups unfairly
  • Bias often comes from biased training data
  • Real examples: Facial recognition, hiring, policing, healthcare
  • Bias can have serious consequences (freedom, jobs, safety)
  • Solutions: Diverse data, diverse teams, audits, transparency
  • You should question AI systems and ask about bias!
  • This is about justice and equality, not just tech

Bias Awareness Complete! 🎉

You understand AI's dark side - and how to make it better

Next Chapter: Building the Web with HTML!