Session 6: Training Data & Fairness
Bias in AI means the system is unfair: it treats different groups of people differently. The bias is usually unintentional, but the impact is real.
Important: This isn't about "bad AI" - it's about who creates the data and what data we choose to train on
One of the clearest examples of AI bias comes from facial recognition accuracy. The 2018 Gender Shades study found that commercial gender-classification systems misclassified darker-skinned women at error rates of up to roughly 35%, versus under 1% for lighter-skinned men.
Real consequence: If police rely on biased facial recognition, innocent people can be wrongfully arrested!
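How do you detect this kind of disparity? By auditing a model's accuracy separately for each group instead of reporting one overall number. Here is a minimal sketch; the groups and numbers are hypothetical, invented purely for illustration:

```python
# Minimal sketch: auditing a classifier's accuracy per demographic group.
# All data below is hypothetical, invented for illustration.

def accuracy_by_group(records):
    """records: list of (group, predicted_label, actual_label) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical audit: the same model is far less accurate for group_b.
records = (
    [("group_a", 1, 1)] * 97 + [("group_a", 0, 1)] * 3 +   # 97% accurate
    [("group_b", 1, 1)] * 65 + [("group_b", 0, 1)] * 35    # 65% accurate
)
print(accuracy_by_group(records))  # {'group_a': 0.97, 'group_b': 0.65}
```

A single overall accuracy (here 81%) would hide the gap entirely - which is why per-group reporting matters.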
Amazon built an experimental AI to screen job applicants, trained on a decade of past resumes - most of which came from men. The system learned to penalize resumes mentioning the word "women's," and Amazon scrapped it in 2018.
Lesson: If you train AI on biased historical data, it will be biased!
Some police departments use AI to predict where crime will happen. But these models are trained on historical arrest data, so they send patrols back to already over-policed minority neighborhoods - and more patrols produce more recorded crime, which reinforces the prediction in a feedback loop.
Usually the bias is unintentional - but intention doesn't matter, impact does!
Making AI "fair" is harder than it sounds. Different fairness definitions can conflict!
Reality: When groups have different base rates, it is mathematically impossible to satisfy every fairness metric at once - for example, equal selection rates and perfect accuracy can't both hold
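A small numerical sketch makes the conflict concrete. The base rates below are hypothetical, chosen only to show the tension between two common fairness criteria:

```python
# Minimal sketch (hypothetical numbers): when base rates differ between
# groups, a perfectly accurate classifier cannot also satisfy
# demographic parity (equal selection rates across groups).

# Hypothetical fraction of qualified people ("base rate") per group:
base_rate = {"group_a": 0.6, "group_b": 0.3}

# A perfectly accurate classifier selects exactly the qualified people,
# so its selection rate per group equals the base rate:
selection_rate = dict(base_rate)
print(selection_rate)  # unequal rates -> violates demographic parity

# Enforcing demographic parity instead (say, selecting 45% from each
# group) forces errors: group_a loses qualified people, group_b gains
# unqualified ones.
parity_rate = 0.45
errors = {g: abs(base_rate[g] - parity_rate) for g in base_rate}
print(errors)  # nonzero for both groups -> accuracy below 100%
```

Either the selection rates differ (parity fails) or the classifier makes errors (accuracy fails) - you must choose which fairness notion matters for the application.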
Hope: Awareness is growing. Companies and regulators are taking bias seriously!
You understand AI's dark side - and how to make it better
Next Chapter: Building the Web with HTML!