Everything you need to understand AI, ethics, bias, and how to build better systems. Videos, articles, datasets, and tools.
New to AI? Start with these friendly introductions, then dig deeper.
3Blue1Brown's "But what is a neural network?" explains the fundamental concepts in a visual, intuitive way.
Watch (15 min) VIDEO BEGINNER
Google's "Machine Learning Crash Course" teaches core ML concepts: supervised learning, classification, neural networks.
Read (2-4 hours) ARTICLE INTERACTIVE
"A.I. Explained" breaks down complex AI topics like transformers, language models, and generative AI in 10-20 minute episodes.
Listen (10-20 min/episode) PODCAST BEGINNER
No coding needed. Visit Teachable Machine (Google) to train a simple AI classifier by uploading photos.
Try It HANDS-ON FUN
Different problems need different AI types. Here's how to choose.
Algorithms that predict categories (spam or not spam) or numbers (house price). Used in: email filters, recommendations, predictions.
Learn on Kaggle CLASSIFICATION
Inspired by the brain, neural networks can recognize images, understand language, and generate text. Powering ChatGPT & image recognition.
Free Course NEURAL NETS
Creates new images, text, audio. Think: DALL-E (images), ChatGPT (text), Stable Diffusion. How they work & risks.
Read GENERATIVE
Powers Netflix recommendations, Spotify playlists, Amazon suggestions. How Netflix suggests your next binge-watch.
Coursera Course RECOMMENDATION
Teaches AI to understand & generate human language. Chatbots, translation, sentiment analysis, text summarization.
HuggingFace NLP Course NLP
AI that "sees" images: facial recognition, medical imaging, autonomous vehicles. How it works & privacy concerns.
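To make "predict categories" concrete, here is a toy classifier in plain Python. The keyword list and threshold are invented for illustration; a real classifier (like those in the Kaggle course above) learns its rules from labeled data instead of having them hand-picked.

```python
# Toy spam classifier: counts "spammy" keywords to predict a category.
# SPAM_WORDS and the threshold of 2 are made-up choices for this sketch;
# a trained model would learn equivalent weights from example emails.

SPAM_WORDS = {"free", "winner", "prize", "urgent", "click"}

def classify(message: str) -> str:
    """Return 'spam' if the message contains 2+ spammy keywords, else 'ham'."""
    words = message.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?:") in SPAM_WORDS)
    return "spam" if hits >= 2 else "ham"

print(classify("URGENT: click here to claim your FREE prize!"))  # -> spam
print(classify("Lunch tomorrow at noon?"))                       # -> ham
```

The same idea, with learned weights instead of a hand-made word list, is what email filters and recommendation systems do at scale.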
Learn CNN VISION
Real-world cases of AI gone wrong. Learn from them.
What happened: Judges used an AI system (COMPAS) to help decide whether defendants should be released. The AI was biased against Black defendants, rating them as higher risk of reoffending than white defendants who committed similar crimes.
Why it happened: The AI was trained on historical arrest data, which itself contains racial bias from policing practices.
Impact: Black people were wrongly kept in prison longer based on a "prediction."
Lesson: Be careful with historical data; it can encode past discrimination into the future.
What happened: Amazon built an AI to screen job applications. It consistently rejected women for technical roles, even though women were equally qualified.
Why it happened: Amazon trained the AI on 10 years of hiring data. Most hires in tech were men, so the AI learned "tech = male."
Impact: Women's careers harmed. Amazon's diversity goals failed.
Lesson: If your training data reflects past discrimination, your AI will perpetuate it.
What happened: A popular AI in hospitals predicted which patients needed extra care. It treated Black patients worse than white patients because it used "cost" as a proxy for health, and the healthcare system has historically spent less on Black patients.
Why it happened: Flawed proxies: cost ≠ health need.
Impact: Black patients were denied care they needed.
Lesson: Be careful what you measure. The "right" metric matters.
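The cost-vs-need failure above can be reproduced with a few lines of made-up numbers: when you rank patients by historical spending instead of actual sickness, an equally sick patient from a historically under-funded group gets demoted.

```python
# Synthetic illustration of the "cost as a proxy for health" failure.
# All numbers are invented: A and B are equally sick (same 'need'),
# but historical spending ('cost') was lower for B's group.

patients = [
    {"id": "A", "need": 8, "cost": 9000},  # historically well-funded group
    {"id": "B", "need": 8, "cost": 4000},  # historically under-funded group
    {"id": "C", "need": 3, "cost": 5000},  # much less sick
]

by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)
by_need = sorted(patients, key=lambda p: p["need"], reverse=True)

print("Ranked by cost proxy:", [p["id"] for p in by_cost])  # ['A', 'C', 'B']
print("Ranked by true need:", [p["id"] for p in by_need])   # ['A', 'B', 'C']
```

With the proxy, the barely sick patient C jumps ahead of the very sick patient B; the model is "accurate" at predicting cost while being wrong about the thing that matters.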
Google AI's Responsible AI practices: detecting bias, fairness metrics, interpretability. Practical checklist.
Read GUIDE
Fast.ai's practical course on bias: how it enters data, models, and outcomes. How to detect and fix it.
Course VIDEO
"Weapons of Math Destruction" by Cathy O'Neil. How algorithms are used unfairly in hiring, lending, criminal justice. Accessible & eye-opening.
Book BOOK
MIT's interactive workshop: explore bias in real datasets, learn how to build fairer models.
Visit MIT Media Lab INTERACTIVE
Why your training data matters. A lot.
Problem: You only train on data from a certain group (e.g., university students). The AI works well for them but fails for everyone else.
Problem: Past data reflects past discrimination (e.g., fewer women were hired). Your AI learns & perpetuates that discrimination.
Problem: How you label training data is biased. Example: Annotators might label "aggressive" speech differently for men vs. women.
Problem: Your data doesn't represent everyone. If your image dataset is 90% Western faces, it's terrible at recognizing other faces.
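One quick way to catch all four problems before deployment is to break accuracy down per group instead of trusting the overall average. The records below are invented for illustration; "correct" marks whether a hypothetical model's prediction matched the true label.

```python
# Per-group accuracy check: an even-looking overall number can hide a gap.
# The data is synthetic, made up for this sketch.

records = [
    {"group": "group_1", "correct": True},
    {"group": "group_1", "correct": True},
    {"group": "group_1", "correct": True},
    {"group": "group_1", "correct": False},
    {"group": "group_2", "correct": True},
    {"group": "group_2", "correct": False},
    {"group": "group_2", "correct": False},
    {"group": "group_2", "correct": False},
]

overall = sum(r["correct"] for r in records) / len(records)
print(f"Overall accuracy: {overall:.0%}")  # 50%

for g in ("group_1", "group_2"):
    rows = [r for r in records if r["group"] == g]
    acc = sum(r["correct"] for r in rows) / len(rows)
    print(f"{g}: {acc:.0%}")  # 75% vs 25%: the average hid the gap
```

If your dataset is 90% one group, the overall score is dominated by that group, and a disparity like this stays invisible until you split it out.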
Ask these questions:
✓ Who provided the training data? Does it represent everyone I want to help?
✓ Could this data have been collected in a biased way?
✓ Are there groups the data is missing or underrepresents?
✓ How will different groups be affected if the model is wrong?
fairmlbook.org has a full bias checklist.
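The last checklist question can be made measurable with a simple demographic parity check: compare how often the model outputs a positive decision (say, "approve the loan") for each group. The predictions below are invented for illustration; fairmlbook.org covers this metric and its limits in depth.

```python
# Demographic parity gap: difference in positive-decision rates per group.
# 1 = model said "yes", 0 = model said "no". Numbers are made up.

def positive_rate(predictions):
    """Fraction of cases where the model gave a positive decision."""
    return sum(predictions) / len(predictions)

group_a = [1, 1, 0, 1, 0, 1]  # model approved 4 of 6
group_b = [0, 1, 0, 0, 0, 1]  # model approved 2 of 6

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Approval rates: {positive_rate(group_a):.0%} vs {positive_rate(group_b):.0%}")
print(f"Demographic parity gap: {gap:.0%}")  # 33%: worth investigating
```

A large gap doesn't prove discrimination by itself, but it tells you exactly where to start asking the checklist questions above.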
How to spot fake videos and AI-generated content.
Quick explainer on GANs and deepfake technology. How someone can make a video of a politician saying things they never said.
Watch (5 min) VIDEO
Tell-tale signs: The eyes don't blink right. Teeth are blurry. Skin tone shifts. Ears don't move. Practical guide with examples.
Read GUIDE
As AI gets better, misinformation gets easier. What happens when anyone can create fake news? UNESCO's framework for fighting it.
Read ARTICLE
NewsGuard, Snopes, FullFact: sites that help you verify claims. Bookmark them!
Snopes TOOL
Where to build AI without coding.
Write Python code in your browser. Integrate with Kaggle datasets. No installation needed. Great for prototyping.
Go to Colab
Thousands of ready-to-use AI models. Want to classify text, generate images, translate languages? Start here.
Explore Models
Train an AI classifier by uploading classes of images. Export it to use in your own project.
Try It
Professional frameworks for building custom models. Steep learning curve but powerful.
TensorFlow | PyTorch
Real data to use in your projects.
Thousands of datasets on everything: sports, health, movies, climate, COVID-19. Perfect for school projects.
Browse DATASETS
Classic academic datasets. Great for classification, regression, clustering projects.
Browse DATASETS
Search for publicly available datasets on any topic. Find data on climate, poverty, education, health, etc.
Search SEARCH
Government & NGO datasets specific to Lebanon: traffic, economy, health, education.
Browse LOCAL
Accessible articles & papers on AI topics.
Uplink. What's changing & what jobs survive? Thoughtful, not doom-y.
Read →
Simple explanation of transformers and language models. No math needed.
Read →
Ethical concerns with DALL-E, Midjourney: copyright, environmental cost, cultural appropriation, authorship.
Read →
How AI systems can leak personal data. What companies know about you. How to protect yourself.
Read →
Training large AI models consumes enormous energy. Environmental impact. Green AI solutions.
Read →