Grades 10–11 | Week 2: Hands-On + Exam Prep

Week 2: Let's Get Hands-On

Testing AI, Finding Careers & Prepping for the Exam

50 Minutes | Activities + Career Exploration + Exam Prep

Week 1: Theory ✓
Week 2: Hands-On ← You are here
Week 3: Exam

Today's Plan

1. Quick Recap — what do we remember from Week 1? (5 min)

2. Activity 1: Break the AI — test a real LLM and document what you find (18 min)

3. Activity 2: AI Career Explorer — what jobs exist in AI and what skills do they need? (15 min)

4. Exam Prep — what to expect, what to focus on (10 min)

5. Debrief + Questions (2 min)

Quick Recap — Week 1

Class Question

Without looking at notes — who can explain what a hallucination is and why it happens?

Token — unit of text an LLM reads (~¾ word)

Training — data → learn patterns → fine-tune

Context window — short-term memory, resets each session

Hallucination — confident but false output, not lying
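The "~¾ word" rule of thumb for tokens turns into a quick back-of-envelope estimate. A minimal Python sketch, assuming the 0.75 words-per-token heuristic from the recap (real tokenizers vary by model and language):

```python
def estimate_tokens(text: str, words_per_token: float = 0.75) -> int:
    """Rough token estimate: one token is about 3/4 of an English word,
    so token count is roughly word count / 0.75."""
    word_count = len(text.split())
    return round(word_count / words_per_token)

print(estimate_tokens("LLMs read text as tokens, not words."))  # 7 words -> 9
```

This is only a classroom approximation; punctuation, long words, and non-English text all shift the real ratio.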

Activity 1: Break the AI 🔨

Your Mission

Use a real AI tool (ChatGPT, Claude, Gemini — whichever you have access to) and try to trigger the limitations we covered last week. Document everything.

Tools you can use: ChatGPT (chat.openai.com), Claude (claude.ai), Gemini (gemini.google.com)

You need a phone or laptop. Work in pairs if needed.

You have 18 minutes. Fill in the worksheet on the next slide as you go.

Your Worksheet 📋

Try each challenge. Write what happened — be specific.

1. Trigger a hallucination — Ask about a very specific obscure fact, a made-up person, or a recent event. Did it make something up confidently?

2. Test the knowledge cutoff — Ask about something that happened recently. What does it say? Does it admit it doesn't know, or does it guess?

3. Test an ambiguous prompt — Ask something vague like "Tell me about the big event." What assumptions does the AI make?

4. Bonus: Find something impressive — What does the AI do really well? Note one thing that genuinely surprised you.

What Did You Find? 🔍

Share Out — 3 volunteers

What was the most interesting thing you discovered? Did you successfully trigger a hallucination? What did it say?

Key Reflection

Notice how the AI sounds equally confident whether it's right or wrong. There's no "I'm not sure" signal built into how it generates text. That's what makes hallucinations dangerous in high-stakes contexts.

Real-world implication: If you're using AI for research, medical questions, legal questions, or news — you must verify what it tells you. Always.
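Why is there no "I'm not sure" signal? Because the model only picks a next token from a probability distribution over plausible continuations; nothing in that step checks facts. A toy sketch with invented probabilities (the prompt and the numbers are made up for illustration):

```python
import random

def next_token(distribution):
    """Pick the next token weighted by probability.
    Nothing here consults facts -- only which continuations are plausible."""
    tokens, probs = zip(*distribution.items())
    return random.choices(tokens, weights=probs, k=1)[0]

# Invented probabilities for the prompt "The capital of Australia is":
dist = {"Canberra": 0.6, "Sydney": 0.35, "Melbourne": 0.05}
print(next_token(dist))  # sometimes the wrong city, stated just as fluently
```

Whichever city comes out, it is printed with the same fluency. The "confidence" students hear in AI answers is a property of the prose, not of the underlying probabilities.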

Activity 2: AI Careers 🚀

Why This Matters Beyond School

AI isn't just a subject — it's reshaping almost every career field. Whether you go into medicine, law, design, business, or engineering, you will work alongside AI.

The goal of this activity isn't to push you into tech. It's to help you understand what skills are becoming valuable — and which ones AI can't replace.

Let's look at the landscape of AI-related jobs and what they actually require.

The AI Job Landscape

🧠 AI / ML Engineer

Builds and trains AI models. Heavy coding (Python), maths (statistics, linear algebra).

Avg salary: $120,000–$200,000+ USD/year

✍️ Prompt Engineer

Designs the instructions that make AI behave correctly. Needs clear writing + understanding of LLM behaviour. No CS degree required.

Avg salary: $80,000–$130,000 USD/year

🔍 AI Ethics / Policy Analyst

Ensures AI systems are fair, safe, and legal. Needs law, philosophy, social science background.

Avg salary: $70,000–$120,000 USD/year

More AI-Adjacent Careers

📊 Data Analyst / Data Scientist

Prepares and analyses training data. Makes decisions from AI outputs. Needs stats, SQL, Python.

Avg salary: $75,000–$130,000 USD/year

🎨 AI Product Designer (UX)

Designs interfaces for AI products — how the human and AI interact. Needs design thinking + understanding of AI limitations.

Avg salary: $85,000–$140,000 USD/year

🏥 AI in Your Field

Doctors, lawyers, journalists, teachers — every profession now needs people who can critically evaluate and use AI tools. Domain expertise + AI literacy = very valuable.

Commands a salary premium in any field

What Skills Do AI Careers Actually Need?

Technical Skills

Python programming

Statistics & probability

Data analysis (SQL, Excel)

Understanding of ML concepts

Human Skills AI Can't Replace

Critical thinking & fact-checking

Ethical judgment & values

Clear communication & writing

Creativity & original ideas

The most valuable people in AI combine domain expertise with the ability to critically evaluate what AI produces.

Your Career Reflection Task ✍️

Individual — 8 minutes

Answer these 3 questions honestly. There's no right answer — this is about thinking about your own future.

1. What field or career are you currently interested in? (Any field — doesn't have to be tech)

2. How do you think AI will change that field in the next 10 years? Name one specific change you can imagine.

3. What is one skill — technical or human — you want to develop that would make you more valuable in an AI-influenced world?

💡 Optional Bonus

Search for one real job listing in your field that mentions AI. What does it ask for?

What Did You Discover?

Share Out — 3–4 volunteers

What field are you interested in? What change do you expect AI to bring to it?

Key takeaway: You don't have to become a programmer to benefit from understanding AI. The people who will thrive are those who understand what AI can and can't do — and use that knowledge in their own field.

Worth knowing

The jobs AI is replacing fastest are built on routine, repetitive tasks. The jobs that are growing require human judgment, creativity, and the ability to work with AI tools critically.

Week 3: The Exam

Format: 15 multiple-choice questions

Covers: Everything from Weeks 1 & 2

What it tests: Understanding of concepts, not just vocabulary

What this means

You won't just be asked "what is a token?" You'll be asked to apply the concept — like "what happens when a context window is exceeded during a long tutoring session?"

What to Study — The Checklist

How LLMs Work

✓ What a token is and why it matters

✓ The 3-step training process

✓ What fine-tuning does

✓ What a context window is + what happens when exceeded

Why They Fail

✓ What hallucinations are + 3 causes

✓ Real-world examples of hallucinations

✓ Knowledge cutoff + no real-time info

✓ Bias in training data + why it happens

The Distinctions That Trip People Up

1. Hallucination ≠ lying — The model has no concept of truth. It predicts plausible text. That's it.

2. Pattern recognition ≠ understanding — It can write a poem about grief without experiencing grief. It doesn't know what grief means.

3. Context window ≠ long-term memory — The app might save your history. The model itself starts fresh every session.

4. More data ≠ no hallucinations — Ambiguous prompts and data gaps still cause hallucinations regardless of model size.
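The context-window distinction (and practice question Q2 below) can be sketched in a few lines of Python. This is a toy model with made-up numbers: it counts one word as one token and assumes the common "drop the oldest messages first" strategy, which is not the only one real systems use:

```python
def fit_to_context(messages, max_tokens=8):
    """Toy context window: when the token budget is exceeded,
    the oldest messages are dropped first."""
    def tokens(msg):
        return len(msg.split())  # crude demo: 1 word = 1 token
    kept = list(messages)
    while kept and sum(tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # drop the earliest message
    return kept

chat = ["Hi, my name is Lina", "Explain photosynthesis", "Now quiz me on it"]
print(fit_to_context(chat, max_tokens=8))
# The introduction is dropped -- the model no longer "knows" the name
```

Run it and the first message, the one containing the student's name, is exactly what falls out of the window: the mechanism behind a chatbot "forgetting" your name mid-session.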

Practice Questions

Try these — discuss with a partner

Q1: A student asks an AI for 5 sources on climate change. The AI provides 5 citations that look real but don't exist. Which limitation caused this?

→ Hallucination — pattern matching without understanding (generates plausible-looking citations)

Q2: A chatbot forgets the student's name halfway through a tutoring session even though they introduced themselves at the start. Why?

→ Context window exceeded — the early part of the conversation was dropped

Q3: An LLM gives better legal advice in English than in Arabic. What is the most likely reason?

→ Bias from training data — far more English legal content exists in the training data

Two Weeks in One Slide 🎯

The Theory (Week 1)

Tokens — unit of text (~¾ word)

Training — data → patterns → fine-tune

Context window — short-term memory with hard limit

Hallucinations — 3 causes, not lying

Bias, cutoff, no understanding

The Practice (Week 2)

Tested a real LLM and documented its failures

Identified how to trigger hallucinations

Explored AI career paths and required skills

Reflected on how AI will affect your own future field

Good luck next week. 💜

Week 3 is the MCQ exam.

Study your handout. Review the key distinctions. Get some sleep.

The exam tests understanding — not just memory.