AI Innovation Challenge ยท Week 2 of 4

Design Your
AI System.

You have an idea. Now you need to think like an AI engineer โ€” not just what it does, but how it works, what it needs, and what can go wrong.

๐Ÿง 

ML Architecture

What type of learning? What model? Why?

๐Ÿ“Š

Data & Bias

Training data sources, quality risks, bias mitigation

โš–๏ธ

Ethics & Limits

Who could be harmed? Where does it break?

Today's Session ยท 60 min

What We're Doing Today

0โ€“10 minIdea Brief feedback + approvalsQuick round โ€” what got approved, what needs rethinking
10โ€“25 minThe System Design FrameworkWalk through the 5 design questions every team must answer
25โ€“45 minTeam work sessionTeams fill in their System Design worksheet
45โ€“55 minHot seat: 2 teams share their designClass challenges their data & ethics thinking
55โ€“60 minPitch deck structure introWhat goes in your slides and how to structure them
Teacher: Return Idea Briefs with written feedback before this session. Teams who need to pivot get 5 min of your time at the start.
The Framework

5 Questions Every Team
Must Answer

System Design Questions
1What does it do? โ€” Describe your system's behaviour from a user's perspective in 2 sentences.
2How does it learn? โ€” What ML type, what model, what training process?
3What data does it need? โ€” Source, volume, format, and any label requirements.
4Where can it go wrong? โ€” Bias risks, failure cases, adversarial inputs, edge cases.
5What are its limits? โ€” Ethical risks, who it could harm, what it should never be used for.
Teacher: These 5 questions map directly onto the pitch rubric. Design well now = pitch well later.
How AI Systems Work

The Flow Every System Follows

๐Ÿ“ฅ Input

What data comes in? Text, image, sensor, user action?

โ†’

โš™๏ธ Model

What ML type? What's it been trained on?

โ†’

๐Ÿ“ค Output

Prediction, classification, generated content, recommendation?

โ†’

๐Ÿ” Feedback

How does the system improve over time?

๐Ÿ’ฌ Example โ€” walk through together
  • Spam filter: Input = email text โ†’ Model = text classifier (supervised ML) โ†’ Output = spam/not spam โ†’ Feedback = user corrections
  • Now apply this to your own idea. What's your Input โ†’ Model โ†’ Output โ†’ Feedback loop?
Teacher: Walk through 1 student idea using this framework before teams work independently.
Critical Thinking

Data, Bias, and
What Can Go Wrong

Step 1

Where's your data from?

Who collected it? When? Does it represent your actual users, or just some of them?

โš ๏ธ A dataset that doesn't represent your users will produce a biased model โ€” even if the training accuracy looks great.
Step 2

What biases could it carry?

Historical bias (reflects old injustices), sampling bias (certain groups missing), label bias (humans who labeled it had their own views).

โš ๏ธ Ask: whose data is overrepresented? Whose is missing? What were the labelers' assumptions?
Step 3

How do you mitigate it?

Diverse data collection, fairness metrics, human oversight, regular audits, transparency about limitations.

โš ๏ธ Saying "we'll just use good data" is not a mitigation strategy. Be specific.
Teacher: Push teams to go beyond "we'll make sure the data is unbiased." Ask for specifics.
AI Ethics

Is Your System Ethical?

Every AI system affects people. Your pitch must show you've thought this through.

Who could be harmed?

If your system makes a wrong prediction โ€” who suffers? Is that harm minor (wrong movie rec) or serious (wrong medical diagnosis)?

Is consent considered?

Does the user know their data is being used? Is there a way to opt out? What happens to their data?

Could it be misused?

Could a bad actor exploit your system? Could it be used for surveillance, discrimination, or manipulation?

Is there a human in the loop?

For high-stakes decisions (medical, legal, financial), should a human always have final say? How is that built in?

Teacher: Great discussion prompt: "What's the worst-case scenario if your system is wrong 10% of the time?"
Special Case

If Your System Uses
an LLM or GenAI

Many ideas will involve a chatbot, content generator, or language model. Teams using LLMs must address these specifically:

๐Ÿค”

Hallucinations

LLMs can confidently state false information. How does your system handle or flag this? What's the risk if a user trusts it?

๐Ÿ”’

Prompt Injection

Malicious users can try to trick your model with crafted inputs. Have you thought about how users might abuse your system?

๐ŸŒ

Cultural & Language Bias

Most LLMs are trained predominantly on English, Western data. How does this affect your use case, especially if aimed at Lebanon or Arabic speakers?

โ™ป๏ธ

Environmental Cost

Training and running large language models consumes significant energy. Is the benefit worth the cost? How might you minimise it?

Teacher: This applies to any team building a chatbot, writing tool, Q&A system, etc.
Activity ยท 10 min

The Hot Seat ๐Ÿ”ฅ

Two teams volunteer (or are chosen). They explain their system design. The class asks hard questions.

๐ŸŽฏ Challenge Questions for the Class
  • Where exactly does your training data come from?
  • What happens when the model is wrong โ€” who's affected?
  • Could your system discriminate against a specific group? How?
  • If I wanted to abuse your system, how would I do it?
  • Why does this specifically need ML โ€” not a simpler rule-based solution?
Teacher: Encourage respectful but tough questioning. This is exactly what judges will ask at the exhibition.
Building Your Pitch

Pitch Deck Structure

1

Slide 1 โ€” Hook: The Problem

Start with a story, statistic, or scenario. Make the audience feel the problem before you introduce your solution.

2

Slides 2โ€“3 โ€” The Solution + How it Works

What your system does + the ML type and process. Include a diagram, mockup, or user flow.

3

Slide 4 โ€” Data & Bias

Training data sources, known biases, and your mitigation approach. Show you've thought critically.

4

Slide 5 โ€” Ethics, Limits & Failures

Who could be harmed, failure cases, and what your system should never be used for.

5

Slide 6 โ€” Future Work

What you'd improve, expand, or research next. Shows ambition and depth of thinking.

Teacher: Remind teams: visuals > text on slides. Judges read fast. Clarity beats density.
Before Next Class

Your Week 2 Deliverables

๐Ÿ“‹

System Design Worksheet

Completed during today's class. All 5 design questions answered in detail.

๐Ÿ“Š

Pitch Deck Draft

6 slides. Rough is fine. Must cover all 6 required pitch elements. Bring to Week 3 for practice.

Week 3 is entirely dedicated to pitch rehearsal and feedback. Come prepared to present your full deck.

Teacher: Let teams know you'll do a quick deck review before Week 3 if they share their slides with you early.