Flagged
A Slow AI game
You are an assistant professor. Your university has run twelve student submissions through an AI detection tool. Each comes back with a probability score. You must decide: flag the submission for investigation, or pass it as acceptable.
You can view each student's file before deciding. You do not have to.
Every flag lands on a real person.
Keyboard: P pass, F flag, V view file
What actually happened:
Review Complete
Innocent students you flagged
AI use you missed
What this game is really about
AI detection does not work reliably. But even if it did, the problem would remain. Every flag initiates a process that treats a student as suspect. That process has consequences: stress, delays, lost scholarships, damaged trust.
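The unreliability is partly simple base-rate arithmetic. A sketch with hypothetical numbers (the sensitivity, false-positive rate, and prevalence below are assumptions for illustration, not figures from any real detector): even a detector that sounds accurate will direct a large share of its flags at innocent students when genuine AI misuse is uncommon.

```python
# Illustrative base-rate arithmetic. All three numbers are assumptions:
# suppose the detector catches 90% of AI-assisted work (sensitivity),
# wrongly flags 5% of honest work (false-positive rate),
# and 10% of submissions actually involve disallowed AI use (prevalence).
sensitivity = 0.90
false_positive_rate = 0.05
prevalence = 0.10

# Bayes' rule: of all flagged submissions, what fraction are true hits?
true_flags = sensitivity * prevalence            # AI-assisted and flagged
false_flags = false_positive_rate * (1 - prevalence)  # honest but flagged
ppv = true_flags / (true_flags + false_flags)    # positive predictive value

print(f"Flags that hit actual AI use: {ppv:.0%}")        # 67%
print(f"Flags that hit innocent students: {1 - ppv:.0%}")  # 33%
```

Under these assumed numbers, one flag in three lands on an honest student, and each of those flags still triggers the full investigative process.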
The students most likely to be falsely flagged are non-native English speakers, neurodivergent students, and those who use permitted assistive technology. The students most likely to evade detection are the most sophisticated users. Detection tools punish the vulnerable and miss the rest.
The alternative is not to do nothing. It is to design assessments that make the question irrelevant. Oral examinations, reflective portfolios, process-based assessment, and authentic tasks all evaluate understanding without requiring surveillance.
That is what critical AI literacy looks like in practice.