Like all professors, I’ve spent the last two years watching AI creep into student work. I was lucky to get a head start on my colleagues, since I worked as a red teamer for OpenAI before the GPT-4 release. That was an early eye-opener. By 2022 I had started to experiment with podcasts, group work, and other “non-traditional” assignments to avoid the inevitable.
A couple of months ago I finished a semester where I think I temporarily “defeated” AI in the classroom. Not in the sense that students don’t use it — I’m sure they do — but in the sense that it’s not yet a substitute for completing the assignments and getting a good grade. Here’s my method; tell me if I’m being naive.
The first thing to admit as an instructor is that you cannot spend time “catching” AI. Except in a few obvious cases, it’s too easy to disguise. You don’t have the time, the tools (the detectors are bullshit), or the institutional backing. More importantly, you shouldn’t have to turn your class into a surveillance state. So early on, I decided that instead of trying to catch AI-generated text, I would redesign my courses to make it harder for students to use AI in the first place. The result was two of the most engaged and (judging by evaluations) highly rated classes I’ve had in years.
I taught two senior-year seminars: one on the Soviet collapse, another on the global politics of science fiction. (Syllabi here and here.) The Soviet class is usually heavy on readings and discussion. But this time, I added a semester-long collaborative assignment: students had to build a video game. Specifically, they designed a turn-based simulation — inspired by this open-source game about Weimar Germany — in which the player takes on the role of Gorbachev in 1985 and tries to avoid the collapse of the USSR.
The class was split into design teams, each one tackling a different game element—foreign policy, domestic reforms, advisor mechanics, external events, and so on. It was a big task. (The game design document is here if you’re interested.) They researched the details of obscure policies, argued about the meaning of “political capital”, invented stats for Gorbachev’s inner circle, and struggled with how to turn the coup attempt into a branching narrative.
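For readers curious how mechanics like these fit together, here’s a minimal sketch of what a single turn might look like in code. Everything in it (the names, the numbers, the card effects) is my own illustration, not taken from our design document:

```python
# A hypothetical sketch of one game turn, not the class's actual implementation.
from dataclasses import dataclass
import random

@dataclass
class PolicyCard:
    name: str
    cost: int       # political capital required to play the card
    stability: int  # effect on regime stability
    reform: int     # effect on reform progress

@dataclass
class GameState:
    year: int = 1985
    political_capital: int = 10
    stability: int = 50
    reform: int = 0

def play_turn(state: GameState, chosen: list[PolicyCard]) -> None:
    """Apply the player's affordable policy choices, then a random external shock."""
    for card in chosen:
        if card.cost <= state.political_capital:
            state.political_capital -= card.cost
            state.stability += card.stability
            state.reform += card.reform
    # External events (a Chernobyl, an oil-price crash) erode stability each turn.
    state.stability -= random.randint(0, 5)
    state.year += 1

state = GameState()
play_turn(state, [PolicyCard("Glasnost", cost=3, stability=-2, reform=5)])
print(state)  # e.g. GameState(year=1986, political_capital=7, stability=44, reform=5)
```

The pedagogically interesting part was never the code itself; it was the arguments over what the numbers should be.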
The result was exactly what I’d hoped: messy, confusing, surprisingly rigorous, and pretty much un-AI-able. AI could help with a paragraph of background text or a policy card blurb, but it couldn’t create the logic, collaboration, or interpretive decisions required. Students couldn’t cheat because there was no answer to cheat on. You had to build the game together.
We did not come close to finishing the game. (You can play the first few months here.) I don’t care. The students had fun, I had fun, and the next time I teach the class we can pick it up and keep going all the way to 1991.
In the sci-fi course, we focused on in-class workshops and creative writing. Students had to write a short story that incorporated a concept from international relations theory—realism, constructivism, the security dilemma, whatever—into a speculative world. Their story had to dramatize the concept in a way that was both true to the theory and narratively coherent. We spent two sessions workshopping drafts together, and the final week was a small-group peer review session.
AI is not helpless here—it’s just not good enough yet. It can spit out a generic sci-fi story but struggles to integrate complex theoretical ideas in a way that’s subtle and consistent. It can be done, but it requires working with AI — tuning responses, thinking of prompts, editing and re-editing — instead of using AI as a ready-made substitute.
That’s probably the core nudge at the heart of current course design: are we making it easier for students to use AI as a tool, or as a substitute? Because they will use it regardless. As a tool, AI can sharpen thinking, brainstorm objections, or suggest angles the student hadn’t considered. And while I’m sure most students used AI to generate ideas and text for their stories, the final product required real thought, real choices, and real discussion. It also made classes more interesting. The nice thing about these redesigns is that they don’t just avoid AI; they actually improve the pedagogy.
That said, I know this model can’t scale easily. Both courses were small—23 students each. When I teach the 500-person Intro course next year, it’s Bluebook Time. And I had assistance. For the Soviet collapse course, the department funded a computer science undergrad to help with the game coding. The assignment would not have been possible without her help, and that kind of support is a luxury many professors don’t have. So even if these sorts of assignments offer a path around AI, it’s a narrow one.
But still. AI is all professors complain about now. It’s boring to listen to. Remember that brief but intense outrage over the fake summer book list? That’s what professors deal with every day, en masse. At stake here is not just cheating but the total erosion of trust inside the university. It’s a problem we’re still figuring out. But this semester was a useful reminder: if AI is making your assignments worse, it’s easier to change the assignments than to change how students use AI.
Great write-up, prof! I think you make an important point about the scalability of your solutions to AI. Another thing to consider is the self-selection bias in these courses.
By the time students reach 400-level courses at U of T, they tend to already be deeply invested in their academic interests and area specializations (not to mention more knowledgeable about university plagiarism policies). Like you mentioned, these seminar classes are small, so individual effort tends to be more visible, impactful, and accountable. Thus, the kinds of students who take these classes already tend to put in more effort and avoid "shortcuts" like generative AI.
Contrast this with your average student in an Intro course like POL100 or POL208. They tend to skew younger and may only be taking the class to fulfill a prerequisite, which by itself lends itself to less academic engagement. They might be going into the course with the mentality of "I just want to get this done and over with". When you combine that with larger class sizes and a generalized course curriculum, cheating is not only more tempting but also easier to get away with. Hand-written assignments are one way to address this issue, but I don't think the problem can truly be solved.
Anyways, just another thing to consider!
Awesome idea to build a game. With no-code tools, we can actually expect students to build the game as part of the course.