11 Comments
Jonathan:

Great write-up, prof! You make an important point about the scalability of your solutions to AI. Another thing to consider is the self-selection bias for these courses.

By the time students reach 400-level courses at U of T, they tend to be already deeply invested in their academic interests and area specializations (not to mention more knowledgeable about university plagiarism policies). As you mentioned, these seminar classes are small, so individual effort tends to be more visible, impactful, and accountable. The kinds of students who take these classes thus already tend to put in more effort and avoid "shortcuts" like generative AI.

Contrast this with the average student in an intro course like POL100 or POL208. They tend to skew younger and may be taking the class only to fulfill a prerequisite, which by itself makes for less academic engagement. They might go into the course with the mentality of "I just want to get this done and over with." Combine that with larger class sizes and a generalized curriculum, and cheating becomes not only more tempting but also easier to get away with. Hand-written assignments are one way to address the issue, but I don't think it can truly be solved.

Anyways, just another thing to consider!

Seva Gunitsky:

Yeah, it makes a big difference that the courses are not required and there's a lot of self-selection at work, given the syllabus. But that's an old trick.

Porlock:

As a graduate of a place where classes were generally small, and courses were supposed to require serious thinking even at the first-year level, I think this is brilliant.

Vishal Sachdev:

Awesome idea to build a game. With no-code tools, we can actually expect students to build the game as part of the course.

Doug Sly:

I cannot help but think that, in avoiding AI, discursive writing skills have suffered. This sounds dystopian to me. My only suggestion: have them write essays, but then have them defend the essay verbally without it in front of them, using only their notes.

Porlock:

Interesting. My own plan for using AI is to let it see my quasi-random thoughts, then do diligent proofreading and illogic-sensing to produce something coherent. This is sort of how one might produce better essays. I don't know how it will turn out.

BJ “BeTheChange”:

That’s an interesting idea! And you can refine it along the way.

CyrLft:

This is interesting for me to read. And good for you for going at it clear-eyed, looking for engagement that would pay off in relevant, serious learning. 👍🏼

At the same time, I wonder if AIs may have played a larger part than you might have supposed. Coding? That's quickly becoming an AI specialty, from what I've read (I can't speak from direct experience). And game design? Ethan Mollick has ogled AI game-making, including games with social-theoretic themes, if I recall correctly on his blog this year, and maybe also in his 2024 book Co-Intelligence (I have the Audible version, so I can't easily search what I read there). https://bit.ly/MolliE-2024

Seva Gunitsky:

You're right, they probably used AI to write snippets etc. The coding was done externally, but I wouldn't have minded if they used AI. The real learning was having to select and then collaborate on policy outcomes. To be honest, I'm not looking to eliminate AI (it can't be done), only to find ways to use it well.

Porlock:

Hmmm. Sounds like the idea I noted in a comment here, before getting to this one in the FIFO list. Encouraging.

Dan:

Your politics of science fiction course sounds like so much fun to teach, or to take.

Expand full comment