When Cam Grey of Classics walked into the first session of the CETLI Seminar "Using AI Productively in Your Teaching," he thought to himself, "I'm going to hate this."
It was Fall 2024, and generative AI was raising new questions about student learning. Grey was wary of the way students engaged with AI tools. First, there were his concerns about learning in his discipline: Classics is a notoriously old field, with ample examples of outdated and even problematic scholarship. Much of that material is publicly available and has been used to train large language models, so when students turn to ChatGPT or similar tools to generate essays or support their learning, the results can be misleading. Beyond his discipline, Grey worried about how to protect the process of learning when writing, the work of thinking through ideas and then expressing them in language, is central to what students do in class. But Grey also knew AI wasn't going anywhere, and he felt compelled to "face [his] fear" and better understand its role in his teaching going forward.
On the first day of the seminar, Grey recalled sitting around the table with the other members of the inaugural AI cohort (faculty from SAS, Design, Law, Nursing, Wharton, and SEAS) as Rachel Hoke and Cathy Turner introduced the first exercise.
They were asked to write a recipe and then prompt ChatGPT to do the same. As a group, they evaluated the results. The experienced chefs in the room had far more to critique (notes on the ingredient list, cook times, oven temperatures), and the group began to see that the tool's effectiveness depended on the expertise a user brought to it.
For Grey, this was a refreshing way to think about AI, and he was energized by the conversations with his peers. "When you're in a conversation with smart people who are thinking about this stuff in thoughtful ways," he said, "it stimulates all kinds of ideas."