The spring semester, and with it academic year 2022-23, has mercifully come to an end. Yet as one semester ends, preparation (even in small ways) for the next one begins. Plus, to be honest, I teach summer school every year, so it’s not like I’m ever really done.
I had mentioned in my previous post on ChatGPT and theological education that I was thinking about practical strategies for dealing with AI in my courses. What I’d like to do in this post is note some things that I did pedagogically this last year as far as assignments go, as well as some of the things I’m considering trying next year. Here’s an outline in case you want to look at some ideas and skip others:
Things I Have Done
Oral Exams (assignment)
Hand-written Midterm (assignment)
Exit Tickets (activity)
Things I Am Considering Doing
The Apostle ChatGPT (activity)
How to Write Well (or Poorly) (activity)
The Student Grader (activity)
Things I Have Done
Oral Exams (assignment)
Last fall I taught a course that is essentially a very basic introduction to the Bible. Long before ChatGPT was on any teacher’s radar, I had decided to do the course final as an oral exam. My setup was that each student was asked the same basic set of questions:
What did you learn this semester?
What was your experience of practicing silence like?
How would you explain the four senses of scripture?
How would you use these four senses to talk about some specific passage of scripture (which was always a passage they had written about in a previous paper)?
Why did you pick the passages you did for that paper?
The oral exams were 15-20 minutes for each student, and I scheduled them every half hour (the remaining 10-15 minutes I used to fill out the rubric and give feedback).
Again, I had decided on this before AI was a big question, but it turned out to have the benefit of being very difficult to complete with AI. It also provided a good opportunity to see what had stuck or been communicated well, and what was less clear or even downright confusing.
The downside is that scheduling was a bit of a pain, and oral exams can be time-consuming. I only had 25 or so students in the class, and that was the only section I taught, so I can imagine this being much more unwieldy with more sections or more students.
Hand-written Midterm (assignment)
I’ve team-taught an honors course each spring for the last three years. We originally had a take-home midterm exam featuring six questions, of which the students selected three to answer. This semester, specifically because of uncertainty around AI, we shifted it to an in-class, blue-book exam (with appropriate accommodations where needed). This meant adjusting some of the questions as well as expectations (e.g., no one was expected to include quotations). We also allowed all students to bring a single sheet of notes, front and back, to use on the exam.
Positively, we had little to no concern about AI or other kinds of cheating. Moreover, much of the students’ critical thinking was engaged in the real work of preparing their “cheat sheet”; generally, people who did well seemed to have worked hard at organizing a good one.
Negatively, it had all the standard issues with anything handwritten (difficulty of writing, difficulty of reading). Students who had accommodations were able to arrange time with the testing center at school and use their computers when appropriate, but this was a hassle for some of them. I suspect college students today do far less handwritten work, so it was probably more difficult for them than the same exam would have been for me as a freshman in 1999-2000.
Exit Tickets (activity)
I’ll write another post soon on how I use exit tickets in my courses, but briefly: I use the app Socrative, which has an exit ticket with two standard questions (on a scale of A to D, how well did you understand today’s material? and what did you learn today?) and then space for a third, instructor-written question. They’re done on phones or computers, so a student could use AI to generate the text. However, the questions are always pretty specific to course discussions, and it’s overall less effort to write responses to the questions than to figure out a useful prompt for AI.
The exit tickets have been really helpful in my teaching overall, as they give me real-time updates on how students perceive their own understanding, what they are picking up and/or misunderstanding in class, and how they write and think. It’s obviously an incomplete picture of their style, but I think it helps a lot with giving them useful feedback when we do get to assignments that actually have some stakes for them.
Things I Am Considering Doing
The Apostle ChatGPT (activity)
One thing we often cover in scripture courses is the genre of a text: the way it is written and the concerns that seem to crop up in it. The letters of St. Paul in the New Testament are especially important here, as he is the biblical figure with the most texts attributed to him.
Since ChatGPT responds fairly well to prompts like “write xyz in the style of abc,” I can imagine, after spending some time talking about the letters of Paul (such as reviewing 1 Thessalonians or Philemon), using ChatGPT in class to generate a brief “Pauline” epistle on some subject (maybe, as in a classic improv class, taking a suggestion from the students). From there, the pedagogical task would be to have students evaluate what it generated. Does it sound Pauline? Why? What features does it have, and which does it lack?
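If you wanted to prepare a few of these epistles ahead of class rather than generate them live in the ChatGPT interface, the same idea can be scripted. Here’s a minimal sketch assuming the OpenAI Python client and an API key in the environment; the model name, topic, and prompt wording are placeholders of mine, not anything I’ve actually tested in class:

```python
# A minimal sketch, assuming the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY set in the environment. The model, topic, and
# prompt wording below are placeholders, not tested class material.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

topic = "whether believers should eat food offered to idols"  # e.g., a student suggestion

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=[
        {
            "role": "user",
            "content": (
                "Write a brief epistle in the style of the Apostle Paul on "
                f"the question of {topic}. Include a greeting, a "
                "thanksgiving, a body of argument, and a closing benediction."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

In class, typing the same prompt straight into ChatGPT accomplishes the same thing; the script is only useful if you want several examples ready in advance.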
Moreover, doing this might provide a fruitful lead-in to the question about the undisputed, disputed, and pastoral letters of Paul. (It might also be a giant failure - teaching involves risks!)
I think there are plenty of variants on this that one can imagine (Nicene canons! Articles of the Summa!). You can also imagine a similar version with religious art and DALL-E (generate an icon of St. Paul in the style of the Simpsons or something).
How to Write Well (or Poorly) (activity)
I already do a version of this sometimes with students. I’ll ask them, essentially, “what are the things that students do in papers that are terrible, or bs, or padding?” And they give me a lot of the golden oldies, like “From the dawn of time” openings or finagling fonts/margins/character spacing to extend the length. It’s pretty easy to assemble a list of what not to do, which helps them figure out what to do.
Similarly, students often have to write short, paragraph-ish responses for me - sometimes for graded quizzes, sometimes for in-class activities. I then share some of them (always anonymous) and ask the students to critique them: what does this response do well? What could it do better?
I can see myself, in a class where we are talking about an upcoming writing assignment, using some of these same kinds of insights, but with ChatGPT. It can generate responses quite quickly, which we could critique live on any day of class. I generally think of ChatGPT as a BS machine [link], so having students evaluate what is basically BS might be a useful exercise for them.
The risk, of course, is students coming to see ChatGPT as a resource for producing acceptable (i.e., C-quality) work with no real effort. Hopefully this exercise will show them how much better they can do, and how much more they can learn, with just a small, reasonable amount of effort.
The Student Grader (activity)
In the honors course I team-teach, we talked a bit about the midterm prompts we had used in previous years and the responses ChatGPT would generate for them (this was a big driver in moving to the in-person, handwritten midterm discussed above). But something else this suggests as an in-class activity is the “student grader” move: provide an AI-generated response to a prompt you might use for the class and then have students grade it.¹
In general, I’ve seen students respond pretty well to peer review opportunities. The main questions I usually ask in this vein are “what does this response do well?” and “what could this response do better?”, and by and large students have good and thoughtful answers. With the student grader activity, they’re essentially doing the same kind of thing. It can be done individually, in pairs, or in groups. One could provide multiple responses to the same prompt for comparative evaluation (see the sketch below). If you’ve already got a rubric for that assignment, you can provide it to students and have them grade with it.
The goal, of course, is to get students to identify the important ideas, arguments, and sources that should be in the response. In this activity they aren’t finding those by generating them themselves but by evaluating whether an AI answer has included them.
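As for getting multiple responses for that kind of comparison: the chat completions API can return several independent answers to a single prompt in one call. Here’s a minimal sketch, again assuming the OpenAI Python client; the exam prompt and model name are placeholder assumptions of mine:

```python
# A minimal sketch for generating several AI answers to one exam prompt so
# students can grade them side by side. Assumes the OpenAI Python client
# and an OPENAI_API_KEY; the prompt and model name are placeholders.
from openai import OpenAI

client = OpenAI()

exam_prompt = "Explain the four senses of scripture, with an example passage."

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    n=3,                  # ask for three independent responses
    temperature=1.0,      # keep some variety between them
    messages=[{"role": "user", "content": exam_prompt}],
)

# Print each candidate answer with a label for the handout.
for i, choice in enumerate(completion.choices, start=1):
    print(f"--- Response {i} ---")
    print(choice.message.content)
```

Pasting two or three of these into a handout, alongside the rubric, would be just about all the prep the activity needs.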
Conclusion
Obviously, these are just some ideas. I’ll probably keep doing oral and hand-written exams in specific circumstances, but I don’t think I’ll be exclusive about it. They’re basically old-school techniques that were quite common at least up to my high school and college years, and there’s wisdom in bringing them back in the right circumstances. I’ll definitely keep doing exit tickets; they’ve honestly been a game changer in my teaching.
I have no idea how the ideas that I haven’t tried yet will go, but if I have an interesting experience or something useful to share in the future, I’ll post again on them.
Obviously, feel free to use/steal anything in here (just let me know how they go!). And I’m always interested in new ideas, so feel free to drop any you have in the comments here.
¹ Note: I’m adapting this from a suggestion my fellow honors co-teacher, Frank Orlando, made at a ChatGPT panel we were on together.