Boom, Doom, or Draw
A Post-Mortem on My AI and Theology Course
This fall, I taught an upper-level course on theology and artificial intelligence for our Honors program. I wanted to teach the course partly because it fits within my research interests in theology and technology and partly because I wanted to spend time thinking with good students about the current place of AI in education and culture.
If you’d like, you can view the syllabus here.
In the process of designing the course, I went round and round on what I wanted to do with the students in class and what I wanted them to do outside of class. The new AI era has “disrupted” this distinction a bit, such that some of the things I would typically have them do on their own (reading and annotating, writing papers) can now be simulated into assessable artifacts instead. In addition, I wanted neither to avoid the use of AI entirely nor to pretend to be a technical advisor on the various types of AI available and how best to use them.
Now that the semester is over, I’d like to set out what I did for the course, in case that would be helpful to anyone else, and to reflect on what worked, what didn’t, and what I’d do differently if I do teach it again.
Course Themes
I organized the course around three main themes: theological anthropology, human relationships, and Catholic Social Teaching. In the first block, we focused especially on Noreen Herzfeld’s The Artifice of Intelligence, which is built on a Barthian approach to the human person. We explored questions of intelligence, agency, responsibility, love, and embodiment, all with a recurring emphasis on the idea of humans being made in the image of God.
The second block focused on our relationships with other people, with creation, and with God. In this period we read sections from the Encountering Artificial Intelligence book put out by Matthew Gaudet, Noreen Herzfeld, Paul Scherz, and Jordan Joseph Wales. In this portion, there were two readings that really elicited strong responses from students:
A news article on a man named Travis who has both a human wife and an AI wife: this elicited a significant amount of horror, but it also prompted a lot of reflection among students on why they found it problematic and whether it was just something they didn’t like for themselves or something they thought was bad for people generally.
The “Techno-Optimist Manifesto” by Marc Andreessen, which lays out his hopes for how technology will radically transform humanity to the point that essentially everything will cost nothing for everyone and all the other problems of the world will be solved. Students overwhelmingly described it as “culty.”
Finally, following many of the latter sections of Antiqua et Nova, we looked at how the principles of Catholic Social Teaching might guide thinking about AI and education, labor, healthcare, and warfare. The main focus here was getting students to work through the four pillars of CST (human dignity, the common good, solidarity, and subsidiarity) as a framework for assessing the role of AI in our lives and society.
Regarding the themes and readings:
What worked: students generally responded well to the Herzfeld text, but even more so to the various news articles I incorporated. These were often referenced in later class sessions and clearly stayed with the students. Students also returned frequently to the idea of the image of God and their own wrestling with it, long after we had finished the section on theological anthropology.
What didn’t: I had originally planned on more “lab days” of doing AI-related things in class, but it was quite difficult for me to figure out how to do this. At least one of them ended up being canceled due to illness on my part as well, although the idea for that ultimately turned into the topic of the second essay. The most helpful one was the first one, where students worked to develop an AI/Technology policy for the course, one they largely respected the rest of the semester.
What I’d do differently: I’d almost certainly change the two textbooks. They’re both very good in their own ways, but a difficulty with AI and academic publishing is the timeline. Herzfeld was finishing her book right as ChatGPT became public, and so there are already ways in which it feels dated. Plus, for my own sake, I’d like to read new things. I’m not entirely sure I’d go with an actual book this time anyway, but I’m also not set on what I’d replace it with.
Activities and Assignments
In figuring out what to have students do, both in class and for assignments, I wanted to do some things that were AI-resistant and some things that allowed for AI incorporation.
The AI-Resistant Assignments and Activities
Notebook journals: at the end of every class, students wrote in a composition notebook for about three minutes on everything they learned that day, which I usually followed up with a further question for them to answer in a couple of sentences (early on I referred to these as “brain dumps”; later they were just “journals”). I collected the notebooks at the end of each class and returned them at the beginning of the next with a brief comment or feedback (sometimes longer, depending on what they wrote). I bought 20 composition notebooks during the back-to-school sale because they were 38 cents each.
Harkness seminars: students sorted themselves into groups of three (five groups total), and each group had a day of class when they were responsible for facilitating a Harkness discussion on the text for that day. As a group they were responsible for developing discussion questions, and each individual had a primary task (facilitation, note-taking, mapping). Although they could (and many surely did) use AI in preparation for the Harkness, they were on the hot seat in class in a way that still pressed them to work without it.
Oral final exam: in oral exams I tell students to expect 2-3 big questions built around major themes from the course. I don’t generally provide the questions in advance or create a study guide, but I do ask students what they think might be on it or how they might guess what it will be. In practice, I always have one question that I ask every student in exactly the same way, and I have a second question adapted to the work that particular student did in the semester (most commonly it draws on one of their essays or papers). These are always supplemented by numerous follow-up questions as I try to push them on their thinking and their grasp of the material. I block off 30 minutes for each student: 20 for the exam itself, 10 for me to write up my feedback and grade (they only receive this after everyone has had their exam, not while they are in the room with me).1
For the oral exam, the two questions I asked this time were:
In light of all that you have read and discussed in the course this semester, tell me what you think Saint Leo should do about AI in the next four years with respect to students, coursework, college life and experience, athletics, etc.
Based on what you know to currently be possible with AI, whatever you have done with AI yourself, and what you think might be possible in the near future with AI, do you think AI is generally positive or generally negative for human flourishing?
I’ll say more about what came out in the oral exams at the end of this post.
AI-Integrated Assignments
I had two essay assignments that allowed for the integration of AI. In both, the students were to write an 800-1000 word essay on a prompt and, if they used AI in any form to help with the paper, they had to submit the transcript of that use (here are the assignments for Essay 1 and Essay 2).
In both cases, about half the students submitted a transcript and half didn’t. Of those who didn’t, about half included a statement in the essay or in the submission folder that they did not use AI. In general, there were only one or two students in each case I was suspicious about, but not enough that I pressed on it.
My impression from the students who submitted transcripts was that the majority of them were using AI to (a) improve particular paragraphs or the thesis statement from what they’d written or (b) find quotations from the readings that fit their overall position in the essay. I don’t mind (a), but I don’t love (b).
For the second essay, there was one student whose transcript was 26 pages long in PDF form. To be honest, it was pretty eye-opening for me: even in a context where the essay was in a real way generated by ChatGPT (and so still a problem in my view), there was significant evidence of critical thinking on the part of the student. He clearly had a position going into his AI process, gave a lot of direction and feedback, made decisions among available options, etc. It was not an example of “chat and paste,” although it also wasn’t what I typically look for students to do. The positive thing for me was seeing a transcript of essentially real-time decision making by a student, which is not something a standard essay can typically show. It was also done by an overall excellent and engaged student and went far beyond what I think typical AI usage in college looks like.
What Worked
Notebooks: the journal notebooks were excellent. The students said the notebooks were more useful to them and forced them to think more than typing answers would have. I think I also came to have a better and more sustained understanding of each student through consistent review of what they wrote. The notebooks were also great for having students periodically do brainstorming, think-pair-share, or other activities during class. I have typically used the app Socrative for exit tickets in classes, but I’m planning to switch to the notebooks in all the courses where I use it.
The oral exams: I have really come to like oral exams, whether at midterm or final, and that proved to be the case here as well. It’s fairly clear in an oral exam who has prepared, thought through some issues, and can do their own thinking about ideas. Oral exams also give students opportunities to ask questions, to clarify their positions, and to challenge the professor. They are time-consuming, but when feasible they are entirely worth it.
The essays: overall I was very happy with these. The essays themselves demonstrated the same typical range of thoughtfulness and effort that college essays usually do. I learned a lot about student use of AI from what they put in transcripts. I fully recognize some may have used AI in ways I don’t think they should, or may have shared incomplete transcripts. But I also think allowing for it with some accountability via transcripts helped many of them be more honest about their usage, which I think is valuable. I’m less certain I would do this in a course where AI was not itself part of the subject.
What Didn’t
Honestly, I’d keep all the assessments I used and would use them again if I teach the AI and Theology course another time. I have previously done Harkness discussions and oral exams, and I would use them in other courses as well.
What I’d Do Differently
Harkness: the main change I would make here is a lot more work preparing the students for how to run these discussions well. The biggest challenges students had with the Harkness were (1) how to craft a good discussion question and (2) how to facilitate a discussion that doesn’t get anchored by one student. The problem they ran into with questions was that they asked a lot of “the author says x, do you agree/is that true?” questions. Close-ended questions lead to close-ended discussions. Further, in many ways, (2) flowed from (1), as it was hard to establish a rhythm. So I need to do more early in the semester to help students craft questions.
Closing Reflections
To close, there are a handful of things that struck me over the course of the semester or that were especially brought forward in the oral exams.
Students are much more ambivalent about artificial intelligence than I think many people realize. All of my students said they had used AI for coursework prior to my course. Many of them, however, indicated doing so out of anxiety or poor time management. Most of them expressed judgment of their friends, roommates, or teammates who used AI to “chat and paste” their way through college. The uses of AI they found helpful were generating quizzes to help them study, getting feedback on completed assignments against an available rubric, and other “tutoring”-type uses. They mostly felt a sense of inevitability when it comes to AI.
I didn’t do any kind of scientific survey on their attitudes, so this is admittedly anecdotal. But as many universities, including mine, pivot hard into AI integration, we need to maintain a critical eye both on what students need (in a fuller sense of formation, not only in the job sense) and on what students want. I also think we need to be more discriminating in response to AI than straightforward boomerism or doomerism.
Students have an overwhelmingly “instrumental” view of artificial intelligence and of technology in general. Frequently throughout the semester they would talk about AI as an option to use or not use, one they were generally quite confident they could make good choices about. There was no sense that they were susceptible to the affordances of design, that the regular use of technologies habituates them to certain uses, or that acquiring such habits might make other skills decline. Overall, they had no real sense of an “ecological” view of technology, of the way technology (a) changes cultures and (b) places pressures and constraints on how people act. I should have done more to make this case for them during the semester, so that’s definitely something I’ll address in the future.
The single most impactful topic I saw throughout the semester was the potential effect of AI on labor and their future careers. This was the subject of the second essay, but students raised it from early in the semester. There was a sizable range between optimism about how AI might aid them in their work and fear that it might undermine or even eliminate that work. For many, the primary purpose of college is job preparation, so this is an understandable concern.
However, something that helped loosen the exclusive focus on jobs a little was their reading of John Paul II’s Laborem Exercens, especially his distinction between the “subjective” and “objective” dimensions of work. Both in the second essay and in the oral exams, numerous students were able to articulate the idea that work has dignity and meaning in and of itself, that there is a moral component to our work, and that ultimately the purpose of work is the flourishing of humans. Setting artificial intelligence into the “objective” dimension helped many of them navigate their hopes and fears about AI.
Will I teach the course again? Several students said I should. The university wants more AI courses. I also have my own ambivalence about it. If nothing else, I think the course gave students an opportunity to reflect critically on the AI world they’re living in, and that’s worthwhile. So we’ll see what happens.
Krista Dalton has a great writeup on her approach to oral exams, in case you’re interested in that.

