Last week I was lucky enough to be on a panel about ChatGPT and theological education that was hosted by the Catholic Theological Society of America.¹ This post is a lightly edited version of what I presented then, supplemented by sources that informed my thoughts. The panel also included the great theologians David Turnbloom of the University of Portland and Anne Michelle Carpenter of Saint Mary’s College of California (and soon the Danforth Chair in Theological Studies at Saint Louis University).
The panel was followed by breakout room discussions and then a general Q&A that delved further into pedagogical questions and concerns about practical strategies, student well-being, and the broader intersections of theology and artificial intelligence. I hope to get into these other questions in subsequent posts.
Good evening, thanks everyone for joining us, and thanks Dave for starting us off.
To start off, ChatGPT is a potentially useful tool for theological students and educators. However, its overreliance can pose a risk to the development of critical thinking skills and originality in student work. There is also a concern about ChatGPT perpetuating biases present in its training data, which could negatively impact marginalized groups in the theological community.²
Second, as an overall question for thinking about artificial intelligence and higher education, there’s the underlying question of what we understand the purpose of higher ed to be. Why are we teaching our students, or what are we teaching them for? The answer to this animates a lot of responses to the advent of ChatGPT. In my own reading of the discourse around higher ed and artificial intelligence, most reactions fall into three general buckets.
Academic Integrity, aka Discipline and Punish
The first bucket is something like “how do we catch and/or punish the use of AI?”, or (more charitably) the academic integrity bucket. So is use of ChatGPT plagiarism, or cheating, or something else that needs to be monitored, forbidden, caught, or sanctioned? Where is the line, if there is one, between acceptable and unacceptable use of AI in assignments, in class, in drafting emails, or in writing letters of recommendation? I add that last one because the issues in this bucket are not restricted to our students, but apply also in various ways to our colleagues, our administrators, and ourselves.
I suspect some of the punitive impulse comes out of the same satisfaction some teachers get from detecting plagiarism, and some comes out of the inertia of being comfortable with how one already teaches and not wanting to adapt to a world with fairly effective artificial intelligence.
But positively, I think the impulse also comes out of some sense of what we are trying to do in teaching theology. We think the substance of what we teach is worth knowing, and the skills and practices that go with it are worth developing. If artificial intelligence can create a reasonable facsimile of what a critically thinking human would do with the same prompts, some proportion of students will take that more efficient path toward the grade (even if only a B or C) rather than the harder path toward understanding.
Assignment Revision and Elimination
This feeds into a second bucket, the assignment revision and elimination approach. I design and teach online courses pretty regularly, as I suspect many of us do. At my institution, these always include online discussion boards with a fairly straightforward, AI-answerable set of prompts. Students and professors alike widely dislike these discussion boards, but they satisfy certain expectations of our accrediting body and are, again, a path of least resistance. Many of these prompts, and the prompts for our summary essays, our book reviews, and our midterm exams, can now be answered by ChatGPT. In fact, the first paragraph of what I said earlier was generated by ChatGPT (because why not?).
In some cases, this prompts trying other types of assignments or activities, like oral exams or in-class presentations, which do have their place. Probably some of the common assignments professors use now could stand to disappear. I think it is just good pedagogical practice for us to regularly review the types of activities and assignments we use and consider how well they help students learn what we think they ought to learn.
Engaging AI in Teaching
The third bucket builds on these other two, focusing on how to integrate or engage artificial intelligence in one’s teaching. I’ve heard a fair amount about this from my colleagues in other departments. In particular, both an economics professor and a political science professor have noted that ChatGPT makes it easier for students to write scripts for statistics programs. Learning the software itself is not part of the course, so AI assists students with a non-essential task and frees up time to focus on more central concerns.
I’ve also seen comparisons to the incorporation of the calculator into math classes. From my own experience as an undergraduate math major, I think it is fair to argue that students who understand, more or less, the underlying math are much more successful at using calculators as an aid, while those who are struggling often compound their errors, or fail to catch them, when using the calculator. If AI is taken as a tool to support and supplement learning, it could be great; the problem is when it replaces that learning.
Then again, the variability and adaptability of large language models make them a bit different from the calculator, which has a much more restricted use. Yet I often hear some version of “jobs in the coming decade will integrate AI in various ways, so it is good for students to learn to use it now so that they are competitive for those jobs.” That’s probably true in a lot of disciplines, but I’m less convinced of it with respect to theology or most of the careers theology usually feeds people into. And I’ll be honest, I haven’t yet figured out good ways to incorporate or engage AI in my classroom. I am also a bit hesitant to bring it up because (frankly) I don’t want to give students the idea of using it for assignments, and, in truth, some of my thinking about AI and education is still in the “discipline and punish” bucket.
A Brief Return to the Purpose of Higher Ed
To close, let me circle back to my framing question from earlier: what is the purpose of higher ed? Bret Devereaux’s recent opinion essay in the New York Times makes the case that college is more than job training; it is also about forming people to be good citizens of society. He notes societal opposition to funding the liberal arts and humanities, and the perception that business-oriented majors lead to more financially lucrative, or at least stable, careers. He quotes a 1947 report from a presidential commission advocating for education that would “give to the student the values, attitudes, knowledge and skills that will equip him to live rightly and well in a free society.” This includes, even presumes, the future employability of the student, but goes beyond it to forming good and thoughtful citizens.
As liberal arts institutions, Catholic colleges and universities (and the discipline of theology itself) should, I think, attend both to the “employability” aspect of education and to the “good citizen” aspect. But I think they also have the responsibility to foster their students’ attention to the transcendent, the highest good, what is most fundamental in life. A Catholic vision of the human sees the person as more than a worker, more than a citizen: most essentially, as a person made in the image and likeness of God, possessing an inherent, inalienable dignity because of that.
If Catholic higher education is to pursue all three of these purposes, our response to AI, to its promise and its peril, ought to be framed in light of all of them, not only the techno-economic “employability” dimension.
¹ Special thanks to Mary Jane Ponyik, Mary Kate Holman, Elyse Raby, and Christina Astorga for organizing this panel.
² Not to spoil it, but this first paragraph was generated by ChatGPT, as I note later in the post.