Lesson Learned: Don't Use AI in Sensitive Situations
The Office of Equity, Diversity, and Inclusion (EDI) at Peabody College, Vanderbilt University, used ChatGPT to generate an email about the Michigan State University campus shooting, and it was not received well. The story illustrates accountability (administrators taking responsibility for the mistake), but also failures of compassion in a time of tragedy and of integrity (consistency between stated values and actions).
The email referred to “shootings” in the plural, which is inaccurate. Otherwise, it reads like boilerplate, but not much different from the typical emails a campus community receives in these situations. Compare that email to one sent by the vice provost and dean of students, which sounds more emotional but is still conventional.
Perhaps the only giveaway was a line at the bottom:
(“Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023.”)
On the one hand, I admire the writers’ honesty: they did what faculty increasingly ask students to do, identifying whether and how they used AI in their writing. But of course, the choice to use AI at all here reflects poor judgment.
Student backlash was swift and fierce. Using words like “disgusting” and “sick and twisted,” students called on administrators to “Do more. Do anything. And lead us into a better future with genuine, human empathy, not a robot.” A senior said, “Would they do this also for the death of a student, faculty, or staff member? Automating messages on grief and crisis is the most on-the-nose, explicit recognition that we as students are more customers than a community to the Vanderbilt administration. The fact it’s from the office of EDI might be the cherry on top.”
University officials responded quickly. In a follow-up email to students, an EDI dean wrote, “While we believe in the message of inclusivity expressed in the email, using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College. As with all new technologies that affect higher education, this moment gives us all an opportunity to reflect on what we know and what we still must learn about AI.” Could ChatGPT have written that too?
This is a precarious time for universities, as faculty grapple with how to use AI tools and which policies best serve students and academic goals. Using AI as a starting point for such a sensitive message may never be acceptable, and it is certainly too soon now. Faculty will have a hard time enforcing AI policies if they themselves use the tools in ways that contradict the spirit of their own guidelines.