Teaching and Trust in the Age of Artificial Intelligence
As the role of AI remains an open question in higher education, Senior Managing Editor Mira Wilde ’28 and Staff Writer Caroline Paluska ’29 spoke with five professors about their thinking about, and use of, the technology in the classroom.
As Caroline and I entered the always-hectic add-drop period last semester — with Caroline in her first year and me in my second — we began to notice a trend among our predominantly humanities schedules: changes, mentions, and cautions in our syllabi focused entirely on artificial intelligence (AI) use in class. Some of our classes were assigning shorter out-of-class writing assignments or opting for in-class exams. Surprisingly, no professor’s policy on AI read the same: some were more lenient while many opted for an outright ban.
While this article began as an inquiry into different approaches to testing and examinations among Amherst professors, it has morphed into a snapshot of what five Amherst professors are thinking about AI. Over the past two semesters, we reached out to 14 professors, of whom five agreed to be interviewed. This low response rate could have several explanations, but the most common was that professors felt they didn’t have much to say about how their departments and the college were dealing with AI. What was clear to us was the lack of a unified vision from the college on AI in classrooms, especially in humanities courses — were professors just as lost as students?
AI has swiftly taken over discussions of education worldwide, especially at colleges and universities. At Amherst, students, staff, and faculty were recently given access to three new AI services courtesy of the college; in 2024, the Provost’s Annual Retreat on Teaching and Learning was dedicated entirely to thinking about generative AI and the liberal arts. But these changes have not escaped criticism, with some students encouraging more careful institutional thought about AI and calling out its corrosive effects on the morals of the liberal arts.
Pedagogy, AI, and the Classroom
While there was no singular approach to AI among professors — personally, in classrooms, or at Amherst — we found that among the faculty we spoke to, there was a general acceptance that AI was not something that could be ignored, or simply mentioned once during the syllabus read-through and then neglected.
Two of the professors we interviewed, Senior Lecturer in English Benigno Sanchez-Eppler and Henry S. Poler ’59 Presidential Teaching Professor of Music Klara Moricz, demonstrated a tension between professors’ concern over AI’s capabilities and an interest in experimenting with the tool. Sanchez-Eppler described himself as someone who has been tinkering with computers, scripts, and coding since the technologies became available to him, so his engagement with AI feels like a natural continuation of this trend. “My major permission for using this stuff [is] actually defining it like a really fancy toy, and not taking myself very seriously with what I do with it,” Sanchez-Eppler said.
Moricz completely understood this approach to AI, but didn’t adopt the method herself, sharing that “[for] some people, especially those who grew up with [technology], their instinct is to integrate new technology into teaching.” But for her, her instinct is “to exclude technology from teaching and create a different space. And it is good that we are different.”
Professors’ differing approaches to AI could explain why the college has not yet announced a uniform AI policy in the Student Code of Conduct. In an email correspondence with The Student, Assistant Professor of Anthropology Victoria Nguyen shared that “right now there is no single script for detection or enforcement, and that is probably appropriate given disciplinary differences.”
Almost all the professors interviewed said AI had impacted their classrooms in one way or another, with some deliberately introducing the tool while others set up structures to block its use.
Nguyen shared that she began experimenting with AI in some of her classes because “it felt disingenuous to ignore a tool that students were apparently already navigating and grappling with on their own.” For certain assignments, Nguyen had students assess and grade AI-generated work, and she hosted discussions on citation practices and the differences between using AI for brainstorming, structural organization, or editing. To Nguyen, engaging with AI was challenging, as it required a careful balance between encouraging its use and demonstrating its weaknesses. “My goal was to provide basic AI literacy and to highlight where human judgment and critical reasoning remain really indispensable in our work. And most students very quickly recognized the limits of these systems,” she shared.
For other professors, the potential consequences of classroom AI use weren’t worth the experimentation. Professor of Sexuality, Women’s and Gender Studies Krupa Shandilya shared that she is giving students blue book exams for the first time in 15 years. This decision wasn’t an easy one for Shandilya, but to her, there was no winning in the effort to assess students with AI as a tool in their back pocket. While she noted that there are benefits to blue book exams, such as having to build arguments in a systematic and coherent fashion on the spot, Shandilya also recognized that “[one] of the cons [of blue book exams] is that you cannot sit with the idea. You have to have it there and then.”
Moricz similarly adopted a non-engagement approach to AI. While she recognized that AI education is necessary, especially in college, Moricz remained concerned with AI’s effects on her teaching. “In this way, I think learning about AI and what to use it for and what not to use it for should be a part of education,” she said. “But for the humanities especially, I think it’s very important to keep away from it and engage in, as I say, age-old methods of communication.”
Some of the professors we interviewed used AI for their students’ benefit, while others used it to support their own instruction. Sanchez-Eppler — whose classes revolve around complex configurations of student groups and Google Drive folders for sharing about reading and writing — has used Gemini to write a script that creates the grouping folders for him. This help has been indispensable for Sanchez-Eppler. “Creating those folders took me three days before ... Once I learned to do the scripts, and to load the scripts, and debug the scripts, I had those things done in 20 minutes,” he said.
Sanchez-Eppler also employs AI for content-related purposes. He uses AI to generate tabulations on various concepts related to his classes, including definitions, short biographies of the developers of ideas, and timelines of theory development. “I print it, hand it out, acknowledge it, and whatever I have handed out does have an emphasis [that] ‘this is not complete,’” he said. One of the main tenets of Sanchez-Eppler’s classes is experimentation with learning. To him, using AI in a classroom was part of this pedagogical approach: “Any use that I do in class, it’s prefaced with this business of ‘we’re playing.’”
On the other hand, James J. Grosfeld Professor of Law, Jurisprudence and Social Thought Lawrence Douglas — who frequently uses blue book exams in his classes — has not changed many of his testing approaches, including not backing down from assigning take-home essays. “In my experience, there has been very little discrepancy in the past between how the students perform in [an in-class] setting and how they do in a take-home setting,” Douglas said.
Effect on Trust Between Professors and Students
Trust is a critical foundation of any effective learning environment. Its relevance is amplified by Amherst’s adherence to the liberal arts educational framework, where professors and students regularly interact in one-on-one and small-group settings, fostering mutual respect and easily accessible channels of communication. Many professors expressed concern about how AI could erode this well-established trust between students and professors, the very thing that arguably makes Amherst Amherst.
To Nguyen, this concern about trust is a two-way street between students and their professors. “Just as students may worry about fairness or transparency in assignments, faculty are also navigating questions about how AI might be used in grading or feedback, which introduces its own set of ethical considerations,” she said. For Nguyen, the core of a student-professor relationship is engagement. As student work and professor feedback become increasingly digitized, the risk of interference or infiltration by non-human technologies, such as generative AI, grows. “If students are outsourcing writing and faculty are outsourcing evaluation, we have to ask what remains of the pedagogical relationship,” she said.
Moricz spoke from personal experience about how AI tests this relationship and can harm both the professor’s and the student’s perceptions of each other. Explaining her recent transition to in-class essays, she recounted frequently encountering strange passages in her students’ take-home work that she couldn’t help suspecting were AI-generated, with no reliable way to confirm or dispel the suspicion. This became the driving force behind her revised approach to essays: “That’s [what] I hate about it: It made me paranoid. For me, the idea that I didn’t trust my own students was an awful feeling. That’s when I decided I was unwilling to deal with AI.”
Trust, for some professors, has been left to be found in the smallest gestures — the ones that reveal a student stands behind the words on the page. “Now I even appreciate grammatical mistakes, because I feel that’s real and there is a grammatical mistake which I can correct and from which the student will learn,” Moricz said.
Despite grim concerns about AI, professors still expressed wholehearted faith in their students’ character. “I believe my students are truly engaged in an effort to develop their skill set independent of this [technology],” Douglas remarked. Shandilya reaffirmed this perspective: “I absolutely trust our students to be honest and to have integrity when they’re submitting their work.”
For many, the concerns stem not from students’ character but from the unprecedented access to — and prevalence of — powerful generative technologies. In fact, Moricz sympathized with students about the allure of AI: “When the tool is there, people will use it. I think you have to be incredibly principled to say that you will never touch it.” Shandilya’s primary concern was likewise not students’ ill intent but the “slippery slope” of AI’s use, where “you’re not quite sure where to draw the line between what the AI has said and then what [a student’s] thought is.” The countless ways AI can appear in academic work (e.g., brainstorming, drafting, revising) are nearly impossible to detect, raising complex questions about how to define and regulate permissible uses of AI in the classroom.
The questions and concerns raised by student AI use are far from easily answered. Certain professors, however, identified central principles of education that remain unfailingly true. Douglas, addressing the variety of faculty approaches, suggested that “whatever kind of pedagogy one embraces, it is meant to be something that encourages the development of skills, independent from reliance on these technologies. The technology may ultimately be a helpful device, but it really has to come from the student.”
To Nguyen, the complex effects of AI can and should be used by faculty to encourage more honest relationships with students based on a mutual understanding of classroom educational goals. “AI can, of course, strain that commitment [to truth], but it can also prompt more explicit conversations about what we owe one another in the classroom and what meaningful academic work requires,” Nguyen said.
Erosion of Learning
While professors varied in their selected approaches to AI in the classroom, they all agreed that AI poses a threat to the core of Amherst’s liberal arts promise: a curiosity-driven education. Professors understand that one of the main promises of AI is that it will save time for students. “It’s not like the students are bad or that they want to cheat, but they are under pressure. Everybody is busy. And if you can make this shortcut, and it’s so easy, then you would do it,” Moricz said.
Nguyen echoed this point, sharing that she thinks “much of AI’s appeal lies in its promise to save time,” but she also cited this as a main consequence of AI use. “In higher education, though, especially in the social sciences and humanities, slowness itself is often a virtue,” she said. “Careful reading, sustained argument, revision, reflection, these aren’t necessarily hyper-efficient processes, but they are central to a liberal arts education that posits that grappling with complexity or the things that most challenge us has an intrinsic value.”
Shandilya also expressed concern about students’ capacity to deliberate, challenge, and develop their own ideas. “I think the fact is that we are living with AI and there’s no way of getting away from it. But as best as possible, we don’t want to lose our abilities to think ... my primary concern is to challenge students to formulate opinions and ideas which are independent of something they can take off of the internet,” Shandilya said.
To Nguyen, this potential to erode learning forces Amherst to look inward and reevaluate the institution’s intrinsic educational values. “AI has generated this kind of reckoning where we have to start distinguishing between the performance of knowledge and the actual development of understanding,” Nguyen said. This reckoning requires the college to reflect on its priorities. “Are we credentialing, training, cultivating judgment, forming citizens, fostering intellectual independence, or something else?” Nguyen asked.
To answer these value-based questions, Moricz believes that preserving the traditional liberal arts elements of an Amherst education amid AI is the most important task. “What we have in the college [is] not real life. We are doing this fantastic thing where we are sitting in the classroom and talking and writing and thinking, and I think that’s why it has to be cherished,” Moricz said.
To Douglas, developing critical thinking, writing, and processing skills doesn’t have to be incompatible with AI use. While Douglas believes that tools like AI can help enhance these skills, he is aware that AI may also limit their development. “The danger, which is something that Amherst and all colleges need to deal with, is students start relying on [generative AI] before they’ve developed those skills,” said Douglas.
Sanchez-Eppler, who consistently emphasizes the interrelated nature of reading and writing to his students, worries about AI’s ability to harm students’ reading. To Sanchez-Eppler, writing can only be improved by reading, not by an increase in productivity. He shared a concern about AI reducing the incentive to learn by providing shortcuts for students. “Don’t use it to write and don’t let it poison your capacity to read,” Sanchez-Eppler warned students.
For Moricz, this problem goes beyond just institutional responses and academic pressures on students, and instead might reveal a larger issue of student confidence. “It is critical for you to believe that you can do it. If you are using ChatGPT, then you don’t believe that you can do it. That’s no longer an education.”
Future of AI at Amherst
The future of AI in Amherst classrooms is still uncertain, as all professors interviewed acknowledged the complex ethical questions AI raises. Sanchez-Eppler is hesitant to heed the calls to ban or limit AI on campus, encouraging experimentation with the technology before writing off its possibilities. But part of the problem with unregulated use, as Shandilya pointed out, is the increasingly omnipresent nature of AI. “AI is very insidious because before you know it, you’re using it without knowing that you’re using it. That’s the danger,” she said.
Moricz intentionally took steps to make her class a space free from the temptation of AI, explaining, “I think it is incredibly important to maintain and concentrate and fight for the space in which AI is not operational and we can be ourselves, our thoughts, our sentences, our grammatical mistakes, our creativity.” Amherst College is a precious space, and Moricz, like all the professors we spoke to, is concerned with preserving the humans at the heart of it.
A major institutional question then becomes one of balance: how to accomplish this preservation while simultaneously creating space for new and rapidly evolving technologies. Moricz herself acknowledges this, noting “one shouldn’t be rigid about it. AI is so much [a] part of life that I think there are spaces in your college environment where you, in fact, must learn about it and learn how to use it responsibly. It shouldn’t be banned, no, but it should be used responsibly.”
Ultimately, the quality of the solution Amherst and its community members arrive at will come down to the quality of the communication. “I don’t think the answer is going to be more surveillance or suspicion,” Nguyen said. “Instead, [there] might be more clarity about our shared commitments. Education depends on a mutual recognition that the point is not just about efficiency but intellectual, even moral, growth.”
The uncertain climate around Amherst’s AI policy has raised students’ concerns about convoluted expectations and inconsistent enforcement. One student, Association of Amherst Students Senator Daniel Fleer ’26, is attempting to use his student government position to clarify and shape AI policy at the college.
Based on his understanding, Fleer shared his sense that faculty are working together to develop a more coherent approach to AI. Fleer specifically emphasized that “instead of defaulting to professors handling cases within their own classes,” there has been a shifting interest “towards the broader and more official disciplinary process that exists and that’s outlined in the code of conduct.”
Within the senate, Fleer is attempting to build on the faculty’s initial movement to establish more official structures to address AI, going beyond immediate responses to consider the future of AI in the liberal arts. “We’re trying to get the faculty to move, to think a bit more collectively and thoughtfully about how AI is going to figure into our curriculum long term,” Fleer said.
The emergence of AI challenges not only how we read and write, but how we understand the purpose of higher education itself. Amherst College has weathered technological upheavals before, remaining a pillar of academic diligence for over two centuries. The value of an Amherst education has persisted through inventions from typewriters to the internet, and is unlikely to diminish with AI. As Shandilya remarked, “you are getting a Bachelor’s of Arts from Amherst College, not ChatGPT.”