A Writing College That Doesn’t Trust Writing
Managing Features Editor Ava Nair ’28 dissects Amherst’s anxiety over AI, arguing that the rush to bluebooks exposes how easily the “writing college” sacrifices slow thinking for control.
I submitted a midterm paper a few weeks ago for my “Narratives of Suffering” course. It was ten pages long and far from enjoyable to write. It was slow, frustrating, and at times genuinely exhausting. I kept stopping — reworking paragraphs, rereading extraordinarily painful passages, trying to write sentences precise enough to hold what I was trying to argue. The paper required close attention not just to the texts but also to the language I used to analyze them. It was also work I cared about, and the kind that deserved that care. It is incredibly difficult to imagine that any version of that paper could have been written in 75 minutes.
Recent reporting in The Student and conversations with peers and friends have made clear that many professors are navigating artificial intelligence (AI) in very different — and often conflicting — ways. In response to concerns about AI usage, professors in departments such as English; architectural studies; law, jurisprudence, and social thought; and political science are replacing take-home assignments that once unfolded over days with timed in-class exams. That reasoning is understandable, but the changes emerging from it are not neutral. They do not simply alter format; they redefine what writing is allowed to be.
What is especially frustrating is that these shifts are being made preemptively, in response to a real but, in many ways, structurally unsolvable problem. AI use is difficult to detect and even more difficult to prove. Policies built to prevent it therefore operate on very little certainty while carrying very high consequences, altering the conditions of learning for every student, regardless of whether they use AI at all.
In many humanities classrooms, writing is not just about presenting an argument. It is about forming an argument. I arrive at what I think through the process of writing itself — through trying (and failing) to make my language hold ideas that do not come easily to me. When that process is compressed into a fixed block of time, writing becomes less about critical thinking and more about producing.
This is not to suggest that timed in-class exams have no place at Amherst or even that they don’t belong in the humanities. In some contexts and departments, they make sense — and in courses that have always been structured around that format, they reflect a particular type of retention that students are expected to demonstrate. What is more difficult to account for are courses that have recently shifted toward timed assessments after previously not doing so, courses where the reasoning is “because of AI.” That change suggests a different set of priorities, where writing is being constrained — not because bluebook exams are a pedagogically better assessment, but because they are easier to monitor.
At its core, the shift toward timed assessments raises a question about educational priorities: Should we concern ourselves more with the possibility of AI misuse, or with creating the conditions that allow students to think, to struggle, and to arrive at something that feels like their own?
For me, that struggle is the entire point. I am a humanities student and an English major because it is the kind of work I need and the kind I thrive in. I have taken timed exams, standardized tests, and quantitative courses where my answers were judged as right or wrong. While I could engage in that mode of learning, it was always misaligned with how I learned best. That experience is why I, and I suspect many students here, chose to study at a place like Amherst College — because the open curriculum seemed to allow us to prioritize the kind of intellectual work that suits us best. For that reason, classes that have recently moved toward in-class exams after previously not doing so feel like a departure from the kind of learning I was led to believe Amherst valued. I chose to study in a setting where I could develop ideas, not produce them on demand.
I understand that professors are being forced to navigate an extremely difficult situation, and I do not think these choices are being made lightly. But they are still choices. And they are choices that prioritize avoiding risk over preserving the conditions that allow students to form a genuine relationship with what they produce.
Ultimately, a more controlled environment does not produce more meaningful learning; it only ensures that everyone is subjected to the same constraints. But making conditions the same for everyone does not necessarily make them better for anyone. It is unlikely that someone who writes an entire essay using ChatGPT is confused about what they are giving up in their learning, and taking an exam in a bluebook will not suddenly make that student want to properly engage with their coursework.
There is no easy solution to AI misuse — and I certainly don’t have one. But I do think there are responses that do not require collapsing writing into a speed-based exercise. A greater emphasis on process — for example, requiring students to document how their thinking develops across drafts — can help build trust between students and professors. More specific or creative essay prompts could also make generic, AI-generated responses less viable. These approaches do not eliminate the problem, but they offer ways to engage with it without redefining writing itself.
The question is not whether AI misuse should be taken seriously. It should. But the irony is difficult to ignore. At a time when information is more accessible than it’s ever been, when a question can be answered by Google in under two seconds, writing is one of the last places where thinking unfolds slowly enough to feel like it is actually yours. In trying to guard against AI, we are placing that process under conditions that make it increasingly difficult to sustain at all.
An institution like Amherst College is built on the assumption that its students will go on to do meaningful and complex work — across disciplines and fields in ways that cannot be fully anticipated. But if that is true, then the ability to think deeply and carefully, and to work through ideas using language, is a foundational skill. So it is worth asking why the response to AI usage is shaped by an assumption that students will avoid doing the work rather than by the expectation that they will engage with it. Why is a place that so vehemently insists on its identity as “the writing college,” in practice, reorganizing itself around a diminished version of what writing is allowed to be?