Letter to the Editor: The Opinion Section has an AI Problem
In a Letter to the Editor, Max Froomkin ’28 examines patterns in The Student’s opinion section that raise questions about the growing use of AI-generated prose.
A year ago, for April Fools’ Day, The Student announced its transition to AI-written articles because writers are “tired of the simple fact that we have to do the work.” Wouldn’t it be funny if, right around then, The Student started actually publishing AI-written articles?
It did.
AI use is notoriously hard to prove, but I’ll do my best to make my case. I took 1,359 Amherst Student opinion articles published from the start of the 2011-2012 academic year until April 8, 2026, and put them into an AI detection tool called Pangram. Pangram classified each article as being written by a human, a language model, or a mix of both.
What I Found
Until the end of 2022, when ChatGPT was introduced, Pangram didn’t mark a single one of 997 articles as containing AI-generated prose. In other words, Pangram is good at avoiding false positives on opinion pieces.
The first article Pangram flagged was published six months after ChatGPT was released. But AI use in The Student didn’t really ramp up until this year, when Pangram marked over 30% of opinion section articles as containing AI. 12% were marked as fully AI-written.

In addition to a classification, Pangram estimates the likelihood (from 0% to 100%) that AI prose was used in each article.
This plot is especially illustrative. Each dot is an article, and its height is that likelihood. Notice that only three of the 997 articles before ChatGPT’s release were assigned a probability over 10% of containing AI-generated prose. Since then, 93 have been, or 25.7% of all opinion pieces published in that span.

For a second opinion, I checked results with another tool called GPTZero, which Business Insider considers the best free AI-detection software. For GPTZero, I only went back to Fall 2019.
I found the same pattern. Not a single false positive before ChatGPT was released, and very little flagging until this year. For 2025-2026, though, GPTZero classifies 17% of articles as containing AI-generated prose.

GPTZero also provides a likelihood each article was human-written, completely AI-written, or a mix of both. I combined the probabilities for completely AI-written and mixed in this plot, so that each article’s height represents the probability it contains AI prose.

I’m mindful not to name names; as James Taranto warns in the WSJ, “to accuse [writers] of passing AI-generated work as their own is potentially defamatory.” But even those critical of using Pangram admit that “Pangram’s aggregate findings are credible.” That is, if Pangram says AI use is increasingly prevalent across the opinion section, it probably is.
(Similar analyses on the news and features sections gave more inspiring results. According to Pangram, not one of 373 news articles written since ChatGPT’s release likely contained AI. Yay, news! 11 of 355 features articles likely did, a modest number compared to opinion, but still not ideal. This seems to be a mostly opinion-specific phenomenon.)
This is Bad
I don’t want to suggest that the authors of these articles are completely uninvolved in their writing — I’m sure their pieces are based on their thoughts — but they’re letting AI do too much. Opinion writers should articulate their ideas themselves.
In 2023, writing on the use of AI at Amherst, the Editorial Board said, “The value in human writing is not only the capacity for creativity and originality but the time and effort put in: the hours spent struggling with the material, discussing ideas with others, and synthesizing those ideas to form an argument.”
The Student should be a place for that kind of human writing. It should not be a place for opinions their authors can’t be bothered to express themselves.