There’s No Such Thing as Optional AI
Staff Writer Max Feigelson ’27 confronts Amherst’s adoption of AI tools in the classroom, arguing that the administration’s insistence on “optional” usage masks a deeper moral abdication, revealing how convenience and corporate pressures are reshaping the liberal arts.
In his email announcing the arrival of college-sponsored artificial intelligence (AI) tools, Amherst’s Vice President for Information Technology and Chief Information Officer David Hamilton wrote in a bolded line of text, “the choice to use these tools is optional.” Hamilton wrote that the presence of AI in the classroom introduces a whole suite of ethical questions related to “academic integrity, labor, equity, privacy, and the nature of intellectual work,” and he wanted to clarify that the school’s official sponsorship of AI technology wouldn’t mean anyone is forced to integrate these tools into their lives. Clearly, David Hamilton, Provost and Dean of the Faculty Martha Umphrey, Vice President for Student Affairs and Dean of Students Angie Tissi-Gassoway, and the other administrators who approved this change in Amherst’s digital infrastructure policy haven’t seen what AI is already doing in the classroom.
If you go to nearly any college lecture hall in America (or the world), you’ll see hundreds of laptops open to some sort of AI tool doing some sort of thing that is irrelevant to the matter at hand. It’s become normal for students to generate summaries of readings they don’t have the patience to complete and ask computers for discussion questions aimed at fooling the professor into thinking that they did their homework. It’s also become normal for professors to grade students with AI, thereby creating a horrifying, inescapable, mutually destructive cycle in which students submit AI-written papers to professors who grade with AI tools. Amherst wants to hold an “AI Day of Learning” on Friday, after which they might as well invite me to guest lecture on “Drunk Flamethrowing 101.”
Hamilton’s message was designed for Luddites and cynics alike. He repeated that the new tools are “optional” three times in one email because he wished to assuage fears by assuring us that our liberal arts education will remain liberal in the sense that we won’t be forced to do anything we don’t want to do. This is the most squalid and bare-bones form of liberalism one can imagine, the kind that validates all self-interest in the name of personal autonomy; the kind that divides a community of thousands into thousands of solipsistic independent gears in a directionless machine. Personally, I was never going to use AI regardless of school policy because I believe tuition is too high for me to entrust my moral and intellectual growth to a data center in Texas. Nobody is worried that the school will force us to use AI. The worry is that we already attend an institution poisoned by AI laziness, and rather than take a stand against what nearly every professor knows to be a threat, the administration opted to join the enemy. No, I’m not forced to use AI, but I oppose it all the same, because it will shape my community whether I use it or not.
Rather than blindly charge into the eternal night of new technology, we can think about whether it’s a good idea to introduce that technology into our lives. A decent test is to ask, “to what question is this technology the answer?” If one cannot come up with a question that was pressing before the technology was invented, then the technology probably isn’t worth introducing. Before Amherst College sponsored the introduction of AI technology in the classroom, administrators should have asked themselves, “to what question is NotebookLM the answer?” I’ve yet to hear an answer to this question that justifies AI.
We might ask, do students struggle to take notes? Maybe some have trouble with their handwriting and a computer is necessary to help them keep up. If they don’t have trouble with handwriting and still find notetaking difficult, maybe they need to be taught how to sort between important and unimportant information just like students have been taught for centuries. Maybe this is what those people who work in the Strategic Learning Center and the Writing Center are paid to do. Maybe the ability to distill large amounts of information into coherent and complete records is an important skill they’ll use until they die.
Are students too busy, and might NotebookLM save them time? Maybe they should take on fewer responsibilities or learn to manage their time better. Otherwise, isn’t the point of college to be immersed in an intellectual environment and learn, perhaps for the first time, that in order to finish good and interesting work you may need to get used to being uncomfortable? Aren’t the world’s problems much more difficult than anything offered in college? Isn’t it our job to begin the work of fixing the world’s problems?
Do students have a hard time understanding the reading? Maybe for some, English is their second language and they need more time to understand the material and perhaps some assistance from the professor outside of class. Otherwise, have we considered that everyone has a hard time understanding some readings and the point is to study hard until you understand them? Have we considered that it’s okay for a chapter to take two, three, or six hours to read and that we’ll be smarter for spending the time to read it? And when’s the last time we saw someone read a chapter three or four times before attending class?
I’m sure there are some questions which are best answered by AI. When the professor isn’t around, it can be good to ask a computer about the thinking behind certain steps in a mathematical proof or the grammatical rule employed in a phrase of a foreign language. But three years ago, if I showed up to class having read the reading, discussed that reading with some questions I’d written down beforehand, then wrote an essay about the whole experience, would anyone say that my experience could have been improved by having AI skip one of those steps for me? Did Plato’s Academy provide less of an education because it didn’t have NotebookLM? Were the Chinese imperial exams invalid tests because students were unable to generate summaries in place of memorizing the words of Confucius? The answer to all these questions is a categorical no. On the bumpy road towards intelligence and kindness, civic decency and courage, there is simply no replacement for rigor.
Common administrative rhetoric concerning AI argues that students should be trained to “properly use” the technology of the future. Even those outside of education feel that they are forced to integrate AI into their lives on the premise that “ChatGPT is already more powerful than any human who has ever lived,” as OpenAI CEO Sam Altman wrote on his blog, likely smirking to himself. The argument sounds innocent; who could argue against “proper use”? But while this argument sounds as if it involves giving technology due consideration, the truth is that under its terms we could justify everything from plastic bottles to chemical weapons. The premise that lends this argument its ubiquitous acceptance is a myth that arises whenever new technologies are brought to the market: the myth of technological inevitability.
Technological inevitability is the fallacious idea that, regardless of whether it’s a good idea to introduce a new technology, powerful and popular technologies will always come to dominate industry and culture. It’s tempting to believe in the myth because history is full of examples of technologies being adopted by the general public without genuine consideration of their negative effects. The stories of the invention of the newspaper, the telegraph, and the iPhone can all be read as proof of Luddites’ and cynics’ impotence in the face of technological inevitability. It can be easy to forget that sentiments like “the future is here to stay” are deeply cynical, nearly nihilistic, about the nature of human action. Alternatively, we can remember Marshall McLuhan’s slogan: “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.”
History is also full of examples of actors who possessed the intellectual perspective and moral courage to stand up against the myth of inevitability. The problem is that in comparison with the stories of the most successful and profitable tech CEOs, the stories of those brave people who resisted technology are undramatic. Yes, many of these acts of resistance ended in failure. But there are also many cases in which attempts to profit at humanity’s expense were rebuffed by those who stood up for the common good.
For example, Robert Moses was right in his claim that highways are an effective way to move people across vast distances. But were it not for Jane Jacobs and the activists who fought against his plan to bulldoze through the middle of residential Manhattan for a highway, we would never have the walkable New York City we love today. These activists understood that the world would be changed by highways, and yet they forced their government to carve out an exception to the market that rewards greed and endless efficiency when they designated certain areas as “neighborhoods.” We can accept that although it’s easier to go along with the tide than to stand up for something, it’s also possible to stand up for things we know to be right and win.
Another example from the near past: From 1950 to 1980, the world used more than 40,000 tons of dichlorodiphenyltrichloroethane (DDT) for all sorts of industrial uses, including disease control, aerosols, candles, and, most notably, pesticides. The chemical was an effective tool for farmers worldwide and played a big role in the elimination of malaria from three continents. But when Rachel Carson, now widely credited as the founder of the modern environmental movement, documented environmental impacts that correlated with DDT usage in the United States in “Silent Spring” in 1962, her work led to the eventual ban on the chemical’s use in agriculture.
No matter the time, those who stand to profit from the spread of technology will claim that their invention is an inevitability. Right now, AI CEOs have been so effective in their mythologizing that they have engineered the U.S. economy such that half of growth depends on a handful of tech-bros’ quasi-cultic, matrix-derived predictions. Privately, we all know that AI is an effective tool for some problems and a mediocre replacement for human intellect and effort more generally, but investors are so afraid of being left behind that they’re willing to gamble the entire economy on the marketing promises of a handful of billionaires. We should remember what the serial entrepreneur Margaret Heffernan said: “Anyone claiming to know the future is just trying to own it.”
There are careers built on the myth of technological inevitability that will make many of Amherst’s already wealthy students even wealthier. I’m sure that these industries wouldn’t mind if their applicants were trained on the statistical mean of billions of data points. Greed, as it turns out, is a product of mediocrity. I’m also sure that these new and exciting tools Amherst has introduced into our lives will make it easier for students to get through the day, perhaps saving students time to sleep or get wasted or curate their LinkedIn profile. But efficiency and luxury are simply unacceptable standards for any college, much less a purportedly liberal arts college like Amherst. We should not aspire to become the median of all possible data points, but strive for something — anything — that can make some sort of difference in a world that is crumbling around us.
Bigger universities than Amherst College have integrated AI into their curricula in order to court funding from Google and OpenAI. Amherst is fortunate enough to have billions of dollars in its endowment, but I can’t help but imagine that someone in the administration feels it would be fiscally irresponsible to write policy that’s openly hostile to AI companies’ economic interests. The Board of Trustees already ignored two-thirds of the faculty when it voted not to divest from Israeli weapons development firms. Even if students and faculty were to rally against it, the administration could accept a grant from an AI company in a quid pro quo exchange for the creation of a major or series of courses designed to prepare Amherst students for lucrative careers in AI development, a practice which has already begun at community colleges nationwide. Google recently dedicated a billion dollars to “help colleges teach and train students about artificial intelligence,” and there’s no reason to think they wouldn’t want to invite as elite a school as Amherst into their program.
It’s not a mystery why students turn to AI. Life is hard and it’s natural to try to ease our burdens when we can. But in college we choose to take the hard path. We choose to provisionally accept difficulty into our lives on the premise that we’re becoming stronger by doing so. Students will always try to find the shortest path between two points, but the administration cannot respond to this natural urge by incentivizing laziness. That’s simply not their job. To the question of what, then, we should expect of our college administration in a time of fascist takeover and AI worship, perhaps Joseph Weizenbaum’s 1976 book, “Computer Power and Human Reason,” is relevant when he writes,
“I recently heard an officer of a great university publicly defend an important policy decision he had made, one that many of the university’s students and faculty opposed on moral grounds, with the words: ‘We could have taken a moral stand, but what good would that have done?’ But the good of a moral act inheres in the act itself. That is why an act can itself ennoble or corrupt the person who performs it. The victory of instrumental reason in our time has brought about the virtual disappearance of this insight and thus perforce the delegitimation of the very idea of nobility.”
We don’t hear college administrators speak in anything but the soothing non-words of corporate governance, and it’s worth imagining a world in which they did. What if Dean Tissi-Gassoway justified policy changes to the Amherst community with reference to her convictions about the purpose of education? What if Provost Umphrey were under pressure to respond to the most recent article in The Atlantic that called Amherst a “safe space” in a way that took into account William Nelson Cromwell Professor of Jurisprudence and Political Science Austin Sarat’s very public rebuke of that article’s thesis? These administrators are in a position to drive the school in a certain direction, and while it’s true that they must mediate between many different interests in making their decisions, the only direction we seem to be going is down the path of least resistance. Rather than actors at the top making decisions for the community, it feels as if the school is following a script shared by all similar institutions.
Administrators issue statements only as long as other schools are willing to do the same; they meet students’ and faculty’s political demands only to the extent that other schools meet those demands. We’re governed by technocrats and middle managers, mediators between legal, financial, and public relations interests who are forced to react to the world rather than lead.
In his final speech before his ouster from the Amherst College presidency, Alexander Meiklejohn defied the trustees and Vice President Calvin Coolidge when he urged a crowd of hysterically cheering students to “abolish the trustees.” Stretched across a table amidst a stifling gymnasium heat, he railed against the administration, pointing out the hypocrisy of an institution that claims to care about democracy, but specializes in bolstering the power of the wealthy; one that claims to educate students to become future self-governors, but offers its students and faculty no chance to exercise their majority will. He proclaimed, “I say the greatest danger to the American college today is that it will be drawn into the common life, and that it will take the standards of common life as its own, rejoicing that it will be like other men rather than with the necessary difference which every scholar has with every other man who is not a scholar.” The current college administration will call Meiklejohn an ideological martyr and claim that they attempt to live up to his expectations, much like they do for the 2016 Amherst Uprising. But Meiklejohn never would’ve let AI anywhere near his academic temple; he would have resigned before letting his institution wed the spirit of its day, only to find itself lonely in the next.
According to the market economy of Feb. 2026, it makes sense to sponsor AI technology at Amherst. But institutions of higher learning cannot make policies based only on what makes sense. If Amherst is intent on doing so, I fear what will make sense for the administration when Immigration and Customs Enforcement comes to town, when President Donald Trump demands financial concessions in return for immunity from state censorship, when international students fear returning from their home countries, and when further judicial decisions render the Department of Diversity, Equity, and Inclusion useless. AI technology presents the college with a relatively low-stakes choice between morality and logistics. When fascism came to Germany, nearly every university chose the latter. There’s little reason to believe Amherst is any different.