The AI Question Amherst Cannot Afford to Ignore
Contributing Writer Trang-Linh Nguyen ’29 investigates the implications of artificial intelligence for the future of liberal arts education, arguing that Amherst students cannot afford to treat AI with detached skepticism as the technology reshapes knowledge, labor, and power across society.
I write this because we Amherst students, specifically, are in a unique position that most of us have not fully reckoned with, and the clock is ticking in ways that should genuinely unsettle us. We cannot maintain a denialist, prohibitionist posture toward AI. It is not going anywhere, and it is not something we can afford to dismiss with the comfortable skepticism that elite liberal arts education sometimes breeds in us.
AI already poses a question that cuts to the heart of what we are doing here. If a machine exists that is vastly superior to us at verifiable tasks — at retrieving probabilistically accurate information about nearly everything, at synthesizing vast literature in seconds, and at generating competent prose, code, or analysis — then what, exactly, is the point of our education?
The Paradox We Must Hold
I want to start with a statement that might seem contradictory, but isn't. You should not use AI for your education here at Amherst. We remain one of the last strongholds of classical liberal arts education in America, a place where rigorous intellectual exercise still happens for its own sake. Amherst students and faculty hate AI so much that almost every syllabus carries a strict one-strike no-AI policy.
We have an entire institution, despite all its mishaps and flaws, that vigorously defends the value of an open, well-rounded classical education, irrespective of the “STEM-ization” waves and the pre-professionalism that have ravaged many other elite American colleges. I wholeheartedly want us to keep defending Amherst’s instinct to preserve genuine thinking.
And yet we cannot let this commitment to humanistic rigor become a refusal to engage with what AI means for everything else. These two things are not mutually exclusive — in fact, I think they require each other. The cultivation of genuine critical thinking here, in this protected space, should precisely equip us to be the rare group that engages seriously with a technology reshaping every domain of knowledge, every structure of power, every assumption about what human beings are for. We can refuse to let AI do our thinking and still understand that this technology has become a basic requirement of educated citizenship in our time.
What I observe at Amherst is something more like a studied disengagement — a kind of curated distance from the question of AI, as though engaging with it seriously would be somehow beneath us intellectually, a concession to the hype, something the tech people care about while we attend to more refined concerns. And I understand the instinct. Silicon Valley is genuinely alienating to many of us who came here precisely because we wanted something different, something more human-scaled, more rooted in tradition and careful thought.
But the stakes of this AI problem are too high and the need for humanistic voices too great for us to absent ourselves. The very training that makes us suspicious of technological solutionism is the training most needed in the rooms where AI is being decided. Our absence doesn't protect us. It just means those rooms are filled entirely with people who don't share our commitments.
What Kinds of Problems We Should Be Addressing
AI is almost certainly the most transformative, fastest-moving, and most consequential technology of our time. None of our institutions are prepared for the speed and significance of its development. AI’s existential risks include extreme power concentration, mass unemployment, unprecedented socioeconomic inequality, AI-powered bio-catastrophes, and more. And then there are the more fundamental questions — whether these systems are aligned with humane values at all, whether something approaching consciousness is developing inside them.
Even if you don't buy the doomsday framing, think about the sci-fi-like developments that are already our everyday reality. Five years ago, nobody worried about AI psychosis, deepfakes, or bots in online comment sections — when dead internet theory was still just a conspiracy theory. Only a year ago, software engineering and investment banking were still the careers we thought were future-proof. What is happening right now is so unprecedented that the future of AI is genuinely up for grabs in a way it has never been before.
I want to tell you two stories.
Last week, the Trump administration designated Anthropic (the company behind Claude) — one of the few AI labs genuinely committed to safety research — a supply chain risk to national security. This label has historically been reserved for foreign adversaries. What prompted it? Anthropic refused to let the Pentagon strip its safety guardrails. Specifically, the company insisted on two restrictions in its military contract: no use of its technology for fully autonomous lethal weapons, and no use for mass domestic surveillance of American citizens. Hours later, OpenAI announced it had reached a deal with the Pentagon without such restrictions.
That is one story. Here is another. The protein folding problem — predicting the three-dimensional structure of a protein from its amino acid sequence — had been one of biology's grand challenges for fifty years. In 2020, DeepMind's AlphaFold effectively solved it, and by 2022 Google DeepMind had released free predicted structures for virtually all 200 million proteins that researchers have identified. That database has since been used by more than two million scientists in 190 countries, earned its creators the 2024 Nobel Prize in Chemistry, and is accelerating drug discovery for a wide range of neglected diseases — enabling the design of entirely novel proteins, fundamentally transforming biological research.
If you're serious about understanding AI, you must hold both realities simultaneously. The same underlying technology that produced AlphaFold's extraordinary contribution to human knowledge can be deployed for surveillance infrastructure that helped dismantle Hong Kong's protest networks, for Claude's involvement in the Maduro capture, and for North Korea's AI-accelerated phishing operations that funded its nuclear weapons program. The technology is neither the problem nor the solution. The governance — the actual humans in the loop demanding legislation, shaping how these potently dual-use systems are trained and used — is everything.
Why Us, Why Now
If I could have paused all AI development altogether, I would have. I truly believe the stability of our democratic institutions, of our collective moral infrastructure, is simply not ready for something this unthinkably powerful and irreversible. But I can't. Nobody can. And so I'll leave you with the only position that makes sense to me: "If you can't beat them, join them."
If bad actors want to capture all of AI's benefits and wield the technology to maximize quarterly earnings, advance hidden agendas, and consolidate power, we might have to enter the fight on those terms. The techno-tyrants, Silicon Valley figures who have spent a decade consolidating power over information infrastructure while growing openly contemptuous of democratic constraints, are building the most powerful AI systems humanity has ever produced. They are not waiting for the rest of us to catch up.
Who is supposed to counter them? Who is asking what this technology means for being human? Who is thinking about what it does to how we learn, how we relate to each other, how we understand truth, how we grieve, how we make meaning? Who is thinking about what it means for every field of knowledge beyond computer science? Who is thinking about societal diffusion — how this reaches classrooms in rural Asia, hospitals in sub-Saharan Africa, courtrooms in middle America — and whether that diffusion makes the world better or just more efficient at being unjust?
These are unprecedented human problems, not engineering ones — the kind that require people trained exactly as we are being trained. We touched grass. We read the books. We sat in seminars arguing about things that don't have answers, and we learned something desperately relevant: how to think carefully about problems that are new, high-stakes, uncertain, and deeply contested. Critical thinking — the practice of interrogating assumptions, weighing evidence, seeing through bad arguments no matter how confidently they're delivered — is the scarcest resource in the rooms where AI's trajectory is being decided. And it's the thing Amherst has spent four years drilling into us. That training — the thing that sometimes feels impractical, the thing your relatives ask about at Thanksgiving, the thing that doesn't translate neatly onto a resume — turns out to be exactly what this moment demands.

We ought to step up. The government will not do this for us. The technologists will not do this for themselves. They are too deep inside the logic of what they're building to see it from the outside. We need to be in the loop to keep them in check, to insist that the people designing these systems are not the only people who get to decide what they're for.
What I'm asking is that you bring the capacities you're developing here — the ability to read carefully, to reason about hard things under uncertainty, to see historical patterns — to bear on the defining challenge of our time: the AI governance problem.
Confront AI head-on. Research its intersection with your field of choice. I'm not going to stand here and tell you what the hard questions are for your discipline — I'm not deep enough in your work to know, and I wouldn't presume to. But I trust that if you are at Amherst, if you are as sharp as I know you are, and you turn your attention seriously toward this, you will find something worthy of your intellect at the crossing of AI and whatever you love. Your major, no matter how niche or distant from tech, has something to do with the most transformative general-purpose technology of our time: AI. Find out what. Ask new, hard questions in your domain. Write about it. Push your professors. Propose a thesis on it. Bring it into the seminar. Bring it into the real world.
Use the particular privilege of this moment. You are young, you are here, and you are not yet bogged down by mortgages and inertia, or by an academic career too invested in the old paradigm to question it. And in the best case, get involved directly: work in AI safety or AI governance.
And I'm asking something harder: that you have the moral courage to resist the pre-molded paths of pre-medicine, pre-corporate, and pre-academia. Unlike previous automation waves, AI exposure is greatest in high-paying roles involving information processing and analysis. The pipeline that once ran from elite college to knowledge-economy career is narrowing in ways most of us have not yet registered. The conventional path is no longer the safe path. If the ground is shifting anyway, you might as well pursue the thing that actually matters to you. And, equally important, we have to do more art. Draw more, sing more. Write more poetry, prose, and music. Reaffirm our humanity through creative work that cannot be reduced to optimization, that insists on meaning and beauty and the irreducible texture of human experience. Keep doing what liberal arts students do!
David Foster Wallace, class of 1985, spent a career critically examining the defining technologies and cultural forces of his time. In "E Unibus Pluram," his landmark 1993 essay, he took a ubiquitous technology that everyone was either celebrating or ignoring and asked what it was doing to the interior life of the people using it. He argued that television had colonized irony itself — that the medium had become so adept at preempting critique, at winking at its own absurdity, that genuine sincerity became almost impossible.
AI is our “television” — albeit one with far more profound implications. It’s the technology reshaping the interior life of everyone who uses it and everyone affected by its deployment, which is to say, everyone. And it demands the same quality of attention Wallace brought to bear: not the dismissive skepticism of the Luddite, not the breathless enthusiasm of the tech evangelist, but the careful, agonized, deeply human attention of someone who understands that a technology this powerful cannot be left to the technologists alone.
Who is doing that work for AI?
I think it should be us.
Nguyen co-leads AI Safety at Amherst, which is opening applications for its Spring 2026 Intro to AI Safety Fellowship, with reading groups in technical AI safety and AI policy and governance dedicated to bringing more interdisciplinary thinkers at Amherst into this conversation.