College Grapples with Implications of ChatGPT
“I’m probably an outlier on this particular question, but I think they’re going to get more and more capable. And what they cannot do may shrink. Maybe to nothing.” Eyes alight, Lee Spector, a professor of computer science and founder of the college’s Artificial Intelligence in the Liberal Arts (AILA) initiative, spoke before a rapt audience in Frost Library’s Center for Humanistic Inquiry.
The panel event at which Spector spoke, held on Feb. 20, centered on the implications of a popular new AI-powered chatbot, called ChatGPT, for the future of learning and teaching. ChatGPT has sparked widespread intrigue over its ability to instantly generate fluent and competent responses, in prose paragraphs, to almost any prompt a user inputs. Since its release in November 2022, it has drawn over 100 million users, who have used the AI to generate absurd poetry, entire computer programs, answers to nearly any question you can ask — and far more.
Many commentators have cast the AI’s implications for academic writing in existential terms, with two different Atlantic articles declaring the software’s arrival “The End of High-School English” and the “death” of the college essay. Strong claims like these have been met with equally strong objections, fueling a wildfire debate over what the emergence of generative AI means for academic institutions, like Amherst.
While the February panel event focused on how AI could alter the future of academics, The Student’s conversations with Amherst faculty and students revealed broader, often philosophical, excitement and concern extending far beyond the walls of the ivory tower. As Spector, with a slight smirk on his face, put it in an interview with The Student, “A lot of the hand-wringing … about how it can be used to cheat seems pretty much completely beside the point.”
Many community members, like Spector, touched on plagiarism and honor-code policies, but were more eager to consider how generative AI may reshape society at large — by making it easier to spread misinformation, by performing jobs once held by humans, and even by challenging our understanding of what it means to have a mind.
Artificial Academics
Many professors to whom The Student spoke suggested that the panic over ChatGPT as a cheating tool was unfounded, or at the very least misguided.
Assistant Professor of Computer Science Matteo Riondato said that he had already encountered a case of a student using ChatGPT to write code for an assignment they submitted. As a result, this semester, he decided to add a line about ChatGPT to his syllabus “to just make it explicit that this is not what we would like [students] to do.”
Nevertheless, he maintained that he was not too worried about students using ChatGPT to cheat, noting that comparable methods of cheating had already existed. “For all I know, my students may be outsourcing their code-writing to someone else,” he said. “And is that really different?”
“It doesn’t introduce any new way of cheating that didn’t exist before,” he added. “Therefore, if you’re really concerned about cheating, you should have already been concerned.”
Riondato was able to catch the ChatGPT cheater because the code the AI wrote was “far more complex” than anything discussed in the course. “Even cheating requires brains,” he remarked.
While the AI can offer cogent responses on just about any topic, much of the concern about ChatGPT’s potential as a cheating tool has been concentrated in the humanities.
English Department Chair Christopher Grobe, whose research probes the nexus of technology and performance, explained that this is because professors “see ChatGPT producing a fluent paragraph of prose, and they [become] so overwhelmed by the fact that this model can do one thing that they struggle to teach other people to do.”
Grobe, who sat alongside Spector at the panel in February, said that he has fielded many emails from educators who want to figure out what, exactly, the AI’s limitations are.
“There are some really key things that it can’t do,” he explained, noting that the current version of ChatGPT struggles to correctly use quotations and produce an interpretation of text — core skills emphasized in the humanities.
The reason for these deficits, Grobe said, lies in the way that generative AI models like ChatGPT actually work, which he said is commonly misunderstood. “[People] think of them as databases or fact engines when really they’re like word prediction,” he said. “So people misunderstand them as having some relation to the truth rather than the arrangement of words.”
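As a rough illustration of what Grobe means by “word prediction” (a deliberately tiny sketch of the general idea, not how ChatGPT is actually built), a program can count which word most often follows another in its training text and emit that word, with no regard for whether the result is true:

```python
# Toy next-word predictor: it counts which words follow which in a training
# text, then "predicts" the most frequent follower. Real large language
# models are neural networks trained on vastly more data, but the task is
# the same kind: predicting likely next words, not looking up facts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Tally how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat": frequent in the data, true or not
```

Scaled up enormously, and with far longer stretches of context, this prediction task is closer in spirit to what models like ChatGPT do than any database lookup.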
Riondato, who researches algorithms for knowledge discovery and machine learning, added that misconceptions about how ChatGPT works result, in part, from “gimmicks” of the user interface, like the fact that it responds to prompts in the first person and appears to “type” its answers. These features are intended to “capture user attention,” Riondato said.
When it came to the dangers of people taking what the AI generates at face value, Riondato and Grobe were in agreement. “This idea of sounding like someone who knows what they’re talking about, which, again, captures your attention, it makes you trust, as a user, what you’re reading,” Riondato said. “And therefore is very dangerous.”
Nevertheless, Grobe said he thought there were productive ways for students to use ChatGPT in their academic work. “What the current models are good at is generating a lot of variations on something,” Grobe said. “For some people, that could be a useful place to start when starting from a blank page feels scary for a writing task. I think what’s crucial is that it’s a starting point.”
The students with whom The Student spoke described a range of interactions with ChatGPT.
Claire Holding ’26 said that she had heard of students using ChatGPT to “lighten the workload” by summarizing dense articles or performing other rote tasks — in other words, to “help burnout,” rather than as a replacement for original work.
Spencer Michaels ’24 — a law, jurisprudence, and social thought major — has leveraged ChatGPT to automate some of his note-taking workflow. “Let’s say I’m reading Karl Marx, and he’s talking about the French Revolution,” Michaels explained, “I have a plug-in in my note-taking software that will write a summary about the [historical] event automatically.”
Multiple students echoed Grobe’s belief that ChatGPT can generate starting points for many different types of projects.
Ashley Bao ’26, one of the students who work for Spector’s AILA initiative, said she had used ChatGPT on occasion to generate Python code for her personal projects. “It’s pretty useful,” she said. “Sometimes I’ll use it to get ideas … for whatever I’m working on.”
Michaels also said that, when he sits down to write papers, ChatGPT helps him brainstorm and organize his thoughts from the comfort of his room (as opposed to the inconveniences of attending office hours). He’ll probe ChatGPT for good questions about a concept, or ask it to remind him of the connections between authors. Though he admitted, with a laugh, that it’s rare to get a smart response, “It’s just nice having something to bounce off of.”
Some students, however, like Hedley Lawrence-Apfelbaum ’26, expressed concern about students using the software to cheat. “I think the danger is now that, obviously, people just use [ChatGPT] for all their work, which I don’t think is effective, but that people will do it anyways,” he said. He suggested that the college’s administration would have to adapt to the reality that students would make use of AI, and regulate accordingly.
The students were in agreement, though, that anyone who offloaded their original thinking to ChatGPT would ultimately be hurting themselves. “I feel like [using it this way] will wind up biting you in the ass at some point,” Holding said.
Generating (In)stability
Beyond fears about plagiarism and the end of academic writing, ChatGPT’s emergence this winter has sparked energetic public discourse surrounding the technology’s possible effects on the job market. The prognosis centers largely on white-collar workers, whose labor often involves writing and other forms of content creation. Commentary ranges from the apocalyptic to the dismissive.
Alfeld, who teaches computer science at the college, maintained that the economic changes foreshadowed by ChatGPT are unprecedented. “What I find most worrisome is the uncertainty,” Alfeld said. “In the Industrial Revolution we went from, you know, 80 percent of people being farmers to four percent-ish. But in doing so, it’s not like we ever had 70 percent unemployment.”
Alfeld thinks that, with the speed at which AI is developing, and the tendency of culture and law to lag behind technology, we could enter a period in which certain jobs become obsolete virtually overnight. “If that happens, we’ll need severe societal change,” he said.
To this latter point, Spector expressed what is perhaps the “unreasonably optimistic” hope that if and when ChatGPT wreaks havoc on the white-collar labor market, it will mobilize more equitable economic policies. “Arguably, for over a century, people have tried to confront these questions and to say, ‘What should be the basis for material support for people?’” Spector said. “If the disruption is universalized, perhaps we’ll do a better job figuring out how these things work.”
Like Alfeld, Spector emphasized the “massive” disruptive potential of these technologies for the economy, and he reiterated that the pace of their development will soon make their current shortcomings irrelevant.
Beyond the ways ChatGPT threatens the future of our economy, there are also imminent hazards posed by its current vulnerability to bias and misinformation. For Riondato, these concerns are far more pressing than its potential for college-level cheating. For one, he explained, “it’s possible that search engines — companies like Google and Microsoft — will adopt it and incorporate it in their search engine interfaces.”
One problem with this, he said, is that generative AI like ChatGPT are currently susceptible to highlighting inaccurate information if it appears in the datasets on which they are trained. This will be compounded, Riondato added, if different AI-powered search engines begin “feeding off of” each other’s false information.
Riondato was also concerned that AI-powered search engines would reduce the diversity of information available. “My worry for something like ChatGPT is that it will limit you to what you [now] find on the first page,” he said. “And not only that, it will limit you to what the company running the model will decide that you should know. We benefit much more from an informed public that is exposed to a variety of points of view.”
In terms of how to move forward, Riondato emphasized the importance of learning how best to use ChatGPT and other generative AI. “Like any tool, [it] can be used well or badly, and for positive or negative things,” he said. “But I feel like it’s a kind of tool that we haven’t learned yet how to use.”
To this end, he pointed to the importance of developing regulations to mitigate the harmful effects that misuse of AI could have on society, comparing ChatGPT to a car. “There are very strong regulations about what a car is and what it’s supposed to do and how it’s supposed to react in some situations,” he said. For instance, he recommended considering regulations on “how [AI like ChatGPT] are supposed to answer some questions, or refuse to answer some questions, and how diverse the answer should be.”
Alfeld, though, pointed out that these flaws are by no means unique to computers. “It’s wrong sometimes; it makes very silly mistakes,” he said.
“[But] do you want to dig through all the silly things students have ever said?” Alfeld countered. “When you talk to ChatGPT, there’s this bizarre feeling [for] many, many people that there’s an intelligence on the other side.” With his characteristic irony, Alfeld suggested that our dismissive attitude toward the “stupidity” of ChatGPT is perhaps motivated by an anxious need to differentiate our own minds from computers.
Whether or not ChatGPT and AI technologies like it are equivalent to human minds, this “bizarre feeling” means that no conversation about regulation can escape these existential questions. Nina Aagaard ’26 noted that, as the technology becomes more ubiquitous in content creation, the college and society at large should make it a norm to cite ChatGPT when its words are being used. “It’s creating work that needs to be cited … as a source in the same way that a peer-reviewed article would be,” Aagaard said, even if it’s not “original” in the same way that individuals’ words are.
Some aspects of successful regulation go beyond merely imposing a rule. Alfeld, for instance, thinks it’s crucial to disincentivize students from using ChatGPT to bypass intellectual labor. “Right now, students can spend 20 hours on a paper and get an A-minus, or they can spend 20 minutes and get a B,” he said, emphasizing that it won’t be long before that gap closes even more.
Alfeld has found an argument that works to change that incentive structure for his intro CS students. Although ChatGPT is “reasonably good” at writing simple code, Alfeld noted that students who simply “copy-and-paste” code will end up with only basic skills that are increasingly undesirable in the job market. “If you do that [copy and paste], why would anyone ever hire you?” he asked. “The alternative is they give it to an AI [which will do it for free].”
Many others agree that regulations at the campus level should aim at cultivating thoughtful users of ChatGPT, rather than banning the technology outright. Much of this happens at the level of faculty-student relationships, as Alfeld described. AILA is hoping to create more venues for these exchanges on campus, according to Spector, who sponsors the initiative. “I think that it’s particularly important that people … who have deep knowledge in many different ways of viewing the world talk to each other to figure this out,” he said. “People should not feel like this is out of their control, and should stand up and talk to each other about what their concerns are, what their hopes are.”
The question of campus-wide regulations is intimately connected with the broader social implications the technology will have. For Spector, the most powerful thing Amherst can do to spearhead the behemoth process of regulating these technologies is to push back against the monomania of corporations and tech giants by setting a standard of nuanced, interdisciplinary discourse. “I have zero confidence that the narrowly technically educated people at the core of the big AI companies are going to do the right thing,” he stated. “That is not the way to a healthy future.”
Computing Answers
Every discussion about ChatGPT between The Student and campus community members was underlain by certain philosophical anxieties — about what it means for computer software to write fluently, and perhaps, what it means to be human at all.
This question hung in the air of the Red Room on the evening of April 13, where Alfeld spoke at an AILA-sponsored panel event about the future of AI in warfare. As his co-panelists, two prominent figures in the legal and political world of AI regulation, decried the brutality of AI’s military applications, Alfeld scanned the crowd full of his students. When the Q&A portion began, his poker face began to fall away.
Andy Arrigoni Perez ’24, a computer science major, approached the microphone with a question directed at Bonnie Docherty, a Human Rights Watch expert on autonomous weapons systems, or “killer robots.” He pushed back against her view that such systems should be banned, citing the possibility that AI technologies could make warfare more humane by eliminating decisions motivated by anger or fear.
Docherty resolutely shut him down, maintaining that any delegation of life-or-death decisions to AI is a degradation of human dignity. Right on cue, Alfeld interjected. “I’ve seen AI show more empathy than some people I know in real life,” he retorted.
In his office, where the doors are always open to the chatter of the CS lounge, Alfeld reiterated what these conversations expose about the “supreme arrogance of humanity.” “We, as a species, have always talked about what it means to be human … and we don’t have a universally agreed-upon definition,” he said. When people insist that AI lacks compassion, originality, or dignity, Alfeld thinks it’s something like a self-soothing mechanism.
“These computers are deterministic machines,” Alfeld said, “and so there are all sorts of questions at play if we say that they can be intelligent.”
According to Professor of Philosophy Joseph Moore, who is currently teaching a course on the philosophy of mind, the emergence of AI like ChatGPT has renewed long-standing debates in the field over what it means to have a mind.
“Speculation about the possibility and implications of artificial minds goes back at least into the 19th century, but reflecting on ChatGPT and other emerging AIs can make these issues relevant, and sharpen them,” he wrote in an email statement to The Student. “Not only do we confront the gold standard question of whether any artificial system can have ‘general intelligence,’ but these AIs press us to reflect on what we mean by this and other central notions bound up in our concept of mind.”
“And of course, philosophy can weigh in on more existential questions about what AIs — and the dreaded ‘singularity’ — might mean for the future of mankind,” Moore added. “People range from being cautiously optimistic to completely apocalyptic about these larger questions.”
For his part, Riondato pushed back on the philosophical excitement surrounding ChatGPT’s emergence.
“I do think that most of the hype is definitely misplaced. I mean, none of these AI models has shown any kind of really revolutionary component, in some sense. The development is following the same rate of progress that it has followed for the past 15 years,” Riondato said. “But it’s clear that it’s a better model than what we had before in terms of producing a sequence of words that sounds like language.”
He also noted that humans, whose “brains have evolutionarily developed to search for patterns and search for meaning,” are sure to “find meaning in those sequences of words [produced by ChatGPT],” even if the AI itself is just recreating trends in datasets.
Grobe, for his part, cautioned against being “seduced by the analogy between human intelligence and machine intelligence.”
At the college’s panel event on ChatGPT in February, Spector offered a more nuanced vision for how human and artificial intelligence may converge. He acknowledged the areas in which human intelligence is successful but models like ChatGPT are not — like “representing the world, and doing logical reasoning, and planning” — but he expressed doubt that these distinctions would exist for long.
“I would expect [these large language models] to be hybridized in the very near future,” Spector said, “so that we’re going to have systems that actually do model the world, that actually do reason and plan, on top of doing this ‘autocomplete’ function.”
In an interview with The Student, Spector expanded on his belief that it’s close-minded to neglect the parallels between human and machine intelligence — even setting aside any further advances in AI technology.
“I’ve, for a very long time, understood what happens in humans, in human brains and minds, to be a kind of computation,” he said. “But people often think that sort of denigrates the human mind, that it’s ‘merely’ computation. I would flip that and say it means computation is a lot cooler than we thought. Computation can fall in love. Computation can imagine other worlds. Computation can write mind-blowing poetry. Computation can feel pain.”