College Considers New AI Policy Guidelines

With the release of new AI tools to students on Jan. 21, Provost and Dean of the Faculty Martha Umphrey discussed the college’s plans to address the role of the technology in students’ academic lives and in administrative systems.

College plans to establish norms regarding AI use while still allowing professors to maintain autonomy. Graphic courtesy of Nina Aagaard ’26.

With the release of several Artificial Intelligence (AI) tools to students, faculty, and staff on Jan. 21, new questions emerged about the future of AI policies at the college and administrators’ plan to approach the technology’s growing role in academic life.

Provost and Dean of the Faculty Martha Umphrey noted that discussions about AI have been ongoing at the college for nearly two years. This year, in response to the rapidly changing landscape of AI, she said the college has placed particular emphasis on developing updated approaches and guidelines.

These efforts led to the creation of two groups tasked with examining different aspects of AI’s impact on the college.

The first is an “AI Working Group,” composed of faculty and staff, that considers how AI may affect various areas across the institution. This group examines not only the use of AI in academic settings but also its potential role in broader administrative systems, such as the admissions process. It also evaluates whether the college should introduce new AI platforms for faculty and students, as well as new training initiatives.

Because AI continues to evolve rapidly, the working group is designed to remain flexible and is “meant to be a group that can take up new questions as they arise and then send them off to the appropriate places to figure out how to handle them,” according to Umphrey. 

This structure allows the group to identify emerging issues related to AI and direct them to the offices or committees best positioned to address them.

While the AI Working Group focuses broadly on questions arising across the institution, the second body — the Committee on Student Learning — approaches the issue from a decision-making, policy-oriented perspective. Composed of faculty and staff, the committee primarily focuses on academic policy.

One of the main challenges the committee is currently addressing is the variation in professors’ approaches to students’ use of AI. Some instructors permit the use of AI for certain assignments, while others prohibit it entirely. Many fall somewhere in between.

According to Umphrey, this variation has created “a lot of anxiety” and uncertainty among students navigating multiple courses with different expectations. However, rather than imposing a single AI policy across every course, Umphrey noted that the college is working to establish clearer norms while still allowing professors to maintain autonomy in their classrooms.

“We're trying to come up with mechanisms to create some norms and guidelines to help faculty set policy without dictating what policy any particular faculty member should have,” she said.

Currently, several ideas are under discussion regarding how to clarify expectations around the permitted use of AI for students as they enroll in courses. One proposal is to introduce AI-use descriptors in course listings, allowing students to better understand whether AI tools are permitted, restricted, or prohibited in a class. Another is to provide faculty with syllabus templates designed to help them clearly communicate their policies regarding AI use.

While these steps are intended to reduce confusion and prevent violations, the Committee on Student Learning is also continuing to discuss how the college should respond when students do commit violations of course policies related to AI. Umphrey said this question is complicated in part because detecting AI use is not always straightforward.

“We all know that it can sometimes be challenging to know if a student has used AI,” she said. “Sometimes you can tell, but sometimes you can't.” 

Existing AI detection tools, she added, are not reliable enough for the college to depend on them. Because of the complexity and uncertainty involved in identifying AI misuse, the Committee on Student Learning is working with Director of Community Standards Corey Michalos to explore responses that move beyond the traditional “adjudication process,” which Umphrey said can be “really challenging and hard on everybody.”

Instead, she said the college hopes to emphasize what she described as an “educative model,” encouraging the community to think through these issues in a more thoughtful way. Although the details of this approach are still being developed, Umphrey said the college expects to share clearer recommendations soon.

“By the end of the semester, we will have some clear guidelines, some adjusted policies, and a good way to communicate to faculty, staff, and students about what we think is appropriate,” she said.

For Umphrey, however, Amherst’s response to AI extends beyond administrative policy. “Part of it is [also] about [asking] ‘what can we support?’ ‘What directions do we want to go?’ and ‘how can I provide funding or other resources to help support the faculty and students who are doing really amazing work in this area?’” she said.

This perspective, she added, reflects Amherst’s identity as a liberal arts college. “We're lucky to be in a small college that can take on the question of AI without abandoning our commitment to the importance of human relationships in education,” Umphrey said. “Because we’re small enough — because students and faculty work closely together — we will always center that important educational bond.”

Several campus initiatives already underway reflect this approach. Umphrey pointed to the “AI in the Liberal Arts Initiative,” facilitated by Department Chair of Computer Science Lee Spector, Administrative Assistant Caitlin Kennedy Downey, and a team of Amherst students, which explores both the practical and ethical implications of AI.

Umphrey also highlighted work being done by the Center for Teaching and Learning and Academic Technology Services to develop AI training resources for students, as well as the Loeb Center, which is examining how AI is affecting the future of the job market.

According to Umphrey, this kind of thoughtful engagement with AI distinguishes Amherst from other institutions that focus primarily on regulatory policies. “That’s something that you don't see at a lot of schools that are more interested in platforms and policies and so forth,” she said.

In addition to these initiatives, the college is also exploring ways to develop new courses that will address the implications of AI more directly. Umphrey said she hopes to provide “seed money” to support faculty in designing classes that will help prepare Amherst students for an increasingly AI-driven world.

She envisions two categories of courses emerging from this effort: classes that “really take on AI intentionally and teach responsible use of AI as part of the course goals,” and courses that are explicitly “AI-free,” emphasizing “the kinds of skills that a liberal arts education has always tried to encourage students to develop — slowing down, reading deeply, developing your own writerly voice, and so forth.”

Both approaches, Umphrey said, are essential for preparing students to navigate a world in which AI will play an increasingly central role.

Ultimately, she emphasized that the administration's goal is not simply to regulate AI but to help students engage with AI thoughtfully and responsibly so that they can develop “the qualities of mind that will help [them] to be strategic, to be discerning, to exercise good judgment, and to exercise ethical judgment out in the work world.”

“This is really what employers want,” Umphrey said. “They want students who think deeply about AI and not just use it blindly.”