Thoughts on Theses: Amelia Cogan ’26
In this edition of Thoughts on Theses, Staff Writer Savita Jani ’28 talks to Amelia Cogan ’26 about her thesis on predictive algorithms in policing and psychiatry, exploring her writing process, findings, and her ultimate reflections on using anthropology to study emerging technologies.
Q: Huge congrats on turning your thesis in! How do you feel now that it’s over?
A: It hasn’t entirely set in, but I do feel a lot of relief. There’s a sense that a weight has been lifted, but I also feel very sad, because you’re sort of creating knowledge in a really exciting way, and for that to be over … can be hard.
Q: How long have you been working on this project?
A: I proposed it in April of my junior year. But the original project I proposed was very, very different from what the thesis became. I really only nailed things down … as to what this project would be in late September.
Q: What is your thesis about?
A: It’s a multi-sited ethnography of the algorithm in predictive policing and psychiatry. If you load historical crime data and records into an algorithm, it [can] tell you who is most likely to commit a crime, particularly in the case of the Chicago Police Department. They have this list called the Strategic Subject List (the SSL) that’s pretty infamous. The idea is that ‘if I know this crime is likely going to occur in this parking lot, then I can station my officers accordingly.’ The hope would be that you’d improve public safety. Neuroprediction operates on a similar logic of predicting dangerousness, but it does so in psychiatry. And to be clear, it hasn’t been piloted at the clinical level yet. But the idea is that if you load patient data and medical notes into an algorithm, it will be able to predict which patients in a population are most likely to be dangerous [to others]. The primary targets and test subjects of neuroprediction thus far have been individuals diagnosed with schizophrenia spectrum disorder who are in inpatient care [under the physicians]. That’s where these sorts of technologies have been tested.
Q: How did you end up on this topic?
A: My first project idea was [that] I really wanted to look at schizophrenia in the courtroom. I wanted to learn more about how people were diagnosed with schizophrenia [and] understand litigation, particularly when mental health is a question that is being dealt with.
There’s this organization in Holyoke called The Wildflower Alliance, and my initial idea was to do an ethnography of this alliance, but for a number of logistical reasons, it became clear that that timeline would not align with the Amherst thesis timeline, or the timeline of what that research, if done properly, would involve. I’ve spent a lot of time studying policing and the criminal legal system at Amherst, and separately, I’ve spent a lot of time studying the histories of madness and psychiatry. I had never brought them together.
[At] the beginning of September, [an] article came out by a historian of neuroscience at University of California, Los Angeles (UCLA), Danielle Carr. I follow her on Twitter, and she reposted this piece of neuropredictive research, and she was sort of criticizing it. I read the piece, and it reminded me of another piece I had read earlier that week, reported by The New York Times. It was called “The N.Y.P.D. Is Teaching America How to Track Everyone Every Day Forever.” So in this short period of time, I read those two pieces, and, you know, what brings them together is this idea of predicting dangerousness before it occurs. And then having this whole suite of interventions opened up to mitigate [dangerousness] was really intriguing to me.
I’ve done some science and technology studies at Amherst, and I just started imagining all these theorists [whom] I’ve read in conversation with what I was reading. And it felt like a really generative ground for a thesis. But it was a really big challenge because I was being faced with all this highly technical language that I was just not familiar with at all. And actually, there’s an Amherst College computer science professor who helped me, sat with me, and talked me through some of the technical jargon and these computing systems, so that was really helpful. But there was definitely a significant learning curve, especially when you’re in dialogue with all these experts for whom this is their life’s work. During a lot of the interviews, there would be moments where I was like, “Okay, well, what does that mean? … Can you break that down? Can you give me a dummy’s overview?”
Q: Wow, that's really interesting. So, just to be clear, does your thesis discuss predictive policing and neuroprediction?
A: Yeah, it follows the algorithm in both domains. The ethnography that I did was with a lot of producers, developers, and researchers on these technologies, so I just got to spend a lot of time talking to them and making sense of how they were thinking about their work.
Q: Can you talk more about what the research and interview process was like?
A: Neuroprediction [is] such a new field. The first neuropredictive research article was published in 2013, and the field is very small. All of the published work[s] on neuroprediction are in conversation with one another, and they frequently cite and reference one another. You very quickly get a sense of “Who’s Who” in the field, so I cold-emailed a bunch of these neuropredictive researchers, and a number of them responded.
[In terms of] predictive policing … PredPol [now named Geolitica] is one of the foremost predictive policing systems in the U.S., and one of their founders is a professor of anthropology, so I was able to get in contact with him. Then I spoke with a bunch of criminologists who study predictive policing, recidivism, and crime forecasting. I also spoke with some individuals working on developing new predictive policing technologies.
In the thesis I think there are 12 informants whom I quote and mention, and I spoke to each of those individuals two or three or, for some, four times. Our conversations would last anywhere from 60 minutes to 100 minutes. And then there were other people whom I interviewed and recorded per [Institutional Review Board] (IRB) guidelines but did not bring into the thesis. There were probably 22 or 23 different individuals I spoke to, and then 12 ended up being part of the analysis of the thesis.
Q: Very cool. For people who are not as familiar with the way anthropology works, can you talk about how an ethnography differs from other methodologies?
A: The traditional ethnographic method in anthropology is that you go and live with a community of people, and you study what happens around you, and you embed yourself in that community. The discipline of the anthropology of algorithms is very new, and its sort of principal thinker and scholar is a man named Nick Seaver, who works at Tufts University. He, along with a couple of others (though really mostly him), has developed this framework of the scavenging method of the multi-sited ethnography. He says that there’s a scavenging approach that is an opportunistic mode of inquiry that draws on multiple sites, sources, and forms of engagement, rather than privileging formal access to code or proprietary systems. He says that studying the algorithm in anthropology demands methodological flexibility, because the technologies are distributed across infrastructure in ways that are often arranged by hierarchy or partial visibility. So there’s a secrecy element the anthropologist has to accommodate. But the idea is that you follow the algorithm as it moves across people and places, occupying its own distinct place in a culture of technology.
Q: So the algorithm, even though it’s not a person, is your main subject?
A: Yes. And one argument that Seaver makes is that algorithms are human all the way down. Loaded into them are human variables [because] the people who design them are humans. [He argues that] you’re able to think of algorithms as cultural products in the same way that you might other forms of media.
Q: What is the biggest thing that you’ve learned from doing this thesis process?
A: I was shocked by how willing people were to [share things with me]. A lot of what I do in the third chapter is trace the fears and anxieties of these practitioners and developers about the future of these predictive tools. I was shocked by how forthcoming they were with me about their concerns and what keeps them up at night.
All of my research methods adhered to IRB protocols. So there’s subject anonymity for all my informants, everyone I spoke to. But even so, people were really willing to talk me through the often nebulous, tricky territory we’re in. And if you think about all these tech titans who work on predictive and generative technology, oftentimes it feels like they’ve totally bought in themselves, and the rhetoric feels very formulaic, and it feels like … there’s a bravado to it that leaves very little room for their own interrogation of the technology. It leaves little room for them to voice their concerns.
But the conversations I had with these practitioners and developers were very different from that. There were definitely moments where people were very resolute in their assessments and opinions about these tools in ways that, to me, were questionable, but they also were very willing to go with me into the territory of actually, like, “this is really scary to me,” and that was cool, and it surprised me.
Q: Let’s switch to talking about your process. What do you think were the most exciting and most challenging parts of this project for you?
A: [The] most exciting parts [were] definitely the conversations I got to have. [The most] challenging part [was that] part of my research involved fieldwork at a conference called EMO, which focuses on technology, social issues, and the ethical uses of artificial intelligence (AI). [While I was] at the conference, there was a lot I did not understand: technical language, jargon, and lingo that totally went over my head. And I’d have to go back to the hotel room that I was in and just study up on some of these basic computer science terms and systems. So that was definitely a challenge.
Q: That’s so cool. Did you attend the conference to gain content understanding, or was that also an ethnographic assessment of the people?
A: There was one element of it [where] I wanted to meet people and find more informants, but I was also interested in observing how a lot of these dynamics [among algorithm developers] actually play out. You have many technologists there who genuinely care about social impact and about how these tools can be used ethically, and who want them to be used that way. At the same time, they’re facing really steep financial incentives in terms of funding and backing. I was curious to see how all of that [was] negotiated and how they make sense of their work.
Q: Do you feel like ultimately these technologies are ethical creations? What’s your opinion on their use?
A: These sorts of predictive tools can be used in so many different ways, even within the domains of policing and psychiatry separately. I can just say that in all of my conversations, there wasn’t one tool I came across that, to me, served a purpose that was good, or whose purpose couldn’t have been served better by some other mechanism. I also make many arguments [in my thesis] that the tools’ aims are questionable.
Q: I did promise to switch to process, and then I took you back to content! I swear I have fun questions built in here somewhere. The truly non-content-related question is: what was your favorite place on campus to work during thesis writing?
A: It really depended on the time period. In the last sprint: A level, [at] the laptops that are sort of in their own little room. The left-most computer facing the desks on A level, or the sort of cubicle structures on the A level, was a spot. I also spent a lot of time in the Moore common room writing over J-term with one of my friends, [who lives in Moore and] was writing her thesis there, so I just sort of joined in on the party. And then a lot of it was written on Frost third, and my conclusion was exclusively written in my room. It was super late, and I needed to get it done, and I was in my room.
Q: What’s your favorite thesis study snack?
A: Ooh, number one, the Trader Joe’s dark chocolate bark with almonds and pretzel pieces. I think it’s the best Trader Joe’s chocolate product. And then I also really like this brand called ParmCrisps — they’re Parmesan crisp crackers, like baked Parmesan. And they have a really good Gouda Parmesan blend. That was a big hit.
Q: Do you have any advice for students considering writing a thesis, especially one in this field or department?
A: I think anyone who’s gearing up to write a thesis, or thinks they want to, should ask themselves, “Am I ready for this to go in a totally different direction than what I’ve anticipated?” And if the answer is yes, then I think you know you’re ready. I think you have to be really excited to be proven wrong, and to shift, and to accommodate what you’re finding, especially in a field like anthropology, where you’re working with human subjects. You don’t decide your subject. You listen to people and follow what they’re telling you, and then after that, you sort of pull in the theory, and you create the argument. So you really just have to be ready to be at the will of what you find.
Q: So obviously you’re an anthropology major. Do you have a second major?
A: Yes, I also major in [Law, Jurisprudence, and Social Thought] (LJST).
Q: How do you feel like that combination has shaped your thesis?
A: I couldn’t have written [my appendix on case law] without my LJST knowledge. I also think that my interest in the criminal legal system has been really built out by the LJST department. And it wouldn’t be a subject of academic focus for me if it weren’t for LJST.
Q: In addition to your two majors, what else are you involved in at Amherst?
A: There are a couple of different things over the years, but right now, I’m one of two editors-in-chief of the Amherst College Law Review. And I’ve also co-run the Jewish Discussion Group. My sophomore year of college, I worked with the Jews for Ceasefire group. Earlier in my college experience, I was in the poetry club.
Q: Did you go abroad, and if so, where?
A: I did! I did … the IHP Human Rights program. We started in Nepal, then we went to Jordan, and we finished in Chile. It was amazing. That was in the fall of my junior year.
Q: What is, in your mind, the biggest thing that you would want readers to take away from your thesis?
A: There’s a towering amount of writing and literature about how artificial intelligence is going to ruin everything, and I think some of this is the product of prudent investigation, and I document a lot of the really onerous harms that these tools produce. But a lot of what the literature and reporting about this harm elides is engagement with the people who are populating these fields on the inside, and who can’t really stop thinking about the consequences that will withstand and outlast their individual control. And I don’t think these people [the developers] are always right. And I don’t think they’re always honest with themselves, but I do think many of them are anxious to think out loud and talk to people who are open to hearing them.
One thing I note in my conclusion is that, over the course of my conversations with my informants, I got the sense that they often don’t feel heard, or that people on the outside don’t really have any interest in talking with them. They don’t feel heard by people who are generally anti-tech and care about what is most human. One thing I mention in the conclusion is that it’s a peculiar feature of this discourse around AI that, among the most passionate defenders of what is human, there doesn’t always manifest a treatment of the humans doing this building as a methodological priority. This is something that I’ve gathered based on my own reading of the literature and [in] conversations with people. I think even Nick Seaver, who is the [founder], or one of the foremost people, in the field of the anthropology of algorithms, says in this podcast episode that I listened to that developers can’t really be taken at their word, and we don’t need to believe them. One argument that I make is that actually, in order to understand these technologies and where they’re going, you have to listen to and engage with the people who are involved in their production and their deployment. I also make the case for why I think anthropology is the discipline best equipped to have those conversations.
Q: Is there anything else that you wanted to share?
A: One thing I would mention is that, without fail, every single person I spoke to — all of the technologists and researchers I spoke to over the course of the thesis process — said to me that it’s not a question of if these technologies are going to be built out in significant and life-changing ways for all of us, but a question of when. The first line of the thesis is a quote from one of my informants. He said, “This is going to come, whether you like it or not.” There’s this sense that we’re going to have to deal with these things, and what we need to do is think about how we’re going to deal with that.