The World Is About to Get So Weird (And Incredibly Exciting)
Contributing Writer Trang-Linh Nguyen ’29 reflects on uncertainty in the age of artificial intelligence, arguing that the coming technological upheaval demands participation from humanists and liberal arts students and that the future remains steerable if enough people choose to step in.
If I were to summarize my general sentiment towards the future in one word, it would be confused — confused, uncertain, and deeply dissatisfied with the many governing structures I was born into without ever consenting to. These structures operate at every scale: from the rules of a club on campus to the decisions of governments I never voted for, the Davos crowd and the donor networks that pre-select our choices before we even see a ballot. Most recently, that dissatisfaction takes shape in the rise of a specific techno-wealth class whose terrifying reality of “technofeudalism” I can’t seem to escape. Since I like seeing my friends on social media through Meta, enjoy the digital convenience of Google’s backup cloud, and seek out occasional informative blurbs from Claude, I am implicated in the very thing I resent — and that is part of what makes this so disorienting.
Never before have the promises of the 2000s — the triumph of liberalism Francis Fukuyama describes in his 1992 book, "The End of History and the Last Man," the progressive strides of social movements that felt like permanent gains — unraveled so visibly. The world I was raised to believe would be my adult life — the promise of buying a house, living in a walkable city, holding a stable 9-to-5 white-collar career in finance, consulting, law, or academia — has never felt so far-fetched. I am left disillusioned, with a low hum of resentment towards the many frameworks and implicit agreements that my parents, grandparents, and great-grandparents signed onto — agreements that turned out never to be true.
Why is artificial intelligence (AI) being allowed to cut through and upend all the old ways of doing things — the tenure committees and the newsrooms and the trading floors and the congressional offices? Is my generation just uniquely doomed? Should I pretend that nothing is changing and live under a regime of denial — carrying on with business as usual, indulging a normalcy bias that hopes AI will simply go away while I scroll, trying to scream the uncertainty away? Uncertainty over your own life is a deeply unsettling, uncomfortable sensation — possibly the most unsettling one.
I have since been slowly trying to find an answer — a coping framework for navigating this terrain of adulthood. I’ve turned towards seeking a career that attempts to understand the defining technology of our times, and I have thought deeply about how to prepare myself, my family, and the people I love before the downside of this AI revolution arrives. I hope to share why the world can be a radically better place — if more smart, capable, and thoughtful young people like us help steer the future of AI: a new set of rules and a new architecture of institutions.
Why I Think AI Is Something We’ve Never Seen Before
Many of you probably think that AI chatbots are stupid and that this is a hype cycle — I thought so too four months ago. Here’s why I’m afraid, as someone who did not want to be.
AI is not a closed, pre-programmed machine parroting your sentences back to you, nor a simple statistical engine autocompleting your texts. What we are building — what is being scaled right now with hundreds of billions of dollars flowing in from every sovereign wealth fund and venture firm on earth — is something that forms emergent capabilities its own creators did not anticipate and cannot explain. It ingests the summation of human knowledge through the internet and produces forms of problem-solving that exceed any individual’s capacity. And no one — not the engineers at OpenAI, not the researchers at DeepMind, not even the smartest alignment scientist at any university in the world — fully understands how. When I talk about intelligence here, I mean strictly the efficiency with which something gets from point A to point B — not moral wisdom, not emotional depth, not the kind of nuanced human compass that makes you kind. Only strict cognitive throughput: doing math, writing code, diagnosing disease, and optimizing systems.
And this capacity is growing at a rate that should genuinely unsettle you. METR, a nonprofit that benchmarks how long AI can work autonomously on complex tasks, has found that this capability doubles roughly every seven months and has done so for six years straight — with no sign of slowing. In 2019, an AI could handle a task that took a human one minute. By early 2026, frontier models can independently complete tasks that take a human expert fourteen and a half hours. If this trend continues — and nobody has a principled reason to think it won’t — by the end of this decade, we’re looking at AI agents that can carry out month-long projects without supervision.
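If you want to see how quickly that compounds, here is a back-of-the-envelope sketch. The starting point, start date, and doubling period below are rounded assumptions of mine for illustration, not METR’s exact figures:

```python
# Rough projection of an exponential doubling trend in the length of
# tasks AI can complete autonomously.
# Assumptions (mine, not METR's methodology): 1 human-minute in
# mid-2019, doubling every 7 months.

def projected_task_minutes(months_since_2019: float,
                           start_minutes: float = 1.0,
                           doubling_months: float = 7.0) -> float:
    """Projected task length, in human-minutes, after the given months."""
    return start_minutes * 2 ** (months_since_2019 / doubling_months)

# Mid-2019 to early 2026 is roughly 79 months: about 2,500 minutes,
# i.e. tens of hours -- the same order of magnitude as the figure above.
print(projected_task_minutes(79) / 60)           # hours, early 2026
# Mid-2019 to the end of 2030 is roughly 138 months: on this naive
# extrapolation, the projected task length runs well past a month.
print(projected_task_minutes(138) / (60 * 24))   # days, end of decade
```

The point is not the exact numbers — any rounding assumption shifts them — but that a constant doubling period turns minutes into months within a decade.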
Every major technology reshapes the world, but its implications are ultimately bounded by its domain — for nuclear power, energy abundance and annihilation; for biotech, miracle cures and engineered plagues; for social media, genuine human connection and panopticon-scale surveillance. AI is dual-use across every dimension simultaneously — it multiplies every upside and every catastrophe of technological advancement at once. It is discovering new drugs and enabling the design of novel toxins that slip through biosecurity screening. It is tutoring children and generating child abuse material. It is writing legal briefs and fabricating evidence. It is diagnosing cancer and defrauding your grandmother — an 82-year-old retired teacher lost her entire $200,000 in retirement savings to a deepfake of a doctor, money she’d saved for her autistic grandson’s future care. Voice cloning now needs only seconds of audio from a grandchild’s Instagram to produce a panicked, heartbreakingly convincing call. AI is unique in touching every domain of human life as both catastrophe and miracle at once.
And we already know we are bad at fighting technology. Social media is still harming us, and we can’t seem to control the algorithms — creations of machines, not of human editorial judgment. We, as a collective, failed spectacularly in our first fight with technology married to brutal capitalist market incentives — the kind that keep even the most good-faith employee, paid just well enough, running these corporations at the expense of millions of others. To borrow Tristan Harris’s pitch from the Center for Humane Technology: We were outmatched. And the incentives driving AI are worse — a race to the bottom in which the richest people on earth are pouring every resource they have into decisive control of the most consequential technology in human history. They clearly see something deeply transformative in it.
You Will Not Live a Normal Life
Even if you don’t buy any of that — even if you think I’m catastrophizing — consider this: every major revolution has brought roughly 30 to 50 years of painful diffusion that upended entire ways of life, caused unprecedented inequality, and triggered the social upheavals necessary to establish a new normal. And the “normal” we live in now? That was itself the product of pushback — of fighting, of revolutions and advocacy by thoughtful people who cared about risks they saw as immediate.
We survived nuclear war by a razor-thin margin — think of Vasili Arkhipov on a Soviet submarine refusing to authorize a launch during the Cuban Missile Crisis, or Stanislav Petrov choosing not to report what his instruments said was an incoming American strike. A global hunger catastrophe was averted by Norman Borlaug, a young agronomist who bet his career on the conviction that mass starvation was solvable. Think of the millions of young soldiers at the front in 1914, the crushing wealth inequalities of the Gilded Age, and the hippies’ disdain toward governments that had brought the world within hours of ending during the Cuban Missile Crisis.
It’s not like this is our first time dealing with something terrifying. But here’s what matters for our generation, specifically: Goldman Sachs found that employment among 22-to-25-year-olds in AI-exposed occupations fell 16% between late 2022 and mid-2025. Entry-level job postings have dropped 35% since January 2023. And displacement doesn’t just mean temporary unemployment — it leaves what Goldman calls yearslong “scarring”: depressed income, delayed homeownership, a lower probability of marriage. This is the economy we are graduating into. The real world does not run on tidy macroeconomic models — and with AI, even the basic theories we use to understand the complex interactions of economies may be becoming obsolete. But would we not want the transition to be as smooth as possible, with governance structures arriving in time to help the people most harmed by the disruption? Unfortunately, that doesn’t happen by itself. It happens because people fight for it.
So What You Can Do
All that is to say: the world will never be the same. The technology unfolding right now is going to upend everything — the things we love, the traditions of intellectual life and the humane ways of being you care about, and the things we hate, the power structures and the exploitation and the bureaucratic rot. Both are subject to radical change. But you have to step in and shape which way it goes, because right now almost no one caring deeply about AI has the potential we do. Right now, it is under the control of a haughty, quick-minded, privileged ruling class with an ideological mission to shape an easily manipulable world. If we don’t step in, the things we hate will only become more entrenched.
To our relief, three genuinely good things are happening. First: AI governance is no longer a private, technocratic matter — it is a political one now, requiring public consciousness, and the pro-tech lobbying bloc is not winning by the margin they expected. This is the moment to engage with our local politicians and think strategically. Second: This revolution needs every discipline, every thinker possible. Third: We are still in the nascent stage. We are drafting policies, inventing conceptual vocabulary. Liberal arts students, we are built for this.
The questions that we need to answer are not technical questions. They are ours. Here are some of the questions I would love to see more effort devoted to — but once again, I am just one person, and you are most definitely smarter than me. The more the merrier!
You do not need to be a technical prodigy to work in AI. You do not need to code or understand linear algebra. You can be someone deeply trained in a discipline you already love — political science, philosophy, literature, economics, history, sociology, art, truly anything — and reason from historical precedent and analogy about how AI will reshape your field of interest. We can all contribute meaningfully to steering AI towards a good future, for ourselves and for many generations to come.
If you study political science: How do we design regulatory institutions that can keep pace with a technology that iterates faster than any legislative session? What does democratic accountability look like when the systems that will soon be making consequential decisions are still opaque even to their creators?
If you study philosophy: What does it mean to hold a machine morally responsible — or to hold the people who deployed it responsible? Where does the line fall between a tool and an agent?
If you study economics: How do we model labor markets when the basic priors about human comparative advantage are shifting beneath us? What does a just transition look like — not in theory, but in policy?
If you study history: What can the Gilded Age, the Great Offshoring, the post-enclosure upheavals teach us about what happens when technological change outpaces institutional response?
If you study literature or the arts: What is the value of human creative expression when a machine can produce technically competent prose, music, and images — and who gets to decide?
If you study sociology or anthropology: How do communities absorb algorithmic decision-making, and whose ways of knowing get erased when “optimization” becomes the default frame?
If you study journalism: How do we build newsrooms that can explain a frontier model evaluation to readers hundreds of miles away, with the rigor and speed this moment demands?
If you study law: How do we govern dual-use biological design tools when screening is already failing — and how do we write liability frameworks for harms caused by systems no one fully understands?
And beyond academia: What do we teach children about AI when the things we’re training them to do can be done by a machine? How do we prevent the “liar’s dividend” — the phenomenon where deepfakes become so pervasive that anyone can dismiss authentic evidence as fabricated — from hollowing out what remains of public trust?
If you’ve read this far and you’re thinking, “what do I actually do tomorrow?” — here are a few concrete things:
Start learning. Take BlueDot Impact’s AI Safety Fundamentals course — it’s free, it’s designed for people without technical backgrounds, and it will give you the conceptual vocabulary to hold your own in any conversation about AI risk. Read the readings. Do the exercises. It is genuinely the single best on-ramp that exists right now.
Start talking. Use Claude, use ChatGPT, use whatever model you want — not to write your papers, but to actually probe it. Ask hard questions. Try to break it. Develop an intuition for what these systems can and can’t do, because that intuition is what will make you useful in any policy, journalism, or governance role down the line. Talk to your professors about how AI is changing their field — most of them are thinking about it and none of them have good answers yet.
Reach out. If any of this resonated — if you have ideas, or questions, or you just want to talk through what working on AI might look like for someone with your particular training — email me at [email protected]. I will sit down with you and give you a briefing on where things stand, what the open questions are, and where someone with your skills could actually make a difference.
Hate it or love it, every generation is confronted with its existential risk and a rapidly closing window to act. Every piece of normalcy you’ve ever taken for granted — the weekend, the minimum wage, the fact that your drinking water doesn’t poison you — exists because someone showed up and fought for it when the window was open. The window for AI is open right now. It will not stay open forever.