AI Overshoot

AI is about to disrupt our definitions of work, identity, and economic agency, but many of us are quietly assuming we’ll figure it all out later.

I recently started reading Overshoot: How the World Surrendered to Climate Breakdown by Wim Carton and Andreas Malm. I’m not finished yet, but the book examines how we’ve collectively accepted rising global temperatures and the reality of climate change, and how we seem to be betting on future technologies to save us down the line. The idea is that someday we’ll invent systems that will reverse the damage. A hundred years from now, things might be “back to normal,” though in the meantime, it’s going to get really uncomfortable.

Halfway through the book, I realized I could apply the same lens to something entirely different: the rapid evolution and proliferation of artificial intelligence. This post is a thought experiment on the parallels between how we’ve responded to climate change and how we’re responding to AI.


Surrender and Overshoot

First, let’s define overshoot in the climate context. The book explains that we’ve set goals to keep warming within 1.5–2°C of pre-industrial levels by the end of the century. Yet at the same time, we allow ourselves to exceed those targets temporarily, on the assumption that we’ll fix things later. In other words, we’re comfortable with continuing business as usual, drilling for more oil and emitting more CO2, because we believe that future technologies (like direct air capture or fusion energy) will eventually solve everything.

If you’re a climate-change denier, this post may not be your cup of tea. Climate change is real, and so are its consequences: extreme weather events, resource challenges, and climate-driven migration, to name a few. But a key part of the book’s argument is that we’ve adopted a kind of collective shrug. We accept that yes, we’re overshooting, and we’ll address it “later” once new policies or technologies emerge. That’s the essence of overshoot.


Drawing Parallels: Climate and AI

Now, how does this relate to AI? At a breakneck pace, we’re pouring enormous resources into the “virtual oil fields”: GPUs in data centers, advanced models, and a march toward artificial general intelligence. On a personal level, as a designer, the advent of AI has changed my entire outlook on my career. Part of me wonders whether it’s wise for anyone to pursue design now, given that AI might automate so much of the work.

This sense of looming obsolescence has also made me hyper-aware of how many daily activities can be automated—and how many jobs might be, in essence, “bullshit jobs.” If AI can handle those tasks efficiently, what does that mean for our sense of purpose? Economically, there’s an analogy to climate overshoot: AI is about to disrupt our definitions of work, identity, and economic agency, but many of us are quietly assuming we’ll figure it all out later.

We see the same surrender/overshoot dynamic at play: we invest heavily in AI, aware of its profound impact, but downplay those impacts by assuming that tomorrow’s technologies or policies will fix the resulting chaos. We’re like climate optimists who say, “Well, direct air capture will handle it eventually.” Except now the line is, “We’ll regulate AI once it’s big enough or causing enough trouble.” Meanwhile, short-term gains look fantastic, and there’s always that reassuring idea that future innovations will solve tomorrow’s problems.


The Personal Responsibility Trap

Another similarity is the focus on individual responsibility. In climate conversations, you’re told to reduce your personal carbon footprint, recycle, bike to work, and so on, rather than to focus on bigger systemic issues. In AI, that translates to, “Learn the new tools or get left behind.” While personal accountability matters, it can distract us from deeper structural questions: what happens when AI truly reshapes entire industries?

Climate change has led to mass migration; AI could lead to a different kind of migration—one of purpose. People who define themselves through their work may find themselves needing to rediscover meaning if their job is automated. Could that kind of existential crisis be as destabilizing as a 2–3°C rise in global temperature? Possibly. Yet we rarely address those bigger questions because the short-term gains—from AI breakthroughs to corporate profits—are too enticing.


Overshoot in Crypto

Fascinatingly, we see a similar dynamic in the crypto world. I’ve followed debates about the Ethereum Foundation and whether it should double down on things like DeFi, memecoins, and so on. It’s easy to be lured by short-term wealth creation and to turbocharge that evolution. But at the same time, we risk ignoring big-picture concerns: what happens to social systems, or to everyday people, if everything becomes hyper-financialized? We assume that by the time those problems become urgent, future solutions or policies will emerge. And if they don’t, well…maybe I’ll be wealthy enough to shield myself from the fallout. It’s a lie we sometimes tell ourselves, and one we’re not always proud of.


Conclusion

In climate, in AI, and even in crypto, the same pattern emerges: short-term gains and an almost religious faith in future innovation. We tell ourselves that by “overshooting” today, we’ll have the technology or the policies tomorrow to set things right. It’s an alluring promise—because it absolves us from making uncomfortable sacrifices right now. But as both climate scientists and AI ethicists remind us, overshoot can lead to a lot of pain along the way.

It’s crucial to ask difficult questions while we still have time: How do we want AI to shape our sense of purpose? How do we balance crypto’s innovations with the risks they introduce? How do we confront climate change without simply assuming tomorrow’s tech will magically solve it? The answers aren’t obvious, but the more we push them off, the higher the bill will be when it finally comes due.


Co-written, obviously, with AI (ChatGPT o1-mini)
