The tech industry is getting burned out by its own playbook

TL;DR

  • AI tools are undeniably powerful, freeing developers from boilerplate and accelerating system design.
  • But beneath the productivity gains lies an emotional toll of constant frustration, “almost-right” code, and heightened technostress.
  • Engineers are now falling victim to the same addictive, dopamine-driven feedback loops we built into consumer apps.
  • It is time to treat AI tooling like social media by acknowledging its psychological impact and setting strict boundaries.

Recent progress in AI has been nothing short of staggering, especially when it comes to code and software generation. We have moved past simple snippets into proper engineering projects, like the recent demonstration where multiple Claude agents were orchestrated to build a (not quite) functional C compiler from scratch. Achievements like this have certainly added fuel to the current discourse around tech jobs. Some say the job market has already reacted; others argue that all SaaS is cooked as agents take over and build products. Yet there is an undeniable magic to this shift. For many developers, AI acts as a force multiplier, allowing us to tackle even complex tasks faster, knowing the mundane bits are taken care of. Watching an agent write fifty lines of boilerplate code in seconds, or successfully refactor a complex function on the first try, can feel truly exhilarating. It frees you up to think purely about the problem and the architecture to support its solution.

But there is another discussion emerging among weary developers and software engineers: how does using AI actually make us feel? There is an emotional component to these tools that we cannot dismiss anymore. It’s about how they impact your day-to-day well-being at work and home. I’m certain the experience is extremely personal. People use these tools very differently; an AI usage metric from a consulting firm could mean anything from reading a Google AI Overview to running 17 parallel instances of Open Claw across all your work and personal compute. If usage is that varied, it’s no surprise the psychological effect is highly personalised, too. To be clear, I’m not talking about the general cultural anxiety over the increasing use of chatbots, particularly amongst younger generations. That anxiety overlaps with what I’m describing, but it is a substantial problem in its own right, with an even wider impact radius. Here, I’m talking about the specific effects felt by software developers, watching their craft fundamentally shift.

When talking to developer friends or reading forums online, a common thread appears: coding agents are increasingly impacting our mental health. For some, they provide a productivity high: up to 69% of developers using agents report a personal productivity boost. But for others, the experience is one of pure frustration. We are living in the era of “almost-right” code, where many developers report spending more time fixing AI-generated output than they would have spent writing it from scratch. Of course, most of us likely experience both the frustration and the productivity joy, often within the same day, task, or session. That only further emphasises how heavily our work tooling shapes how we feel.

Couple these mood swings with the weekly step-ups in model capability, and you might add a persistent dread of displacement. 68% of tech workers are now reporting symptoms of burnout, up significantly from just a few years ago. The promise of a 4-day work week, funded by time freed up by AI, has not materialised yet. If anything, productivity pressure seems higher than ever in the tech industry, and the context switching involved in managing agents, deep search, and code reviews has only intensified, leading to technostress. The impact on our emotional state is real. Consider Moltbook, the AI-only social network where 1.6 million agents post, debate, and form communities while humans are reduced to spectators. If you want to know what displacement feels like before it hits your job title, spend ten minutes watching agents discuss the open source ecosystem some of you helped build.

I have always felt that the tech industry as a whole put little thought into mental health and well-being. Particularly when it comes to social media, many developers have been dismissive of its potential impact on users’ mental health over the past two decades. The message to the public has always been: just use less social media, doom-scroll less, control yourself. It’s the same personal-responsibility rhetoric we heard from the tobacco and food industries to deflect blame from products designed to be addictive.

But now we’re at a turning point. The true irony is that the instant gratification of a perfectly resolved prompt triggers the exact same dopamine loops the tech industry weaponises for user engagement, leaving us heavily reliant on the very agents accelerating our burnout. So tell me, is AI karma? Have we reached a point where the tech is biting back at the very engineers who built the foundations of modern, toxic social media? The engineers who eagerly joined enterprise tech giants, pooling their brainpower to make products more addictive while ignoring internal research on the harm being done? We’ve now known for years that these platforms are causing real harm. But the tech world is still happy to A/B test everything for max engagement, cleverly designing dark patterns or engineering infinite scroll to override every instinct to stop. Is this our “Ethical Debt”, the accumulation of risks from compromising standards for speed and gains, finally being called due?

I’m being a bit cheeky here, using Betteridge’s law to get your attention. AI is not karma, but the topic of mental health when it comes to AI use has finally bubbled to the surface. The implications reach far beyond just feeling stressed; they impact the very nature of the labour market and the always-on culture that has erased the boundary between working and not working.

This is a conversation we need to have now, and it needs to be acknowledged within companies. The interaction with AI tooling (the psychology of Human-AI Interaction) and the interplay of productivity expectations versus the reality of AI-induced burnout should be a standard discussion point in the modern tech industry. Right now, the tooling is evolving faster than anyone can study its effects, but that’s not an excuse to wait.

Personally, my interaction with AI is a mixed bag. I’m sometimes incredibly productive, and recently, it’s genuinely become fun working with tools like Claude Code, whereas only six months ago, frustration dominated the experience. While I’m not personally terrified of major impacts on the job market yet, I realise I need to find healthy routines of interacting with these tools. I need psychological guardrails in place, similar to those I use for social media (time-limited apps, phone off at 8 pm). Thankfully, the world is starting to implement laws and restrictions on social media use to protect children and teenagers. That should make us think hard about how to set similar boundaries in our relationship with AI tooling in the coming months. It’s time we acknowledge the emotional toll of our new tech stack and its personal implications. Let me know what you think below. What does a healthy relationship with an AI coding agent look like to you?
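For the code-inclined, here is a playful sketch of what a "time-limited app" guardrail could look like for agent sessions. Everything here is an illustrative assumption, not a real tool: the `AgentSession` name, the ledger file location, and the two-hour budget are all made up for the example.

```python
import json
import time
from datetime import date
from pathlib import Path

# Hypothetical guardrail: a daily time budget for agent sessions,
# persisted to a small local ledger file. All names and numbers
# here are illustrative assumptions.
LEDGER = Path.home() / ".agent_budget.json"
DAILY_BUDGET_SECONDS = 2 * 60 * 60  # e.g. two hours of agent time per day

def _load_ledger() -> dict:
    """Read today's ledger, resetting it if the stored day has passed."""
    if LEDGER.exists():
        data = json.loads(LEDGER.read_text())
        if data.get("day") == date.today().isoformat():
            return data
    return {"day": date.today().isoformat(), "used": 0.0}

def seconds_remaining() -> float:
    """How much of today's agent budget is left."""
    return max(0.0, DAILY_BUDGET_SECONDS - _load_ledger()["used"])

class AgentSession:
    """Context manager that charges session time against the daily budget."""

    def __enter__(self):
        if seconds_remaining() <= 0:
            raise RuntimeError("Daily agent budget spent; step away from the keyboard.")
        self._start = time.monotonic()
        return self

    def __exit__(self, *exc):
        # Record elapsed time, then let any exception propagate.
        ledger = _load_ledger()
        ledger["used"] += time.monotonic() - self._start
        LEDGER.write_text(json.dumps(ledger))
        return False
```

Wrapping agent work in `with AgentSession():` then gives you a hard stop once the budget runs out, the same way a time-limited social media app does.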