We clean out the garage every other month. It's one of those routine household things, nothing complicated. I'd pulled out the cars, started sorting through the items that have accumulated over time, and was working through the space when my wife went inside to the kitchen. About ten minutes later she came back out and stopped in the doorway. "Who are you talking to?"
I didn't have a good answer. Because I'd been standing there, alone, talking out loud to no one. "We need to get rid of these boxes. Then reorganize the toolbox. Then I want to change the oil on the lawnmower." Step by step. Out loud. Structured and explicit, like I was dictating a work order. She pointed out that this wasn't the first time she'd heard me doing it. We laughed it off.
But it stuck with me. Because I know exactly where that habit came from. My life has become "keep the spec updated." I wrote in January that building software with AI is addictive — I meant it as a compliment at the time, describing how AI gave back the part of the job I loved most. This post is the other side of that coin. What I didn't fully account for is what happens when you can't turn it off. Everything is a spec now. Every task gets decomposed, narrated, sequenced out loud before I execute. That's how I work all day: dictating specifications into voice-to-text, talking through architecture in transcribed code reviews, feeding structured prompts to Claude. And apparently my brain doesn't turn it off when I walk into the garage.
An MSN article on AI brain fry landed in my feed recently and stopped me cold. The headline described exactly what I'd been feeling but hadn't been able to articulate. Then I read the Harvard Business Review study it was referencing. Researchers had given a name to the thing that had been living in my nervous system for months. And once I saw it named, I couldn't unsee it.
The Feeling That Didn't Have a Name
After a long, intense session with Claude (deep in the zone, pair-programming through a complex .NET architecture problem, iterating on SQL queries, refactoring layers of business logic) I'd finally close the laptop and try to wind down. But my brain wouldn't stop. It wasn't the normal buzz of a productive day. My thoughts were racing, but not toward anything useful. I felt wired but empty. Anxious, but not about anything specific. Like my mind was stuck in fifth gear with nowhere to go.
The hardest part is decoupling before sleep. I'll close my eyes and sometimes I can still see text flying by. Not metaphorically. I mean lines of code, fragments of responses, the cadence of a chat interface scrolling. Like a ghost image burned into my retinas. It's the same thing gamers describe after marathon sessions, except my screen wasn't showing explosions. It was showing architecture diagrams and JSON payloads. My brain had been processing language and logic at such intensity for so long that it couldn't find the off switch.
If you've experienced anything like this, that inability to downshift after hours of prompting, evaluating, re-prompting, and judging AI outputs, you're not imagining it. Researchers now have a name for it: AI brain fry.
Science Catches Up to What We Already Felt
In March 2026, researchers from Boston Consulting Group and the University of California, Riverside published a study in the Harvard Business Review that put data behind what a lot of us power users already knew in our bones. They surveyed nearly 1,500 full-time American workers and found that about 14 percent reported experiencing a distinct kind of mental fog after intensive AI use: difficulty concentrating, slower decision-making, headaches, and a general sense of cognitive overload.
The numbers that jumped out at me: workers experiencing AI brain fry reported making 39 percent more major mistakes than their non-fried colleagues and saw a 33 percent increase in decision fatigue. Those aren't rounding errors. For someone like me who writes code that goes into production, who builds SQL queries that drive real business decisions, those numbers are sobering. Workers whose AI use required high oversight expended 14 percent more mental effort and experienced 19 percent greater information overload. The researchers call this "oversight load," and it's the single biggest driver of brain fry.
The paradox is real: AI is supposed to make us more productive, but the cognitive cost of supervising it may be eating into those gains in ways we haven't fully measured. The AI can run far ahead of us, but we're still here with the same brains we had yesterday. That's exactly what it feels like. The tool is faster than my ability to judge its output, and my brain is burning calories trying to keep up.
The Seduction of Speed
Here's what the studies don't fully capture: the sheer momentum of working with AI is intoxicating. Tasks that used to take a full day can be started and mostly finished in an hour. That friction of getting started (the blank file, the blinking cursor, the mental overhead of scaffolding a solution from nothing) is largely gone. You describe what you want, and something plausible appears. You iterate. You ship. The barrier to starting any task has dropped so much that you find yourself starting everything. More features. More refactors. More ambitious timelines.
And the output looks good. That's the seductive part. The code is clean. The patterns are reasonable. The variable names make sense. So you start trusting it. You review a little faster. You skim where you used to scrutinize. You move on because the next thing is already queued up and the momentum feels too good to break. Speed becomes the default mode, and the critical eye that used to catch subtle issues gets quieter because the AI's output passes the smell test often enough that your brain starts pattern-matching toward acceptance rather than skepticism.
This is where the fry compounds. You're not just tired from one intense session. You're tired from a pace of work that AI made possible but your brain didn't evolve to sustain. More features shipped means more decisions made. More code reviewed means more context held in working memory. More output means more things to think about when you're supposed to be sleeping.
The Session Trap
I've shifted my workflow heavily toward spec-driven development, writing detailed specifications first, then letting AI help me implement. Most days I'm not even typing the specs. If you want context for how deep that goes, "I Haven't Typed Code in Months. I've Never Shipped More" describes the day-to-day of working this way — the same sustained intensity that this post is about managing. I'm using Voice Ink, a voice-to-text tool, to dictate them. I'll sit down, open a recording, and talk through what I need: the data model, the business rules, the edge cases, the expected behavior. Then I hand that spec to Claude and we iterate on the implementation.
This works brilliantly. It forces clarity of thought before a single line of code gets written. But it also means my sessions are long. And here's where Anthropic's token throttling enters the picture. Claude limits how many tokens you can use in a rolling four-hour window. You'd think that would be a natural break point, a built-in guardrail. Instead, it became the opposite. I'd hit the throttle, wait for tokens to replenish, and keep going. The session never really ended. I was stuck in an almost endless work-wait-work loop, stretching what should have been a focused sprint across an entire day. The throttle didn't force me to rest. It just made the session longer and more fragmented, which is probably worse for cognitive recovery.
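The throttle mechanic above is essentially a rolling-window token budget, and modeling it makes the trap obvious. Here's a minimal Python sketch of how such a window behaves. The numbers and mechanics are illustrative assumptions, not Anthropic's actual implementation; the point is that capacity returns gradually as old usage ages out, rather than the session ever cleanly ending:

```python
from collections import deque
from typing import Deque, Optional, Tuple
import time

class RollingTokenWindow:
    """Toy model of a rolling-window token budget.

    Illustrative only: the budget size and window length are hypothetical,
    not Anthropic's actual throttling implementation.
    """

    def __init__(self, budget: int, window_seconds: float = 4 * 3600):
        self.budget = budget
        self.window = window_seconds
        self.events: Deque[Tuple[float, int]] = deque()  # (timestamp, tokens)
        self.used = 0

    def _expire(self, now: float) -> None:
        # Capacity comes back only as old usage ages out of the window,
        # a slow trickle that invites "wait a bit, then keep going."
        while self.events and now - self.events[0][0] >= self.window:
            _, tokens = self.events.popleft()
            self.used -= tokens

    def try_spend(self, tokens: int, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        self._expire(now)
        if self.used + tokens > self.budget:
            return False  # throttled: nothing to do but wait
        self.events.append((now, tokens))
        self.used += tokens
        return True
```

A session that burns most of the budget early gets throttled, waits for older usage to expire, spends the trickle as it returns, and repeats. Because the window slides continuously, there is never a clean "you're done for the day" boundary — exactly the work-wait-work loop described above.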
I go hard in my Claude sessions. I know that about myself. And I've learned, the hard way, that I need real separation afterward. Not a five-minute break. Not switching to email. Actual cognitive distance. The kind where my brain has permission to process nothing for a while.
The Hidden Cost: Managing the Amplified Output
There's another dimension that the studies are only starting to touch on. AI doesn't just fry the individual using it. It amplifies the cognitive burden on everyone around them. As a senior developer and architect, I'm not just managing my own AI-augmented output. I'm managing a team of developers who are all producing more. Every developer on my team is shipping more code, more features, more pull requests than they were a year ago. That's the promise of AI productivity, and on paper it looks great.
But someone has to review that output. Someone has to hold the architectural vision. Someone has to wade through the details, catch the inconsistencies, make sure that five developers moving faster don't create five slightly different approaches to the same problem. That someone is me, and the mental capacity required to manage an AI-amplified team is substantially more than managing a team producing at pre-AI rates. The volume hasn't just increased. The complexity of oversight has increased. More code means more surface area for bugs. More features means more integration points to reason about. More velocity means less time to think about whether we're building the right things.
I find myself spending more mental energy than ever just staying on top of improvements, thinking about what's next, evaluating whether the things we shipped yesterday still hold up against what we're building today. The thinking never stops because the building never stops.
The Language Shift No One's Talking About
Here's what none of the studies capture yet: AI isn't just fatiguing us during work. It's changing how we talk and how we think, all the time, everywhere. That garage moment wasn't a one-off. It was a symptom.
Because I spend most of my working hours dictating specs through voice-to-text, I've trained myself to think in structured, explicit, machine-parseable language. Our team's code reviews are transcribed now too, and that changes how you speak in them. You become more deliberate, more structured, more explicit. You stop using shorthand and inside jokes because the transcription won't capture the context. You front-load your point because the AI summary will prioritize what comes first. You're not just talking to your teammates anymore. You're talking to your teammates and the machine that's listening.
That dual-audience awareness is cognitively expensive. You're running two communication protocols at once: human and machine. And gradually, for efficiency's sake, you start optimizing for the machine. Because the machine is more demanding about structure and less forgiving about ambiguity. The human listeners adapt. The machine won't. So you develop this hybrid communication style, part human conversation and part structured prompt, and it follows you out of the office. Into meetings. Into dinner conversations. Into the garage while your wife is in the kitchen.
I've caught myself doing it while cooking, planning a weekend trip, sorting through storage bins. My internal monologue has become an external specification. I'm not sure if that makes me more efficient or less human, and the fact that I can't tell is part of what concerns me.
The Throughput Debate
The HBR study suggests that AI brain fry might represent a net step backwards in overall throughput, that the cognitive costs could be eating the productivity gains. I see the angle, and I respect the data. But that's not what I see in practice. The output is real. The features are shipping. The architecture is better because I had an AI helping me think through edge cases I would have missed working alone. I'm building things in weeks that would have taken months.
The issue isn't that throughput has decreased. It's that doing more comes at a cost that we haven't learned to account for yet. We're measuring the output but not the toll. And that disconnect is creating a new set of questions that nobody in our industry is prepared to answer.
The Questions Nobody Wants to Ask
Here's where it gets uncomfortable. AI adoption across companies is uneven. Some teams are all in. Some haven't started. Some developers are producing at 2026 rates and some are still producing at 2024 rates. And in the gap between those two realities, some genuinely difficult moral and ethical questions are forming.
Is the output of a senior developer one or two years ago still acceptable? That question sounds harsh, but clients are already asking it implicitly. When they see what's possible with AI-augmented teams, their expectations recalibrate. Timelines that were reasonable eighteen months ago now feel slow. Deliverables that were impressive in 2024 now feel thin. The bar has moved, and it's moved in a direction that assumes everyone has access to these tools and the cognitive capacity to wield them indefinitely.
The argument becomes "but now we can do more." And yes, we can. But at what cost? Are we comfortable building client expectations around a pace of work that requires developers to run their brains at unsustainable levels? Are we comfortable with the implicit message that pre-AI output levels are now underperformance? What happens to the developers who can't or won't adopt these tools at the same intensity? Are they less valuable, or are they the ones with healthier boundaries?
These questions don't have easy answers, and the slow, uneven adoption of AI across organizations makes them even messier. A company where half the team is AI-augmented and half isn't is a company with two different definitions of productivity, two different sets of expectations, and a growing tension between them. That tension is showing up in performance reviews, in hiring decisions, in how we scope projects and set deadlines. And nobody has a framework for navigating it yet.
Too Early to Know What We're Doing to Ourselves
Beyond the immediate fatigue, there's a longer-term question that genuinely worries me. Nine months of going all-in on agentic development — documented in "Nine Months in the Trenches" — gave me a lot of the productivity gains described in that post. It also gave me a front-row seat to exactly the cognitive costs this section is about. Academic research on cognitive offloading suggests that heavy AI dependence may erode the critical thinking skills that make us good at our jobs. A 2025 study by Gerlich found a negative correlation between frequent AI usage and critical thinking abilities. When you let AI do the heavy cognitive lifting, your brain gets fewer reps.
Researchers frame this as the difference between scaffolding (where AI helps you build a capability you eventually internalize) and substitution (where AI does the thinking for you and your own capacity atrophies). As a .NET architect who's been writing SQL and designing systems for years, I sometimes catch myself accepting an AI-generated query without fully tracing the execution plan in my head. That's a red flag. The day I stop being able to mentally walk through a query optimizer's decisions is the day I've given away too much.
But here's the honest truth: this needs to be taken seriously, and it's too early to see the full effects on the brain. We are running an uncontrolled cognitive experiment on an entire generation of knowledge workers. The technology is barely two years into mainstream adoption. We don't have longitudinal studies. We don't have neuroimaging data on what sustained AI-intensive work does to attention, memory, and executive function over years. We're flying blind, and the pace of adoption isn't waiting for the science to catch up.
The early signals (the brain fry, the phantom text behind closed eyelids, the inability to downshift, the cognitive patterns leaking into everyday life) are warning lights on a dashboard we're only beginning to read.
What I'm Doing About It
I'm not going to stop using AI. I'm not going back to writing every line by hand. The spec-driven workflow is genuinely better for producing solid, well-thought-out systems. But I am changing how I manage the cognitive cost.
Hard session limits. Ninety minutes max, then I step away. Not to email, not to Slack. To nothing. Hit 25 golf balls into the net. Stare at trees. Let my brain idle. And when I hit a token throttle, that's my cue to stop, not to wait it out and keep going.
Doing some things the hard way on purpose. At least once a day, I write code from scratch without AI. I manually rework the Azure Pipelines plan. I sketch architecture on paper before dictating a spec. This is how I keep the scaffolding from becoming substitution.
Practicing unstructured thought. I'm deliberately making time to think in ways that aren't spec-shaped. Reading fiction. Grabbing my guitar and improvising for fifteen minutes over a drum track. Letting my mind wander. It's a counterweight to a workday tuned entirely for quality and maintainability. My brain needs to remember that not everything is a prompt.
Protecting the hours before sleep. No AI sessions within two hours of bedtime. That gap had shrunk to ten or fifteen minutes, and that isn't healthy. When I ignored this rule, I'd lie in bed watching phantom code scroll behind my eyelids. The decoupling time isn't optional. It's a requirement.
Listening to the people around me. My wife noticed the narration habit before I did. My boss has brought it up too, even referencing that same MSN article. When the people in your life start pointing out that you're talking differently, thinking differently, behaving differently, pay attention. They're seeing something you can't see from the inside.
Resisting the speed trap. Just because AI lets me review faster doesn't mean I should. The output looks good. It usually is good. But "usually" isn't good enough for production code, and trusting AI because it passes the smell test is a habit I'm actively working to break.
The Conversation We Need to Have
The tech industry is pushing AI adoption at a pace that doesn't account for the human brain on the other end. Companies are measuring adoption rates and output metrics, but almost nobody is measuring the cognitive cost. The BCG researchers put it plainly: organizations need to monitor cognitive load as a job-related risk, the same way they'd monitor physical safety on a factory floor.
We're in an awkward middle period. AI is powerful enough to transform how we work but not autonomous enough to work without constant human supervision. That gap between what AI can do and what our brains can sustainably oversee is where brain fry lives. And the ethical questions about expectations, about what counts as acceptable output, about the uneven adoption across organizations? Those aren't going away. They're going to get louder.
We're not just fatigued. We're changed. The way we speak, the way we process tasks, the way we narrate our own lives. All of it is bending toward a mode of cognition optimized for machines. Some of that bending is useful. Some of it is loss. And most of us haven't stopped long enough to figure out which is which.
So if you've been feeling it (the fog, the inability to downshift, the phantom text behind your eyes at night, the creeping sense that your brain has been running a marathon it didn't sign up for) know that it's not weakness. It's a real response to a genuinely new kind of cognitive demand. This needs to be taken seriously. It's too early to see the full effects. And the research is just barely catching up to what our nervous systems have been screaming at us for months.
Take the break. Close the chat. Clean out the garage in silence. Let your brain remember what it sounds like when it's just yours.