The Zombie Developer
Two weeks ago during a code review, I realized something a bit unsettling.
A teammate (with 3 years of experience) couldn’t explain why he used `reactive` instead of `ref` in a Vue component.
“Cursor suggested it,” he said.
It wasn’t the first time — and I suspect it won’t be the last.
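For the record, the distinction he couldn’t articulate is a real one. A minimal sketch (Vue 3, TypeScript):

```ts
import { ref, reactive } from 'vue'

// ref wraps any value, primitives included, and exposes it via .value
const count = ref(0)
count.value++

// reactive only works on objects; you mutate it directly,
// but destructuring it silently breaks reactivity
const state = reactive({ count: 0 })
state.count++
```

The choice shapes how state travels through a component. If you can’t defend it, the assistant made it for you.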
In fact, I experienced it myself not long ago: we lost access to our Cursor licenses for 72 hours and couldn’t use Claude’s latest model.
My productivity plummeted. I felt frustrated, clumsy, slow. Tasks that used to take minutes became painful.
Some teammates even paid out-of-pocket for individual licenses just to keep working “normally.”
That’s when I realized something: I had crossed the line too — without noticing.
What line? I’ll explain that in a minute.
The truth is, embedded AI assistants have gone from occasional tools to full extensions of our cognitive process.
And according to recent science, they’re changing our brains.
And not for the better.
The Evidence Is Uncomfortable
Your Brain on ChatGPT
In one of the most detailed studies to date on how AI affects the human brain, MIT tracked the brain activity of 54 students over four months.
The study, "Your Brain on ChatGPT", revealed chilling results:
Students who used ChatGPT showed “the weakest neural connectivity” of all groups — with up to 55% less brain activity compared to those who wrote without assistance.
But the worst part wasn’t the numbers — it was what happened when heavy users were asked to work without AI:
They failed to activate the same brain regions, producing “linguistically bland” content with weaker memory recall.
78% of them were unable to recall what they had just written minutes before.
As if the content had passed through their brains without leaving a trace.
Cognitive Debt
MIT researchers coined a new term: cognitive debt —
a state where outsourcing mental effort to AI weakens learning and critical thinking.
Microsoft and Carnegie Mellon confirmed the pattern in a study of 319 knowledge workers:
The more users rely on generative AI, the less critical thinking they apply — potentially “leading to deterioration of faculties that should be preserved.”
In their words, with a touch of dark humor:
“A key irony of automation is that by mechanizing routine tasks, it deprives users of opportunities to exercise judgment and strengthen cognitive muscle — leaving them atrophied and unprepared when exceptions arise.”
The Principle You Can’t Ignore
“Use It or Lose It”
Neuroscience is clear on this: neural circuits that aren’t regularly activated begin to degrade.
This principle — “Use it or lose it” — is one of the ten core principles of neuroplasticity:
Just like unused muscles weaken over time, neural pathways that aren’t engaged consistently tend to deteriorate — and may eventually be pruned away.
When systems in the body are underutilized, they atrophy.
And the energy required to maintain them gets rerouted elsewhere.
Read that again.
Vicious Circles
A longitudinal study documented “vicious cycles of skill erosion,” where increased reliance on automation fostered complacency, weakening conscious attention —
and leading to degradation that remained hidden until failure.
In case it’s not clear:
People remain accountable for tasks they no longer understand, making them incapable of responding when automation fails.
The Perpetual Junior Syndrome
Developers Who Never Grow Up
In our field, this translates to developers who never develop real technical intuition.
Today’s juniors, who skip “the hard way,” may stall early — lacking the depth to grow into senior roles.
If an entire generation never experiences the satisfaction of solving problems truly on their own,
we may end up with a workforce that can only function with AI guidance.
And when AI is wrong — which happens constantly, in subtle and not-so-subtle ways — they won’t catch it.
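What does “subtle” look like in practice? A classic (hypothetical) case: an assistant suggests destructuring a reactive object. It type-checks, it ships, and reactivity quietly dies:

```ts
import { reactive, toRefs } from 'vue'

const state = reactive({ count: 0 })

// Looks harmless, but `count` is a one-time snapshot:
// mutating state.count later will never reach the UI.
const { count } = state

// The fix keeps the reactive link: countRef is a Ref<number>.
const { count: countRef } = toRefs(state)
```

A developer with real intuition spots this in review. One trained only to accept suggestions does not.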
A (Too) Common Example
A junior generates a full form component with AI. It ships to production. It works.
But they don’t understand why there’s an `emit`, what `v-model` is (just an example), or how shared state is managed.
The code exists.
The understanding doesn’t.
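To make it concrete, here’s a hypothetical, stripped-down version of the kind of component I mean:

```vue
<script setup lang="ts">
// A custom input: the parent writes v-model="name", which the compiler
// expands into :modelValue="name" plus an @update:modelValue listener.
defineProps<{ modelValue: string }>()
const emit = defineEmits<{ (e: 'update:modelValue', value: string): void }>()
</script>

<template>
  <input
    :value="modelValue"
    @input="emit('update:modelValue', ($event.target as HTMLInputElement).value)"
  />
</template>
```

If you can’t explain why the emitted event must be named `update:modelValue`, you didn’t really write this component. The assistant did.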
Now imagine this isn’t a one-off.
Imagine this pattern repeats every single day, across dozens of teams, hundreds of devs, thousands of lines of code:
- Components deployed without ever being understood.
- Decisions made without awareness.
- Interfaces built from ignorance disguised as productivity.
Where do you think that ecosystem ends up?
Let me be blunt:
Toward a technical culture where form outweighs substance, where output is valued more than understanding, and where critical thinking becomes an obsolete luxury.
A sector full of fast executors… without judgment.
The Warning Signs
Passive dependency:
- Are you forgetting basic API calls or language idioms?
Loss of ownership:
- Can you explain your technical decisions without saying “Cursor suggested it”?
Inability to go deep:
- Do you feel stuck when AI can’t solve something? (I’ve been there.)
Homogenization:
- AI users produce “statistically homogeneous” content — solutions that are mainstream, generic, and disconnected from the actual technical or social context they operate in.
Countermeasures: Protect Your Cognitive Capacity
At FrontendLeap, this is crystal clear: AI should elevate you — not replace you.
That’s why I teach people to use these tools for what they really are: multipliers of what you already understand, not prosthetics for what you don’t.
The problem isn’t using AI.
The problem is using it before thinking.
These are the three resistance principles I operate by:
Master the essentials without help
Before using assistants, you should know:
- Language fundamentals.
- How to debug manually.
- How to sketch basic architecture before writing a line of code.
- How to implement core patterns (a quick self-test follows this list).
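As that self-test (my example, not one from the studies above): could you write a core pattern like debounce from memory?

```ts
// Debounce: collapse a burst of calls into one, fired after the burst
// ends. The kind of pattern you should be able to write unaided.
function debounce<T extends (...args: any[]) => void>(fn: T, delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: Parameters<T>) => {
    clearTimeout(timer)
    timer = setTimeout(() => fn(...args), delayMs)
  }
}

// Usage: fires only after the user stops typing for 300ms.
const onSearch = debounce((query: string) => console.log(query), 300)
```

If you reach for an assistant here, that’s your signal.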
Use AI as an extension — not a crutch
- Never accept without understanding.
- Rewrite, question, validate.
- Let your own judgment be the filter.
Separate thinking from execution
- Avoid embedded assistants while designing.
- Query the model only after forming a hypothesis.
- Add friction: copy, edit, understand, validate.
The Real Cost of Not Thinking
Automation promises efficiency.
But recent studies add a warning: if you’re worried AI might replace you, using it uncritically may be exactly how you degrade your own skills into irrelevance.
Let’s be clear: the fault won’t lie with the AI.
It will lie with how you chose to use it.
This isn’t about rejecting AI.
It’s about using it without becoming cognitively dependent on it.
Because when AI fails — and it will — you’ll need a brain that still knows how to think.