I keep noticing this in myself, which is probably why it sticks out.
Not that AI gets things wrong — that part is obvious. It does, all the time.
It's how easy it is to go along with it anyway.
Cognitive surrender
There's this idea from a recent study that keeps coming back to me: "cognitive surrender."
Which sounds dramatic, but the more I think about it, the more it feels uncomfortably accurate.
Not because people are lazy or incapable. Just because the experience of using these systems nudges you in a certain direction.
You ask a question. You get a clean, confident answer. It sounds right.
And somewhere in that process, the part of you that would normally pause and double-check just… doesn't show up.
The numbers
The study puts numbers to it, which makes it harder to ignore.
People followed correct AI advice most of the time — that's expected.
But they also followed incorrect advice almost 80% of the time.
That's the part that sticks.
Not that the model was wrong. That people stayed confident even when it was.
Fluency, not intelligence
I don't think this is really about intelligence.
It's about fluency.
LLMs don't just give answers — they give finished answers. No hesitation, no visible uncertainty, no rough edges.
And we're not used to treating that as something that might still be wrong.
The new habit
What's weird is how quickly this changes your habits.
When answers are always one step away:
You check less. You verify less. You feel more certain, faster.
Even when your actual understanding hasn't really caught up.
I've caught myself doing it
Reading something, thinking "yeah, that sounds right," and moving on. Not because I verified it, but because nothing in the response made me feel like I needed to.
That's new.
Or at least, newly noticeable.
Borrowed confidence
The problem isn't just that the model guesses sometimes.
It's that the guess arrives fully formed, in perfect language, with no signal that it is a guess.
And that makes it very easy to borrow its confidence without earning it.
What it really means
Maybe that's what "cognitive surrender" really is.
Not some dramatic loss of agency.
Just a slow shift:
from thinking something through, to recognizing something that sounds right, to eventually just… accepting it.
Still adapting
I don't think we've figured out how to live with that yet.
We're still treating fluency like a proxy for truth.
And it isn't.
But it's close enough to feel like it is — which might be the more important problem.