Orbrya
Oliver Skinner · 2026-03-05

The Gerlich Study: What the Research Actually Says About AI and Your Child's Brain


7 min read


There's a number circulating in AI and critical thinking research discussions that deserves more careful treatment than it usually gets.

r = −0.75.

It comes from a January 2025 study published in MDPI's Societies journal by Michael Gerlich of SBS Swiss Business School in Zurich. The study tracked 666 participants and measured the relationship between frequent AI tool use and critical thinking ability. That r = −0.75 describes how strongly those two things moved together, and in which direction.

The direction was negative. The stronger the AI use, the lower the critical thinking scores.

In social science research, r = −0.75 is a strong correlation. Strong enough that when this number appears in education discussions, it tends to get cited quickly, shared widely, and stripped of the context that makes it meaningful. That's a problem, because the context matters, and because you're going to use this research to make decisions about your child's education.
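To make the headline number concrete, here's a minimal sketch of how a Pearson correlation coefficient is computed. The data below is entirely synthetic and invented for illustration (this is not the study's data); it just shows what a strong negative relationship between two noisy variables looks like numerically.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic illustration only: simulated "AI use" paired with noisy,
# downward-trending "critical thinking scores" for 666 people.
random.seed(0)
ai_use = [random.uniform(0, 10) for _ in range(666)]
scores = [100 - 5 * u + random.gauss(0, 12) for u in ai_use]

print(round(pearson_r(ai_use, scores), 2))  # strongly negative
```

The takeaway: an r near −0.75 means the two measures move in opposite directions tightly enough that knowing one tells you a lot about the other — which is exactly why the number travels so fast, and why the caveats below matter.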

So let's look at what it actually shows. And what it doesn't.


What the Study Found

Gerlich's study used a mixed-methods design, combining survey data with cognitive assessments across 666 participants. Participants reported their frequency and intensity of AI tool use, then completed assessments measuring critical thinking ability, specifically their capacity for analysis, evaluation, and independent reasoning.

The core finding: frequent, heavy AI use correlated with lower critical thinking scores at r = −0.75. The mechanism Gerlich proposed is what he called cognitive offloading, the brain's tendency to stop maintaining capabilities it no longer needs to exercise.

This isn't a new concept. We already know it applies to other tools. Most adults who've relied on GPS navigation for years have noticed their internal sense of direction quietly degrading. Spellcheck has made many of us worse spellers. This isn't a moral failing. It's how brains work. They're efficient. They don't maintain skills that are being outsourced to tools.

The concern with AI is that the outsourcing is happening at a much broader level. Navigation and spelling are discrete skills. The cognitive tasks AI handles (drafting arguments, synthesizing information, drawing conclusions, answering questions) are much closer to the core of what we mean when we talk about thinking.

Gerlich also found that younger participants showed higher AI dependence and lower critical thinking scores than older participants. Children and teenagers, the population families reading this are thinking about, may be the most vulnerable to this effect.


What the Study Doesn't Prove

This is where intellectual honesty requires slowing down.

Correlation is not causation. This is the most important caveat, and it isn't a technicality. It changes the interpretation meaningfully. The study shows that heavy AI use and lower critical thinking scores tend to appear together. It does not prove that AI use caused the lower scores.

There's a plausible alternative: people who already have lower critical thinking ability may be more likely to lean heavily on AI tools, rather than the other way around. The relationship could run in the opposite direction, or both directions at once, or be driven by a third factor Gerlich didn't measure.
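That alternative can be made concrete with a toy simulation. In the sketch below, a hypothetical latent trait (say, baseline reasoning skill) drives both heavier AI reliance and lower test scores, while AI use has zero causal effect on scores — yet a strong negative correlation shows up anyway. Every number here is invented for illustration.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n = 666

# Hypothetical confounder: each person's baseline reasoning skill.
trait = [random.gauss(0, 1) for _ in range(n)]

# Lower-trait people lean on AI more; note there is NO causal arrow
# from ai_use to scores anywhere in this model.
ai_use = [5 - 2 * t + random.gauss(0, 1) for t in trait]
scores = [70 + 10 * t + random.gauss(0, 5) for t in trait]

print(round(pearson_r(ai_use, scores), 2))  # strongly negative anyway
```

A cross-sectional correlation can't distinguish this scenario from the one where AI use causes the decline — that's the whole point of the caveat, not a dismissal of the finding.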

The sample has limits. A sample of 666 participants is meaningful: large enough to take seriously, small enough to warrant caution about generalizing. We don't know how representative these participants are of children ages 8 to 16, or of families supplementing their child's education, or of students in different educational contexts.

This is not a longitudinal study. Gerlich measured AI use and critical thinking at a single point in time rather than tracking the same people over months or years. We don't have data showing that someone's critical thinking declined after they increased AI use. That would be stronger evidence. This is a snapshot.

A correction was published. In September 2025, MDPI published a correction to Gerlich's study. The core findings remain intact, but the research is ongoing rather than settled.


Why It Still Matters

None of those caveats mean you should ignore this research. They mean you should understand it accurately, which is itself an AI literacy skill.

Here's what the study does establish, stated carefully: among the participants studied, there was a strong statistical association between heavy AI use and lower critical thinking ability. The proposed mechanism, cognitive offloading, is theoretically grounded and consistent with what we already know about how brains respond to tool use. Younger participants showed the pattern more strongly than older ones.

That's not nothing. It's a genuine signal worth paying attention to, especially alongside what else we know.

The Brookings Institution published a major report in January 2026, based on consultations with over 500 stakeholders across 50 countries and a review of more than 400 studies. Their conclusion was direct: AI risks "undermine children's foundational development," and they described a "doom loop of AI dependence" where students increasingly offload thinking onto AI, which erodes the capacity they need to evaluate AI effectively, which makes them more dependent, which erodes capacity further.

That's the Gerlich mechanism described at a systems level.

And from RAND's nationally representative survey of parents: 61% agreed that greater AI use will harm students' critical thinking skills. These parents aren't reading academic journals. They're watching their kids, and something is registering.


The Distinction That Changes Everything

The finding from this body of research that I keep coming back to is from researcher Chahna Gonsalves at King's College London, who identified two separate forms of critical thinking in AI-assisted learning. The first is critical thinking for the assignment: using your reasoning to analyze and apply information. The second is critical thinking toward the AI: interrogating the output itself, asking whether it's accurate, checking its sources, evaluating its reasoning.

Most students are only doing the first. The Gerlich correlation is, at least in part, measuring what happens when students outsource the cognitive work entirely, using AI as a replacement for thinking rather than a tool to think with.

The implication isn't "use less AI." It's "use AI differently."

Students who use AI as a starting point and then verify, interrogate, and evaluate what it produces aren't offloading their critical thinking. They're exercising it. The cognitive task shifts from generating ideas to evaluating ideas, which is actually a higher-order skill.

That's the distinction Orbrya is built around. Not AI avoidance. AI oversight.


What This Means Practically

If you caught our earlier post on the one skill worth teaching regardless of your views on AI, you already have the framework: verification habits built early, practiced regularly, framed as a game rather than a burden.

If you read our post on the Spot the Lie game, you've already seen one practical entry point.

The Gerlich study gives you the research backbone for why this matters. Here's how to use it honestly with other parents:

You can say: "There's a peer-reviewed study with 666 participants showing a strong negative correlation between heavy AI use and critical thinking ability. The researcher thinks cognitive offloading is the mechanism. It's not proof that AI causes the decline, but it's a signal worth taking seriously."

What you shouldn't say is "research proves AI makes kids dumber." That's not what the study shows, it's not careful, and it won't hold up when someone asks a follow-up question, which, if you're teaching your kids to ask follow-up questions, they will.


The Bigger Picture

One study doesn't settle a question this large. But multiple independent data streams pointing in the same direction (cognitive offloading research, parental surveys, institutional analyses, employer feedback about new hires who can't work without AI assistance) start to form a coherent picture.

The picture is this: using AI without learning to evaluate AI produces a specific kind of cognitive risk. The risk shows up in research. It shows up in hiring. It shows up in classrooms.

But here's what the Gerlich data also implies, and what gets lost when this study circulates: r = −0.75 is a correlation between heavy AI use and lower critical thinking. It says nothing about students who use AI critically, who treat AI output as a claim to evaluate rather than a fact to accept. That kind of use isn't cognitive offloading. It's cognitive exercise.

The study doesn't indict AI. It indicts passive AI use. And passive versus active is a choice families can make deliberately, starting with something as simple as asking "how do you know?" after every AI interaction.

That's what the Gerlich finding is actually pointing toward. Not less AI. Better habits around AI. And those habits are entirely teachable.


Next up: The Two Types of Critical Thinking Every Parent Needs to Understand, the Gonsalves research that explains exactly why the verification skill is different from everything else being taught under the "AI literacy" label.



Originally published on Hashnode