The Doom Loop: How AI Dependence Quietly Erodes Your Child's Ability to Think
7 min read
In January 2026, the Brookings Institution released one of the most comprehensive analyses of AI dependence in children's education published to date.
The researchers, Mary Burns, Rebecca Winthrop, and their colleagues at Brookings' Center for Universal Education, consulted with over 500 stakeholders across 50 countries and reviewed more than 400 studies. Their conclusion was not a cautious hedge. It was direct: AI risks "undermine children's foundational development," and the mechanism they described has a name that's hard to shake once you've heard it.
They called it the doom loop of AI dependence.
Here's how the loop works. A student uses AI to handle cognitive tasks: drafting, synthesizing, reasoning, answering questions. Over time, their capacity to perform those tasks independently begins to weaken, because the brain doesn't maintain skills it isn't using. As their independent capacity weakens, they become more reliant on AI to compensate. Which weakens their capacity further. Which increases their reliance.
The Brookings report states plainly: "At this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits."
That's a significant claim from a significant institution. It deserves to be taken seriously and understood precisely.
What Brookings Is and Isn't Saying
The Brookings Institution is a nonpartisan policy research organization. Their education research doesn't have an agenda beyond understanding what the evidence shows. When they say the risks currently outweigh the benefits, they're making a timing and implementation argument, not an anti-AI one.
The problem, as they frame it, isn't AI itself. It's AI used without the cognitive framework to use it well.
A student who has developed strong foundational skills (reading comprehension, analytical reasoning, the ability to evaluate sources, the habit of asking whether information is correct before accepting it) can use AI productively. The tool augments capabilities that are already there.
A student who hasn't developed those foundations, and who begins using AI before they have, is in a different situation. The tool doesn't augment anything. It substitutes. And substitution, the research increasingly suggests, has developmental costs.
This is the timing argument that makes AI literacy so urgent for children, and why verification skills need to be taught before or alongside AI use, not as an afterthought.
The Mechanism: What Cognitive Offloading Does to a Developing Brain
The doom loop isn't a metaphor. It has a cognitive science basis worth understanding.
As we covered in our earlier post on the Gerlich study, cognitive offloading describes what happens when brains outsource tasks to external tools. The brain is efficient: it doesn't maintain capabilities it doesn't need to exercise. This is adaptive for adults who have already built those capabilities. For children who are still building them, it's a different matter entirely.
The developmental window matters. The cognitive skills involved in critical reasoning (analysis, synthesis, evaluation, the ability to interrogate information rather than just receive it) are built through practice during specific developmental periods. A child who practices these skills builds neural pathways and cognitive habits that persist. A child who outsources these tasks during the same window may not build them in the same way.
There's a meaningful difference between a 40-year-old who starts using GPS and gradually loses their sense of direction, and a 10-year-old who uses GPS from the beginning and never develops one. The adult had the skill and lost it through disuse. The child may never build it at all.
The Brookings researchers, drawing on 400+ studies, found this pattern appearing across multiple cognitive domains: not just critical thinking, but also reading comprehension, problem-solving, and what they describe as "foundational development" more broadly.
What Parents Are Actually Observing
The research aligns with what parents report when they're honest about what they're seeing.
A nationally representative RAND survey found that 61% of parents agreed that greater AI use will harm students' critical thinking skills. These aren't parents who read the Brookings report. They're watching their kids do homework, and something is registering.
An analysis of parent commentary across forums and discussion groups found that concern was the dominant emotion, appearing more than twice as often as any positive sentiment. Parents described kids who finish homework in five minutes but can't explain what they turned in. Kids who produce polished writing that doesn't sound like them. Kids who struggle with tasks they should be able to handle when asked to work without AI.
One parent captured it precisely: "The teacher is of the opinion that 'these kids will always have a device on them.' Everything is open book. Except we see the kids failing at both using the tools and doing the work. They understand neither."
That's the doom loop in a household. The tool didn't improve performance. It masked the absence of the underlying skill, until the moment the mask came off.
The School Policy Vacuum Makes It Worse
The doom loop dynamic is significantly worsened by the fact that most schools aren't addressing it.
A CRPE and USC survey using a nationally representative sample found that 96% of families with elementary-aged children either didn't know about any school AI policy or said their school hadn't communicated one. For secondary school families, 83% reported the same.
A Gallup/Walton Family Foundation survey found that only 19% of teachers say their school has a meaningful AI policy. RAND found that 55% of schools have no formal AI guidance at all.
This means most students are navigating AI use without institutional guidance. They're not being taught how to evaluate AI output, because their school hasn't figured out how to teach it. They're not being taught when to use AI versus when to work independently, because no one has drawn that line. They're left to develop their own habits, and the default habit for most kids is convenience.
Convenience means using AI to reduce effort. That's how the loop starts.
The families who interrupt this dynamic aren't waiting for schools to sort it out. They're building verification habits at home, through conversation, through games like the Spot the Lie activity we described earlier, through consistently asking the question that matters most: how do you know that's true?
The Exit Ramp Exists
The doom loop isn't inevitable. Brookings isn't arguing that AI will inevitably harm all children. They're arguing that AI used without cognitive scaffolding (the skills to evaluate it, question it, and override it when it's wrong) creates risks that currently outweigh benefits.
The scaffolding changes the equation.
A student who has been taught to treat AI as a starting point rather than a conclusion, who reflexively asks whether what AI produced is accurate, and who knows the questions to apply before accepting any AI output is not in the doom loop. They're using AI as a tool for thinking, not a replacement for it.
That's the difference between AI dependence and AI mastery. And it's learnable. The research on metacognitive skills is clear that these habits can be built, that they transfer across subjects and contexts, and that children who develop them early maintain them.
The question for families isn't whether to engage with AI. Their kids are already engaging with it, with or without parental involvement. College Board surveys found 84% of high school students used AI for schoolwork in 2025, up from 79% just five months earlier. Avoidance isn't a realistic strategy.
The question is whether to engage actively, building evaluative skills alongside usage habits, or passively, leaving children to develop their own relationship with a tool that has a strong pull toward convenience and a well-documented tendency to sound correct while being wrong.
The doom loop is a passive outcome. Breaking it is an active choice. And the entry point is simpler than most parents expect: one question, asked consistently, at the dinner table or over homework or after any AI interaction.
How do you know that's true?
Ask that enough times and it stops being your question. It becomes theirs. That's when the loop breaks, not through less AI, but through a child who has internalized the habit of checking. That child isn't dependent on AI. They're in charge of it. And that's exactly the outcome the research is pointing toward.
Next up: the "How Do You Know?" Habit, the single question that builds verification instinct across any subject, at any age, with or without AI in the picture.
Sources cited in this post:
- Burns, M., Winthrop, R., et al. (2026). "A New Direction for Students in an AI World: Prosper, Prepare, Protect." Brookings Institution. https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/
- Doss, C., et al. (2025). "AI Use in Schools Is Quickly Increasing but Guidance Lags Behind." RAND Corporation. https://doi.org/10.7249/RRA4180-1
- Gallup / Walton Family Foundation. (2025). "The AI Dividend: New Survey Shows AI Is Helping Teachers Reclaim Valuable Time." https://www.waltonfamilyfoundation.org/the-ai-dividend-new-survey-shows-ai-is-helping-teachers-reclaim-valuable-time
- CRPE. (2025). "AI Is Moving Fast — But School Responses and Parent Opinions Are Not." Center on Reinventing Public Education / USC.
- College Board. (2025). "U.S. High School Students' Use of Generative Artificial Intelligence." https://newsroom.collegeboard.org/new-research-majority-high-school-students-use-generative-ai-schoolwork
Originally published on Hashnode