You Don’t Have to Like AI to Teach Your Kid This One Skill
Maybe you’ve banned AI from your home. Maybe you use it every day. Maybe you’re somewhere in the middle, curious but cautious, watching your kid use tools you don’t fully understand and wondering what you’re supposed to do about it.
Wherever you fall on that spectrum, the skill I’m going to describe works for you.
AI literacy for kids has become one of the most searched topics in parent education circles, and for good reason. But most of what’s being taught under that label is missing the most important piece. You don’t need to approve of AI, understand it, or even allow it in your home to teach this skill. Honestly, the families who will benefit most from teaching it are the most skeptical ones, because the skill is fundamentally about healthy skepticism itself.
The skill is AI output verification: the ability to evaluate what an AI system produces and decide whether it can be trusted.
Almost nobody is teaching it right now.
The Gap Between Using AI and Thinking Critically About It
Most “AI education” today looks like this: children learn to write better prompts. They learn which tools to use for which tasks. They learn how to get AI to produce what they want.
What they don’t learn is how to tell whether what AI produced is actually correct.
That’s the entire gap, and it’s a significant one.
A 2025 study published in MDPI’s Societies journal tracked 666 participants and measured the relationship between frequent AI tool use and critical thinking ability. The correlation was striking: r = −0.75, an unusually strong negative correlation by the standards of educational psychology research. The more participants leaned on AI to handle cognitive tasks, the less capable they became at doing those tasks on their own.
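If you don’t work with correlation coefficients every day, here’s a quick way to get a feel for what a number like −0.75 means. The sketch below uses made-up numbers, not the study’s data or code; it just shows how Pearson’s r captures the pattern “as one variable rises, the other tends to fall.”

```python
# A minimal illustration with made-up numbers (not the study's data).
# Pearson's r ranges from -1 (perfect inverse relationship) to +1
# (perfect direct relationship); 0 means no linear relationship at all.
import statistics

# Hypothetical scores: weekly hours of AI-tool use vs. a critical-thinking test.
ai_use_hours    = [1, 2, 3, 4, 5, 6, 7, 8]
thinking_scores = [90, 70, 85, 60, 75, 55, 65, 50]

def pearson_r(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# These toy numbers give r of about -0.79: a strong but noisy downward trend,
# in the same ballpark as the study's reported -0.75.
print(round(pearson_r(ai_use_hours, thinking_scores), 2))
```

Even a correlation that strong doesn’t prove cause and effect on its own; I’ll unpack what it does and doesn’t mean later in this series.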
The researcher, Michael Gerlich of SBS Swiss Business School, described the mechanism as “cognitive offloading.” The brain tends to stop maintaining skills it no longer needs to exercise. It’s the same reason most adults can’t reliably do long division anymore. When a tool handles a task for you consistently, your brain eventually stops reserving mental resources for it.
The problem with AI is that the offloading happens at a much broader level than with calculators or spellcheck. And unlike those tools, AI makes mistakes, sometimes subtle ones, sometimes significant ones. A calculator doesn’t fabricate results. An AI language model does.
Offloading your thinking to AI without learning to evaluate what AI produces is a combination that carries real risk.
Two Types of Critical Thinking (And Why Only One Matters for Your Kid’s Future)
Researcher Chahna Gonsalves of King’s College London published a paper in the Journal of Marketing Education that drew a distinction I keep coming back to.
She identified two separate forms of critical thinking that come into play whenever someone uses AI.
The first is critical thinking for the assignment: using your brain to analyze, synthesize, and apply information to accomplish a goal. This is what educators have always taught, and it’s what most AI literacy programs focus on today. How to use AI thoughtfully as a tool for doing your work better.
The second is critical thinking toward the AI: interrogating the output itself. Is this accurate? How would I check? What would I be assuming if I accepted this at face value?
The second type is harder. It requires what researchers call metacognitive oversight, the ability to step outside your own process and evaluate whether the tool you’re relying on is actually giving you reliable output.
Most AI education teaches the first type and ignores the second. Children learn to use AI thoughtfully for their assignments but not to be skeptical of what AI gives them.
The second type is the one that employers are paying for.
The Wage Number That Changes the Conversation
PwC’s 2025 Global AI Jobs Barometer analyzed close to a billion job postings across 32 countries to understand how AI skills are affecting wages. Positions requiring AI skills advertise wages averaging 56% higher than comparable roles without those requirements, double the premium from the prior year.
To be precise about what that number measures: it compares job postings that require AI skills against equivalent postings that don’t, within the same occupation. It isn’t measuring verification skills specifically versus usage skills, because almost no one is teaching that distinction yet.
What the data does tell us is that AI competence is becoming a significant economic differentiator, and the premium doubled in a single year. The families who start building genuine AI competence now, not just AI familiarity but real ability to work with AI critically, are positioning their children ahead of a curve that is moving fast.
The question isn’t whether to prepare your child for an AI-present world. It’s how.
What This Skill Actually Looks Like
Verification is less a subject to study than a habit to build.
At its core it comes down to three questions your child learns to ask whenever they encounter information, from AI or anywhere else:
- How do they know? What’s the source of this claim?
- Can I verify this somewhere else? Does another independent source confirm it?
- What would happen if this were wrong? What’s the cost of accepting this without checking?
These questions aren’t AI-specific. They’re the same ones a good historian asks about a primary source, a good scientist asks about a study, and a good lawyer asks about evidence. AI just makes them more urgent, because AI is everywhere, produces output that sounds authoritative, and gets things wrong in ways that aren’t always obvious.
The families who teach these questions now are building something that no algorithm can replicate: a child who knows how to think about information, not just consume it.
A RAND Corporation survey of nearly 1,000 parents found that 61% agreed that greater AI use will harm students’ critical thinking skills. They’re right to be concerned. But concern alone doesn’t solve it. The solution isn’t less AI. It’s more intentional thinking about AI.
Why Families Who Take This Into Their Own Hands Have an Advantage
Schools are struggling with this. A 2025 RAND survey of principals found that only 45% reported having any school or district AI policies at all. Among teachers the situation is worse: only 19% say their school has a meaningful AI policy, according to a Gallup/Walton Family Foundation survey. And among schools that do have policies, most focus on academic integrity (whether to allow AI use at all) rather than on how to evaluate AI critically.
This isn’t a criticism of teachers. They’re working with limited guidance, limited training, and limited time. The systems haven’t caught up.
But you don’t have to wait for the systems.
Whether your child is homeschooled or in a traditional school, you can have this conversation at dinner tonight. You can introduce verification habits this week. You can give your child a meaningful head start on a skill that schools are only beginning to figure out, regardless of what their school is or isn’t doing about it.
That window won’t stay open forever. Right now, though, it’s wide open.
Try This Tonight: The “How Do You Know?” Game
The simplest starting point is also the most sustainable one. At dinner tonight, make one claim, something plausible but worth questioning:
- “I read that the average American walks about 3,000 steps a day.”
- “Apparently, sharks have to keep swimming or they’ll die.”
- “Someone told me chocolate milk comes from brown cows.”
Then ask your kids: How do you know if that’s true?
Let them try to answer. Don’t correct them right away, just listen to their reasoning. Are they accepting it because it sounds right? Are they looking for a way to verify it? Are they asking where you heard it?
That one question is the seed of everything. It’s the same question they’ll need to ask when they’re 19 and ChatGPT tells them something for a paper, and when they’re 25 and an AI-generated report informs a decision at work.
Habit formation starts early. The parents who start this conversation now are giving their kids something that’s genuinely hard to teach later: the reflex to verify before accepting.
What Comes Next
This is the first post in a series on teaching AI literacy to kids, not the “how to use AI” kind but the verification and evaluation skills that actually prepare them for a world where AI is everywhere and isn’t always right.
Next up, I’ll walk you through a specific game that makes verification feel like play rather than homework: the “Spot the Lie” challenge, which you can run at the dinner table this week with no preparation and no technology required.
After that, we’ll dig into the research behind the r = −0.75 correlation: what it actually means, what it doesn’t mean, and what it suggests about how to structure AI use in your home.
If you want to be notified when those posts go live, and when we open access to the full AI literacy curriculum we’re building (launching August 2026), join the waitlist below. No sales pressure, just the research and practical content as it comes out.
Orbrya is building an AI literacy curriculum for K-12 families, both homeschool and supplemental, focused on the verification and evaluation skills that schools haven’t caught up to yet. The curriculum launches August 2026.
Sources cited in this post:
- Gerlich, M. (2025). “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
- Gonsalves, C. (2024). “Generative AI’s Impact on Critical Thinking: Revisiting Bloom’s Taxonomy.” Journal of Marketing Education. https://doi.org/10.1177/02734753241305980
- PwC. (2025). “The Fearless Future: 2025 Global AI Jobs Barometer.” https://www.pwc.com/gx/en/issues/artificial-intelligence/job-barometer/2025/report.pdf
- Doss, C., et al. (2025). “AI Use in Schools Is Quickly Increasing but Guidance Lags Behind.” RAND Corporation. https://doi.org/10.7249/RRA4180-1
- Gallup / Walton Family Foundation. (2025). Teacher AI use and school policy survey. https://www.waltonfamilyfoundation.org/the-ai-dividend-new-survey-shows-ai-is-helping-teachers-reclaim-valuable-time
Originally published on Hashnode