Output by Orbrya · 2026-03-04

What Our AI Got Wrong About the PwC Wage Premium (And How You Would Catch It)

Output Episode 1 cited a real PwC statistic but drew an inference the data doesn't support. Here's the exact error, the corrected framing, and the verification method that catches it.

In January 2025, PwC published an analysis of close to a billion job postings across 32 countries. The headline finding was significant: positions requiring AI skills advertise wages averaging 56% higher than comparable roles without those requirements. That premium had more than doubled in a single year, up from 25% the year before.

That statistic is real, it is verified, and it matters for every family thinking about what skills their child will need in the workforce. We cited it in the first episode of Output, our AI-produced podcast series on AI literacy for families. We also got something wrong about it.

This post explains what the error was, how it got into the episode, and exactly how you would have caught it. If you are here because you listened to the episode and spotted something that did not sit right, you are in the right place. If you found this through search and have not heard the episode yet, everything below stands on its own.


What the Episode Claimed

Around the 8-minute mark of Output Episode 1, after establishing the PwC wage premium, one of the AI hosts said this:

"The audit capability we just talked about, that critical thinking toward the AI, that's actually what's being monetized. Employers aren't paying you an extra 50% just to generate text. They are paying you to know when that text is right, when it's wrong, and how to apply it safely. They are paying for the judgment."

Read that carefully. The host took a verified statistic about AI skills broadly and used it as evidence for a specific claim: that the premium accrues specifically to workers who can audit and verify AI outputs, as opposed to workers who can simply operate AI tools. That is not what PwC measured.


What PwC Actually Measured

PwC's methodology compares job postings that require AI skills against job postings for the same occupation that do not require AI skills. A software engineer role that lists AI competence as a requirement versus a software engineer role that does not. A marketing manager position requiring AI skills versus one that does not. The premium is the wage gap between those two groups.

That methodology tells us that AI competence broadly commands a significant wage premium. It does not tell us anything about which kind of AI competence drives that premium. The job postings analyzed do not distinguish between workers who use AI and workers who verify AI outputs. That comparison does not exist in PwC's data, and PwC itself cautions that the findings may not imply a causal relationship.
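To make that comparison concrete, here is a minimal sketch of the within-occupation calculation on invented posting data. The occupations, wages, and the requires_ai flag are all hypothetical, and PwC's actual pipeline is far larger and more careful; the sketch only shows the shape of the measurement:

```python
from collections import defaultdict

# Invented postings: (occupation, requires_ai_skills, advertised_wage).
# Both the rows and the field layout are hypothetical.
postings = [
    ("software engineer", True, 156_000),
    ("software engineer", False, 100_000),
    ("software engineer", False, 104_000),
    ("marketing manager", True, 117_000),
    ("marketing manager", False, 75_000),
]

# Group advertised wages by (occupation, AI requirement).
wages = defaultdict(list)
for occupation, requires_ai, wage in postings:
    wages[(occupation, requires_ai)].append(wage)

def mean(values):
    return sum(values) / len(values)

# Premium = mean wage with the AI requirement over mean wage without,
# minus one, computed within each occupation so the comparison is
# like-for-like.
for occupation in sorted({occ for occ, _ in wages}):
    with_ai = wages[(occupation, True)]
    without_ai = wages[(occupation, False)]
    if with_ai and without_ai:
        premium = mean(with_ai) / mean(without_ai) - 1
        print(f"{occupation}: {premium:+.0%}")
```

Notice what the grouping key captures and what it does not: postings are split by whether they require AI skills at all, not by whether they require auditing skills versus operating skills. The usage-versus-verification split the episode implied simply is not a column in this kind of data.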

The argument that verification skills are more valuable than usage skills is Orbrya's core positioning. It is a reasonable argument, and the emerging research on cognitive offloading and critical thinking supports it. But reasonable and supported by related evidence is different from confirmed by a billion job postings. The episode presented our inference as PwC's finding, and that is the error.


How the Error Happened

Output is produced using NotebookLM, an AI tool that generates audio conversations from source documents. Before publishing, we review the transcript for accuracy.

The Gerlich correlation, the Gonsalves two-types distinction, the RAND school policy data, the PwC headline statistic — all of those were verified against primary sources before this episode went live.

What we missed in review was subtler. The PwC number itself was accurate. The error was in the inference the AI hosts drew from it, stated with the same confident tone as the verified statistic. That confidence is exactly what makes AI-generated content require careful auditing. The model does not flag the moment where evidence ends and interpretation begins. It moves through both with identical authority.

This is not a reason to distrust the episode. The core argument — that verification skills are the critical gap in AI education — is well-supported by the research we covered. It is a reason to read research citations carefully and ask whether a specific claim is actually in the data or is an inference drawn from it.


How You Would Have Caught It

The second of the three verification questions the episode teaches: can I verify this somewhere else?

If you went looking for PwC data specifically linking the 56% premium to auditing competence versus usage competence, you would not find it. What you would find in the actual report is the methodology: same occupation, with and without AI skill requirements. That is the check. The moment you tried to verify the specific claim and found the underlying data was broader than the hosts implied, you caught it.

That process takes about four minutes. You search the PwC report title, open the PDF, find the methodology section, and read how they defined AI skills. No specialized knowledge required. Just the habit of asking whether you can find the specific claim — not just the general topic — in the source.

That habit is what Orbrya's curriculum builds. Not skepticism for its own sake. Not the exhausting practice of fact-checking everything. The targeted ability to recognize when a confident claim extends beyond what its source actually shows, and to verify it before passing it on.


The Corrected Framing

Here is what the episode should have said: PwC's analysis of close to a billion job postings found that positions requiring AI skills advertise wages averaging 56% higher than comparable roles without those requirements, and that premium more than doubled in a single year. That is a strong signal that AI competence commands real economic value. What the data cannot tell us yet is how much of that premium belongs specifically to workers who can evaluate AI outputs versus workers who can simply operate AI tools. That distinction is not yet tracked in job posting data at scale.

The argument that verification matters more than usage stands on its own evidence. Researcher Chahna Gonsalves at King's College London identified two types of critical thinking with AI: critical thinking toward the AI, which involves evaluating and auditing outputs, and critical thinking for the assignment, which involves applying AI outputs to complete tasks. The Brookings Institution's January 2026 review of over 400 studies described a doom loop of AI dependence that specifically threatens independent analytical thinking. And a January 2025 study by Michael Gerlich published in MDPI's Societies journal found a correlation of r = -0.75 between AI tool usage and critical thinking ability across 666 participants.

That body of evidence supports the argument that auditing skills are the more valuable ones. It supports it as an argument — not as something a wage dataset confirmed directly.


A Note on the Gerlich Study

In the same spirit of being precise about what research shows and does not show: the Gerlich correlation is cited correctly in the episode, but one piece of context was missing.

MDPI published a correction to this study in September 2025. The core findings remain intact. The r = -0.75 correlation stands. But the study was conducted at a single point in time rather than longitudinally, which means it shows an association between AI usage and lower critical thinking scores rather than proving that AI usage causes the decline over time. It is worth taking seriously as a signal. It is not settled science, and we should have said so in the episode.
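For readers who want to see what a statistic like r = -0.75 actually measures, here is a minimal sketch of the Pearson correlation computed on invented cross-sectional scores. The numbers are illustrative only, not Gerlich's data:

```python
import math

# Invented cross-sectional scores: one (AI usage, critical thinking)
# pair per participant. Illustrative only; not Gerlich's data.
usage    = [1, 2, 3, 4, 5, 6, 7, 8]
thinking = [8, 9, 4, 7, 3, 6, 2, 4]

def pearson_r(xs, ys):
    """Pearson correlation: covariance of the two variables divided
    by the product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A strong negative r means the two scores move in opposite directions
# across participants measured once. It cannot say which causes which,
# or whether a third factor drives both.
print(f"r = {pearson_r(usage, thinking):.2f}")
```

The computation sees one snapshot per participant. It can report that two scores move in opposite directions; it cannot say which one drives the other, which is exactly the limitation the correction note raises.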


Why We Publish These Corrections Publicly

Output is produced using AI tools and reviewed by humans before publishing. That review catches most problems. It does not catch everything, as this episode demonstrates. Rather than quietly fixing errors or pretending they did not happen, we document them here because the documentation is more useful than a silent correction.

If your child is learning to verify AI outputs, the most powerful demonstration available is watching an adult do it in public, find something wrong, and explain exactly how they found it. That is what this post is. It is also, not coincidentally, exactly what Orbrya's curriculum teaches.

Every episode in this series will have a correction post. Not because every episode will have significant errors, but because every episode is produced by AI and reviewed by a human, and that process is itself the demonstration.

If you want to receive these correction posts alongside new episodes as they publish, the waitlist is at orbrya.com.


Output is produced by Orbrya. New episodes every Monday. Tutorials every Tuesday. Watch on YouTube | orbrya.com