

ResearchAudio.io

74% AI Task Coverage. 0% Job Losses. One Worrying Signal.

Anthropic's new "observed exposure" metric tracks what theory alone misses.

Computer programmers now have 74.5% of their work tasks covered by AI in real-world automated usage. Customer service representatives sit at 70.1%. Data entry keyers at 67.1%. And yet, across all these occupations, unemployment has barely moved since ChatGPT launched. That is the central tension in a new report published today by Anthropic researchers Maxim Massenkoff and Peter McCrory.

But there is one signal that breaks the pattern: hiring of workers aged 22 to 25 into exposed occupations has dropped roughly 14% since late 2022. The headline finding is calm. The subtext is not.

Why Existing Measures Fall Short

Most AI labor market research relies on theoretical exposure: what tasks could an LLM speed up by at least 2x? The most cited framework comes from Eloundou et al. (2023), which scores each O*NET task on a scale of 0 (not feasible), 0.5 (feasible with tools), or 1 (feasible with LLM alone). By this measure, 94% of Computer and Math tasks and 90% of Office and Admin tasks are theoretically exposed.

The problem is that theory and reality diverge enormously. A task like "authorize drug refills and provide prescription information to pharmacies" scores as fully exposed (beta = 1), yet Anthropic has never observed Claude performing it. Legal constraints, software requirements, human verification steps, and adoption friction all create a gap between what AI could do and what it actually does in practice.

The authors point out that past attempts to forecast labor disruption have a poor track record. A well-known study identified roughly a quarter of US jobs as vulnerable to offshoring, but a decade later most of those jobs showed healthy employment growth. Even the BLS's own occupation-level forecasts add little predictive value beyond simple linear extrapolation.

The Method: Three Data Sources, One Metric

- 800 O*NET occupations scored
- 97% of Claude usage falls on theoretically feasible tasks
- 30% of workers have zero AI coverage

The new metric, called Observed Exposure, combines three data sources. First, the O*NET database, which lists tasks for approximately 800 US occupations. Second, Anthropic's own Claude usage data from the Anthropic Economic Index (covering August and November 2025). Third, the Eloundou et al. theoretical exposure ratings.

The formula works like this: a task counts as "covered" when it is both theoretically feasible and has seen sufficient work-related usage in Claude traffic. Then the measure adjusts for how the task is being done. Fully automated implementations (like API-driven workflows) receive full weight, while augmentative use (a human using Claude as a copilot) receives half weight. These task-level scores are then averaged up to the occupation level, weighted by time spent on each task.
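The construction above can be sketched in a few lines of Python. Everything concrete here (the field names, the usage threshold, the toy task records) is an assumption for illustration; only the feasibility-plus-usage condition, the full/half weighting, and the time-weighted averaging come from the report's description.

```python
# Sketch of the Observed Exposure aggregation. Field names and the
# usage threshold are hypothetical; the weighting follows the report.
AUTOMATED_WEIGHT = 1.0   # fully automated, API-driven task completion
AUGMENTED_WEIGHT = 0.5   # human-in-the-loop copilot usage

def task_coverage(theoretical_score, usage_share, automated_fraction,
                  usage_threshold=0.0001):
    """Coverage weight for one task.

    theoretical_score  -- Eloundou et al. rating: 0, 0.5, or 1
    usage_share        -- share of work-related Claude traffic on this task
    automated_fraction -- fraction of that usage that is fully automated
    """
    feasible = theoretical_score >= 0.5          # theoretically possible
    observed = usage_share >= usage_threshold    # actually seen in traffic
    if not (feasible and observed):
        return 0.0
    return (automated_fraction * AUTOMATED_WEIGHT
            + (1 - automated_fraction) * AUGMENTED_WEIGHT)

def occupation_exposure(tasks):
    """Time-weighted average of task coverage for one occupation."""
    total_time = sum(t["time"] for t in tasks)
    return sum(task_coverage(t["theoretical"], t["usage"], t["automated"])
               * t["time"] for t in tasks) / total_time

# Toy occupation: one half-automated covered task, one task never observed.
tasks = [
    {"theoretical": 1.0, "usage": 0.002, "automated": 0.5, "time": 3.0},
    {"theoretical": 1.0, "usage": 0.0,   "automated": 0.0, "time": 1.0},
]
print(occupation_exposure(tasks))  # 0.5625
```

The covered task scores 0.5 × 1.0 + 0.5 × 0.5 = 0.75, and the time weighting (3 of 4 hours) brings the occupation to 0.5625.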

Theory vs. Observed Coverage (Key Categories)

- Computer & Math: 33% observed vs. 94% theoretical
- Office & Admin: 28% observed vs. 90% theoretical
- Legal: 38% observed vs. 80% theoretical

The result is striking. While the theoretical measure suggests LLMs could penetrate 94% of Computer and Math tasks, Claude's actual observed coverage sits at just 33%. The gap exists across every occupational category. Many tasks that are theoretically possible remain unused due to model limitations, legal constraints, specific software requirements, or simple adoption friction.

That said, theory and usage are strongly correlated: 97% of all Claude usage observed across four Economic Index reports falls on tasks rated as theoretically feasible (beta = 0.5 or 1.0). A mere 3% of usage goes to tasks that Eloundou et al. rated as not feasible for LLMs.

What the Employment Data Actually Shows

The researchers matched their occupation-level exposure scores to individual respondents in the US Current Population Survey, then compared unemployment trends for workers in the top quartile of exposure against the 30% of workers with zero exposure. The finding: no systematic increase in unemployment for exposed workers since late 2022. The difference-in-differences estimate is +0.002 with a standard error of 0.0019, statistically indistinguishable from zero.
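The difference-in-differences logic behind that estimate is simple to spell out. The rates below are made-up illustrative values chosen to land on the reported +0.002; they are not the paper's data.

```python
# Difference-in-differences on unemployment rates (illustrative numbers):
# effect = (exposed change over time) - (control change over time)
rates = {
    ("exposed", "pre"):  0.030,  # top-quartile exposure, before ChatGPT
    ("exposed", "post"): 0.034,
    ("control", "pre"):  0.040,  # zero-exposure workers
    ("control", "post"): 0.042,
}

did = ((rates[("exposed", "post")] - rates[("exposed", "pre")])
       - (rates[("control", "post")] - rates[("control", "pre")]))
print(f"{did:+.3f}")  # +0.002: exposed rose 0.2pp more than control
```

Subtracting the control group's trend nets out economy-wide shocks, so anything left over is attributable to differential exposure; here the leftover is statistically indistinguishable from zero.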

To put this in perspective, the authors note their framework could detect differential unemployment increases on the order of 1 percentage point. A scenario where all workers in the top 10% of coverage were laid off would push the top-quartile unemployment rate from 3% to 43%, and aggregate unemployment from 4% to 13%. Nothing remotely like that is happening.

But the young worker data paints a different picture. Using the panel dimension of the CPS, the researchers tracked monthly job start rates for workers aged 22 to 25 entering high-exposure versus low-exposure occupations. The series diverge visibly in 2024. Job finding rates for less exposed occupations remain stable at about 2% per month, while entry into the most exposed jobs fell by roughly half a percentage point. The averaged post-ChatGPT estimate is a 14% drop in the job finding rate for young workers in exposed fields, though this result is just barely statistically significant.

This echoes findings from Brynjolfsson et al., who reported a 6 to 16% fall in employment in exposed occupations among 22-to-25-year-olds using ADP payroll data. Importantly, both studies attribute this to reduced hiring rather than increased layoffs. The young workers who are not hired may be remaining at existing jobs, taking different jobs, or returning to school.

BLS Projections Align with Observed Exposure

One piece of external validation: occupations with higher observed exposure tend to have lower BLS growth projections for 2024 to 2034. For every 10 percentage point increase in observed coverage, the BLS projects 0.6 fewer percentage points of employment growth. The relationship is modest (R-squared = 0.027), but notably, there is no such correlation when using the Eloundou et al. theoretical measure alone. This suggests the observed metric captures something that pure theory does not.
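A fit like that can be reproduced with plain ordinary least squares. The occupation-level points below are invented for illustration (and far less noisy than the report's scatter, where R-squared is only 0.027); only the general shape of the exercise matches the paper.

```python
# OLS sketch: observed coverage (pp) vs. projected 2024-34 growth (pp).
def ols(x, y):
    """Least-squares slope, intercept, and R^2 for paired data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Toy occupations (made-up data, not the paper's).
coverage = [0, 10, 20, 30, 40]
growth = [5.0, 4.5, 3.6, 3.2, 2.5]
slope, _, r2 = ols(coverage, growth)
print(round(slope * 10, 2))  # projected-growth change per 10pp of coverage
```

On the toy points the slope works out to about -0.6pp per 10pp of coverage, mirroring the reported magnitude.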

Key Insights

Theory-practice gap is enormous. Even in the most AI-penetrated category (Computer and Math), just 33% of tasks show real automated usage versus 94% theoretical feasibility. Measuring what AI could do is very different from measuring what it is doing.

Automation weight matters. The metric distinguishes between fully automated API workflows (full weight) and human-in-the-loop augmentation (half weight). This distinction, combined with filtering for work-related usage, makes observed exposure a better predictor of employment outcomes than raw theoretical scores.

Young workers are the leading indicator. The overall unemployment picture looks stable, but entry-level hiring into exposed occupations is slowing. This pattern (reduced inflow rather than increased outflow) is exactly how early displacement would look before it shows up in unemployment statistics.

Exposed workers are not who you might expect. The most AI-exposed group earns 47% more than unexposed workers, is 16 percentage points more likely to be female, has nearly 4x the rate of graduate degrees, and is almost twice as likely to be Asian. AI displacement risk concentrates in well-educated, higher-paid knowledge work.

The most notable feature of this framework is not today's findings. It is the commitment to measure the same thing repeatedly as AI capabilities advance. Observed coverage will keep growing toward the ceiling of theoretical capability; the question is how fast that happens, and whether the young-worker hiring signal intensifies or fades. The dataset is available on Hugging Face.


Source: Massenkoff & McCrory (2026), "Labor market impacts of AI: A new measure and early evidence," Anthropic Research
