ResearchAudio
AI Interviewed 1,250 Humans About AI. Here's What They Said.
December 2025
Anthropic just did something unusual.
They built an AI-powered interview tool, had it conduct 1,250 in-depth conversations with professionals about how they feel about AI, then released all the transcripts publicly for anyone to analyze.
The findings reveal something we don't often see: the gap between what people say about AI and what they actually do. The productivity gains they report alongside the anxiety they feel. The creative professionals hiding their AI use from peers. The scientists who want AI partnership but can't bring themselves to trust it.
The Method: Anthropic Interviewer
Traditional user research doesn't scale. Running 1,250 in-depth interviews manually would take months and cost a fortune. So Anthropic built a tool called Anthropic Interviewer—powered by Claude—that conducts 10-15 minute interviews automatically, then feeds results back to human researchers for analysis.
The system works in three phases: planning (creating interview rubrics), interviewing (adaptive real-time conversations), and analysis (identifying themes across transcripts). Human researchers set the goals and interpret the findings. The AI handles the scale.
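To make the three phases concrete, here is a minimal sketch of how such a pipeline could be wired together with the public Anthropic Python SDK. It is an illustration only, not Anthropic Interviewer's actual implementation: the prompts, function names, and model ID are assumptions.

# Illustrative sketch of a planning -> interviewing -> analysis pipeline.
# Not Anthropic Interviewer's actual code; prompts and model ID are assumptions.
import anthropic

client = anthropic.Anthropic()          # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"      # substitute any Claude model you have access to

def ask(system: str, messages: list[dict]) -> str:
    """One call to the Messages API; returns the reply text."""
    response = client.messages.create(
        model=MODEL, max_tokens=1024, system=system, messages=messages
    )
    return response.content[0].text

def plan_rubric(research_goal: str) -> str:
    """Phase 1: turn a research goal into an interview rubric."""
    return ask(
        "You design qualitative research. Write a short interview rubric with "
        "5-7 open-ended questions and criteria for when to probe further.",
        [{"role": "user", "content": research_goal}],
    )

def run_interview(rubric: str, get_reply, max_turns: int = 10) -> list[dict]:
    """Phase 2: adaptive interview loop; get_reply supplies the participant's answers."""
    system = "You are conducting a 10-15 minute interview. Follow this rubric:\n" + rubric
    history = [{"role": "user", "content": "Please begin the interview."}]
    for _ in range(max_turns):
        question = ask(system, history)
        history.append({"role": "assistant", "content": question})
        history.append({"role": "user", "content": get_reply(question)})
    return history

def find_themes(transcripts: list[str]) -> str:
    """Phase 3: surface cross-cutting themes for human researchers to review."""
    return ask(
        "Identify recurring themes across these interview transcripts, with one "
        "representative quote per theme.",
        [{"role": "user", "content": "\n\n---\n\n".join(transcripts)}],
    )

In a real deployment the participant's replies would come from the chat interface rather than a callback, and the rubric and analysis prompts would be far more elaborate; the point is just the shape of the three-phase loop.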
Three groups interviewed:
General workforce — 1,000 professionals across education, tech, arts, business, admin, and sales
Creatives — 125 writers, visual artists, filmmakers, designers, musicians
Scientists — 125 researchers across physics, chemistry, biology, data science, engineering
The Headline Numbers
The surface-level findings are positive. Among the general workforce:
86% say AI saves time.
65% are satisfied with AI's role.
55% are anxious about the future.
That last number is the story. The same workforce reporting productivity gains also reports anxiety about where this is heading. Both things are true at once.
The Perception Gap
One of the most interesting findings came from comparing what people said in interviews with how they actually use Claude (from Anthropic's separate usage analysis).
What people say: 65% describe their AI use as augmentation (collaborating with AI).
What people do: 49% of actual Claude usage is automation (AI performs tasks directly).
People perceive their AI use as more collaborative than it actually is. Whether that's aspirational framing, social desirability, or genuine misperception—the gap is real.
Creatives: Productivity Gains, Peer Judgment, Economic Anxiety
Among creative professionals, the numbers were striking: 97% said AI saves them time, and 68% said it improved the quality of their work.
A web content writer reported going from 2,000 words of polished content per day to over 5,000. A photographer reduced turnaround time from 12 weeks to 3.
But 70% mentioned managing peer judgment around AI use. The stigma is real, and the use is often actively hidden.
"I don't want my brand and my business image to be so heavily tied to AI and the stigma that surrounds it."
— Map artist
The economic anxiety went deeper than stigma:
"Certain sectors of voice acting have essentially died due to the rise of AI, such as industrial voice acting."
— Voice actor
"I fully understand that my gain is another creative's loss. That product photographer that I used to have to pay $2,000 per day is now not getting my business."
— Creative director
All 125 creative participants said they wanted to remain in control of their outputs, but many acknowledged that control was slipping. One artist admitted: "The AI is driving a good bit of the concepts; I simply try to guide it… 60% AI, 40% my ideas."
Scientists: Want Partnership, Can't Trust It Yet
The scientist interviews revealed a different pattern. 91% expressed desire for more AI assistance in their research. But they confined actual use to peripheral tasks—writing manuscripts, debugging code, literature review.
The core work—hypothesis generation, experimental design—remained off-limits. Trust was the barrier in 79% of interviews.
"After I have to spend the time verifying the AI output, it basically ends up being the same amount of time."
— Mathematician
"What I would really like from an AI would be the ability to accurately grab information, summarise it and use it to write the core of a funding application. AI generally writes well; the problem now is that I just can't rely on it not hallucinating."
— Economist
What scientists actually want—if reliability improves:
"I would love an AI which could feel like a valuable research partner… that could bring something new to the table."
— Medical scientist
Unlike creatives, scientists showed relatively low worry about displacement. Some pointed to tacit knowledge that can't be digitized. A microbiologist explained working with bacterial strains where "you had to initiate various steps when the cells reached specific colors. The differences in color have to be seen to be understood and are seldom written down anywhere."
The Future People Imagine
Across all groups, a common vision emerged: automate the routine, preserve what's meaningful.
48% of general workforce participants envisioned transitioning toward roles that manage and oversee AI systems rather than performing direct technical work.
"If I use AI and up my skills with it, it can save me so much time on the admin side which will free me up to be with the people."
— Pastor
Not everyone was optimistic about staying in their field:
"I believe AI will eventually replace most interpreters... so I'm already preparing for a career switch, possibly by getting a diploma and getting into a different trade."
— Interpreter
Of those expressing anxiety, 25% set deliberate boundaries around AI use. Another 25% adapted by taking on additional responsibilities or pursuing specialization. Only 8% expressed anxiety without any remediation plan.
Why This Matters
Anthropic's previous research tools could analyze what happens in Claude conversations. But they couldn't answer: How do people use Claude's outputs after the chat ends? How do they feel about it? What do they imagine for the future?
This is qualitative research at quantitative scale. And they're making it participatory—starting this week, Claude.ai users may see pop-ups inviting them to be interviewed.
Key Findings
Productivity and anxiety coexist. 86% report time savings. 55% are anxious about the future. Both are true.
People perceive collaboration, practice automation. Self-reported augmentation (65%) exceeds actual augmentation (47%).
Creatives hide their AI use. 70% manage peer stigma actively. Gains are real; so is the judgment.
Scientists want partnership they can't yet trust. 91% want more AI assistance. 79% cite reliability as the barrier.
The full interview transcripts are available on Hugging Face for anyone to analyze. What patterns would you find?
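If you want to poke at the data yourself, a first pass could look something like the sketch below, using the Hugging Face datasets library. The repository ID and column name here are placeholders: check the dataset card for the real ones.

# First-pass exploration of the interview transcripts.
# The repo ID and column name below are placeholders; see the dataset card.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("anthropic/interviewer-transcripts", split="train")  # placeholder repo ID

keywords = ["trust", "anxiety", "stigma", "hallucinat", "time"]
counts = Counter()
for row in ds:
    text = row["transcript"].lower()  # placeholder column name
    for kw in keywords:
        if kw in text:
            counts[kw] += 1

for kw, n in counts.most_common():
    print(f"'{kw}' appears in {n} of {len(ds)} transcripts")

Keyword counts are a crude start; the more interesting work is qualitative, reading transcripts against the themes above and seeing where your own coding disagrees.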
These are 1,250 professionals describing, in their own words, what it feels like to work alongside AI in December 2025. That's a snapshot worth studying.
Source
Introducing Anthropic Interviewer: What 1,250 professionals told us about working with AI — Anthropic, December 4, 2025
Full Interview Dataset — Hugging Face
ResearchAudio — Technical insights for practitioners