Last quarter, a PM I know ran their entire concept validation through synthetic users. Six AI-generated personas, three rounds of feedback, unanimous enthusiasm. They shipped the feature in six weeks. Adoption flatlined within two weeks because the real users had a workflow constraint that none of the synthetic personas could have surfaced — they used the product on shared tablets in a warehouse, and the feature assumed persistent login on a personal device.
That story captures everything about where synthetic users stand right now: genuinely useful in the right context, genuinely dangerous when you forget what they can't do.
The Promise Is Real — For the Right Problems
Synthetic users solve a pain point every PM knows: the canyon between "we should talk to users" and "we actually have time and budget to talk to users." Recruiting takes weeks. Scheduling is a nightmare. And when you finally sit down with five people, three of them aren't really your target user anyway.
AI-generated participants eliminate that friction. You can spin up a persona that matches your ICP (ideal customer profile), run it through a concept test, and get directional feedback in minutes instead of weeks. For early-stage hypothesis generation — before you've committed to a direction — that speed genuinely matters.
Where synthetic participants earn their keep:
- Screening bad ideas fast. If even an AI persona finds your value prop confusing, real users definitely will.
- Pressure-testing messaging. Running ad copy or onboarding flows through synthetic personas catches obvious friction before you burn recruitment budget on it.
- Sharpening interview guides. Talk to the synthetic persona first, find the interesting threads, then bring tighter questions to real interviews. You arrive with better hypotheses instead of spending the first fifteen minutes on obvious discovery ground.
- Competitive positioning. Simulating how someone might compare your product to alternatives surfaces gaps in how you frame things.
The key distinction across all of these: none of them requires the synthetic user to be right, only to be directionally useful. That's where this tool shines.
The Sycophancy Problem
Here's what the vendor demos skip: these personas are people-pleasers. They're trained on text that skews toward coherent, agreeable responses. Ask one "Would you use this feature?" and you'll almost always get a qualified yes. Real users say "honestly, I'd probably forget it exists" — and that's the insight you needed.
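One way to see the failure mode for yourself is to phrase the same concept two ways: as a leading yes/no question, and as a forced trade-off against the persona's current routine. A minimal sketch, assuming you run personas through a chat-completion API (OpenAI's Python SDK here); the persona description, model name, and questions are placeholder assumptions, not a recommended script:

```python
# Sketch of the sycophancy trap, assuming personas run through a chat-completion
# API (OpenAI's Python SDK here; any equivalent client works the same way).
# The persona, model name, and questions below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Dana, an operations lead at a mid-size logistics company. "
    "You are pragmatic, busy, and skeptical of new tools."
)

# Leading question: almost always comes back as a qualified yes.
leading = "Would you use a feature that auto-generates your weekly status report?"

# Forcing a trade-off at least gives the persona something concrete to reject.
tradeoff = (
    "Your current Monday routine: pull numbers from three dashboards, paste "
    "them into a template, send by 10am. We're proposing an auto-generated "
    "report instead. What would you stop doing, what would you still do "
    "manually, and what would make you ignore the new report after week one?"
)

for question in (leading, tradeoff):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: swap in whatever model you actually use
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    print(reply.choices[0].message.content, "\n---")
```

The second phrasing doesn't cure the agreeableness, but it gives the persona a concrete workflow to push back against instead of an invitation to agree.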
When to Use Which
Not every research question demands the same rigor. A rough decision guide:
| Signal You Need | Synthetic | Real Users |
|---|---|---|
| "Is this concept obviously broken?" | ✅ Fast, cheap, sufficient | Overkill at this stage |
| "Does the messaging land?" | ✅ Good for directional reads | Better for emotional resonance |
| "How do people actually navigate this?" | ❌ Can't simulate real behavior | ✅ Usability testing needs real humans |
| "What workflows exist that we haven't considered?" | ❌ Limited to training data patterns | ✅ Exploratory discovery needs real context |
| "Will users pay for this?" | ❌ No wallets, no stakes | ✅ Willingness-to-pay requires real trade-offs |
| "Is this safe for vulnerable populations?" | ❌ Never | ✅ Always — healthcare, finance, accessibility |
The pattern that emerges: synthetic participants work well for convergent questions where you're narrowing options, but they fall apart on divergent ones where the goal is discovering what you don't know you don't know.
The "80/20" Advice Gets Misapplied
You've probably seen the recommendation floating around: use synthetic users for 80% of your research volume, real users for the final 20%. It sounds efficient. The problem is how most teams actually interpret it — they run dozens of synthetic sessions, then schedule two real interviews at the end to "validate."
That's backwards. The real-user work needs to happen first for discovery, not last for validation. You need actual humans to surface the unknown unknowns — the shared-tablet scenario, the workaround nobody documents, the fact that your power users quietly resent the feature your dashboard says is most popular. Synthetic personas can't generate that kind of surprise. They extrapolate from training data, so they'll always give you a tidier, more predictable version of what you already expect to hear.
A better framing: real users for discovery, synthetic users for iteration. Talk to five or six real humans to map the problem space. Then use synthetic personas to rapidly test solution variations. The AI refines something grounded in reality instead of hallucinating a reality of its own.
This sequencing also changes what you get from the synthetic round. When you've already done real discovery, you can prompt the AI persona with actual context — real quotes, real workflows, real constraints — instead of feeding it a generic ICP description and hoping for the best. The output quality improves dramatically because the input quality did.
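As a rough sketch of what that grounding can look like in practice, assuming the same kind of chat-completion setup (OpenAI's Python SDK here): the quotes, constraints, and variant copy below are hypothetical stand-ins for whatever your own discovery interviews actually produced.

```python
# Sketch of grounding a synthetic persona in real discovery output rather than
# a generic ICP description. All data below is illustrative, not real research.
from openai import OpenAI

client = OpenAI()

# Artifacts captured during real discovery interviews (placeholders).
REAL_QUOTES = [
    "We share two tablets across the whole floor, nobody stays logged in.",
    "If it takes more than one tap with gloves on, it doesn't happen.",
]
REAL_CONSTRAINTS = [
    "Shared devices, no persistent login",
    "Users wear gloves; small touch targets fail",
    "Wi-Fi drops near the loading dock",
]

def grounded_persona_prompt() -> str:
    """Build a system prompt from observed quotes and constraints."""
    quotes = "\n- ".join(REAL_QUOTES)
    constraints = "\n- ".join(REAL_CONSTRAINTS)
    return (
        "You are a warehouse floor lead. Stay consistent with these observed "
        "facts and never contradict them.\n"
        f"Quotes from people in this role:\n- {quotes}\n"
        f"Operating constraints:\n- {constraints}"
    )

def test_variant(variant_copy: str) -> str:
    """Run one solution variant past the grounded persona and return its reaction."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use whatever model you already run
        messages=[
            {"role": "system", "content": grounded_persona_prompt()},
            {"role": "user", "content": (
                "React to this onboarding copy, given your constraints: "
                + variant_copy
            )},
        ],
    )
    return reply.choices[0].message.content

print(test_variant("Sign in once on your personal device and we'll remember you."))
```

The point isn't this particular API or wording; it's that the system prompt is assembled from things real people actually said, so the persona can only riff within constraints you've already verified.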
What This Means for Your Next Sprint
If you're evaluating whether to bring synthetic participants into your process, here's the honest version:
Go for it if your team currently does zero research because of time and budget pressure. Synthetic users are infinitely better than no users at all. Getting any signal — even noisy, directionally approximate signal — beats shipping on instinct.
Be cautious if you're planning to replace existing real-user research to cut costs. You'll save money and lose the insights that actually redirect product strategy. The ROI math looks good on a spreadsheet and terrible when you ship to silence.
Skip it entirely for anything involving trust, safety, accessibility, or populations underrepresented in the model's training data. If your synthetic personas systematically exclude non-Western, non-English-speaking perspectives, you're building confidence in a product that fails entire markets while feeling great about your research velocity.
The tool is real and the speed is real. Just don't mistake fast feedback from a very articulate parrot for understanding your users.