CSAT and NPS after you add AI: how to read scores without fooling yourself
Scores often dip or swing when automation enters the funnel. Here’s how to segment surveys, time them right, and separate AI quality from product issues.
The week after you turn on an AI agent, someone will forward a dashboard: CSAT down 4 points. Panic follows. Often the movement isn’t “users hate AI” — it’s that you’re measuring a different journey than before.
Segment by resolution path
Blended CSAT mixes users who never hit a human with users who escalated after a frustrating loop. Split scores: resolved by AI only, resolved by human after AI, human-only (control). You’ll usually see different stories in each bucket.
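A minimal sketch of that split in pandas, assuming a survey export with hypothetical columns `resolution_path` and `csat_score` (names and file are illustrative, not a real schema):

```python
import pandas as pd

# Hypothetical export: one row per surveyed conversation.
surveys = pd.read_csv("csat_responses.csv")

# Keep the three buckets separate instead of reporting one blended number.
buckets = ["ai_only", "human_after_ai", "human_only"]
segmented = (
    surveys[surveys["resolution_path"].isin(buckets)]
    .groupby("resolution_path")["csat_score"]
    .agg(responses="count", mean_csat="mean")
)
print(segmented)
```

Report all three rows side by side; the blended average hides exactly the comparison you need.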
Time the survey to the outcome
Asking “how was the chat?” the second the bot replies captures mood, not resolution. Trigger feedback after the conversation is marked resolved or after handoff completes. You want satisfaction with the outcome, not the first message.
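One way to wire that up, sketched with illustrative event names and a placeholder `send_survey` function (none of this is a specific helpdesk platform's API):

```python
# Sketch only: event names, payload fields, and send_survey are
# placeholders to show the gating logic, not a real integration.
TERMINAL_EVENTS = {"conversation.resolved", "handoff.completed"}

def send_survey(user_id: str, conversation_id: str, resolution_path: str) -> None:
    # Placeholder: swap in your survey tool's API call here.
    print(f"survey -> user={user_id} conv={conversation_id} path={resolution_path}")

def handle_event(event: dict) -> None:
    """Fire the CSAT survey only when the journey has actually ended."""
    if event["type"] not in TERMINAL_EVENTS:
        return  # ignore first replies and mid-conversation messages
    send_survey(
        user_id=event["user_id"],
        conversation_id=event["conversation_id"],
        # Tag the response so it lands in the right segment later.
        resolution_path=event.get("resolution_path", "unknown"),
    )
```

Tagging the resolution path at send time is what makes the segmentation above possible without a messy join later.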
NPS is a blunt instrument for support
NPS reflects product, pricing, and brand — not just your agent. Use CSAT or CES (Customer Effort Score) for channel-specific quality, and keep NPS on a slower cadence so you don’t attribute product churn to a bad bot week.
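If you want that cadence enforced in code rather than by convention, a simple gate before any NPS send works; the 90-day window and field names here are assumptions, not a recommendation from any survey vendor:

```python
from datetime import datetime, timedelta, timezone

NPS_COOLDOWN = timedelta(days=90)  # assumed quarterly-ish cadence

def should_send_nps(last_nps_sent_at: datetime | None) -> bool:
    """CSAT/CES can fire per interaction; NPS waits out the cooldown."""
    if last_nps_sent_at is None:
        return True
    return datetime.now(timezone.utc) - last_nps_sent_at >= NPS_COOLDOWN
```

That way a rough week for the bot shows up in channel-level CSAT, not in a quarterly brand metric.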
What to watch monthly
- CSAT delta vs. the same segment pre-AI (not blended).
- Comment themes mentioning “robot,” “loop,” or “repeated.”
- Correlation between thumbs-down on AI answers and subsequent churn (lagged); see the sketch after this list.
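For that last item, a rough pandas sketch of the lagged check. The inputs are two hypothetical monthly exports (file names, column names, and the one-month lag are all assumptions): thumbs-down counts per user per month, and a churn flag per user per month.

```python
import pandas as pd

# Hypothetical inputs; adjust names and lag to your own data model.
feedback = pd.read_csv("ai_feedback_monthly.csv")  # user_id, month, thumbs_down
churn = pd.read_csv("churn_monthly.csv")           # user_id, month, churned (0/1)

# Lag: compare this month's thumbs-downs to next month's churn.
feedback["month"] = pd.PeriodIndex(feedback["month"], freq="M")
churn["month"] = pd.PeriodIndex(churn["month"], freq="M")
feedback["next_month"] = feedback["month"] + 1

merged = feedback.merge(
    churn,
    left_on=["user_id", "next_month"],
    right_on=["user_id", "month"],
    suffixes=("", "_churn"),
)
print(merged["thumbs_down"].corr(merged["churned"]))
```

Correlation here is a smoke test, not proof of causation; a rising number is a reason to read the thumbs-down transcripts, not a verdict on the bot.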
Want to actually ship this?
Signorian deploys a docs-grounded AI support agent in under an hour. Free for up to 100 conversations/month. Founder pricing for the first 500 teams.