How to conduct better customer interviews
Practical tips for running customer interviews that surface real insights instead of polite agreement.
Most customer interviews fail before they start. Not because the questions are bad, but because the approach is wrong.
Here's how to run interviews that actually surface useful insights.
Start with a clear research goal
"We want to understand our users better" is not a research goal. It's a wish.
A good research goal is specific and actionable:
- "We want to understand why trial users don't convert to paid plans."
- "We want to learn how customers discover and evaluate alternatives to our product."
- "We want to identify the top three pain points in our onboarding flow."
Your research goal determines your questions, your participant selection, and how you'll analyze results. Get this right first.
A useful test: if your research goal doesn't suggest a specific action you'd take with the results, it's too vague. "Understand our users" doesn't point anywhere. "Identify why mid-market companies churn within 90 days" points directly at retention strategy.
Ask open-ended questions
The biggest mistake interviewers make is asking leading or closed questions:
- Bad: "Do you find our product easy to use?"
- Good: "Walk me through the last time you used our product."
- Bad: "Would you use a feature that does X?"
- Good: "Tell me about the last time you ran into that problem. What did you do?"
Open-ended questions let respondents tell their story. Closed questions confirm your assumptions. The difference between the two is the difference between discovery and validation — and most teams need discovery more than they think.
One practical technique: start every question with "Tell me about...", "Walk me through...", or "Describe...". These prompts invite narrative, which is where the richest insights live.
Follow up on what matters
The best insights come from follow-up questions, not your script. When a respondent says something surprising or emotional, dig deeper:
- "Tell me more about that."
- "What do you mean by [specific word they used]?"
- "Why was that important to you?"
- "What happened next?"
- "How did that make you feel?"
Watch for signals: hesitation, emotion, unexpected vocabulary, contradictions between what they say and what they did. These are the moments where the real insights are hiding.
This is where AI-powered interviews excel. A well-designed AI interviewer can detect signals in responses and ask relevant follow-ups consistently, something even experienced human interviewers sometimes miss when they're focused on their script or thinking about the next question.
Don't ask about the future
People are terrible at predicting their own behavior. Questions like "Would you use this feature?" or "How much would you pay for this?" produce unreliable answers. Research shows that stated intent correlates poorly with actual behavior.
Instead, ask about past behavior:
- "Tell me about the last time you had this problem."
- "What did you try before finding us?"
- "What happened after you cancelled your subscription?"
- "How did you solve this before you had a tool for it?"
Past behavior is the best predictor of future behavior. If someone says "I would definitely use that feature," it means almost nothing. If someone says "Last week I spent three hours trying to do exactly that with a spreadsheet," that's a strong signal.
The same principle applies to pricing. Don't ask "What would you pay?" Instead ask: "What are you paying for your current solution?" or "What did you spend on solving this problem last quarter?" Real spend data is far more reliable than hypothetical willingness-to-pay.
Interview the right people
Five interviews with the right participants are worth more than fifty with the wrong ones. Be deliberate about who you talk to:
- For churn research: Talk to people who recently cancelled, not loyal customers. Loyal customers will rationalize why they stayed. Churned customers will tell you what broke.
- For acquisition research: Talk to people who recently signed up, while their decision is fresh. Ask what alternatives they considered and what tipped them toward you.
- For usability research: Talk to people who match your target user profile, not your power users. Power users have adapted to your product's quirks. New users will show you where the friction actually is.
- For pricing research: Talk to people who evaluated but didn't buy, and people who upgraded. The gap between those two groups is where your pricing story lives.
Recruitment is half the battle. Spend time defining your screening criteria and don't settle for "whoever is available." The wrong participants don't just give you less useful data — they give you misleading data.
Keep it conversational
An interview should feel like a conversation, not an interrogation. The respondent should do 80% of the talking. Your job is to listen, be curious, and guide the discussion toward your research goals.
A few practical tips:
- Use the respondent's name. It builds rapport and makes the conversation feel personal.
- Acknowledge their answers. "That makes sense" or "Interesting" shows you're listening without leading them.
- Reference what they said earlier. "You mentioned X — can you tell me more about that?" This shows you're paying attention and often unlocks deeper insights.
- Be comfortable with silence. People fill pauses with their best insights. Resist the urge to jump in with the next question. Count to five after they stop talking.
- Don't correct or educate. If they misunderstand your product, don't fix it. Their misunderstanding is the insight.
- Mirror their language. If they call it a "dashboard," don't call it a "reporting module." Using their vocabulary signals respect and avoids confusion.
Watch for the gap between words and behavior
What people say and what they do are often different. This isn't because they're dishonest — it's because humans are bad at introspection.
A customer might say "I love your product" while only logging in once a month. Another might complain about missing features while using the product daily. The gap between stated attitudes and actual behavior is where the most interesting insights live.
When you notice a gap, explore it gently: "You mentioned you really value X, but it sounds like you don't use it that often. Can you help me understand that?" This isn't confrontational — it's curious. And it often leads to the most valuable part of the interview.
Synthesize as you go
Don't wait until you've finished all your interviews to start analyzing. After each interview, write down:
- The three most surprising things you heard.
- Any patterns that are emerging across interviews.
- Questions you want to add or change for the next interview.
- Quotes that capture a key insight in the respondent's own words.
This iterative approach helps you refine your research in real time and ensures you don't miss important signals. By interview five, you'll be asking much sharper questions than you started with.
Also: write your notes within 30 minutes of the interview. Memory degrades fast, and the nuance of how someone said something is the first thing you'll forget.
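If it helps to make the debrief habit concrete, the checklist above maps onto a simple structured record. This is a sketch under assumptions: the field names are invented, not a standard research-notes schema.

```python
# Illustrative sketch: a post-interview debrief record mirroring the
# checklist above. Field names are assumptions for the example.

from dataclasses import dataclass, field

@dataclass
class Debrief:
    participant: str
    surprises: list[str] = field(default_factory=list)      # three most surprising things
    patterns: list[str] = field(default_factory=list)       # themes emerging across interviews
    guide_changes: list[str] = field(default_factory=list)  # questions to add or change next time
    quotes: list[str] = field(default_factory=list)         # verbatim quotes in their own words

note = Debrief(
    participant="P04",
    surprises=["Uses a spreadsheet alongside the product for reporting"],
    quotes=["Last week I spent three hours trying to do exactly that."],
)
print(note.participant)  # prints: P04
```

The point of the structure isn't the tooling; it's that filling in the same four fields after every interview forces the synthesis to happen while the conversation is still fresh.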
Know when you've heard enough
You don't need 50 interviews to find a pattern. In practice, most qualitative studies reach saturation — the point where new interviews stop revealing new insights — somewhere between 5 and 12 participants.
If you've done eight interviews and the last three all repeated themes from the first five, you probably have enough. If every interview is still surprising you, keep going.
The goal isn't statistical completeness. It's sufficient understanding to make a better decision than you would have made without the research.
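The saturation judgment above can be made concrete by coding each interview into themes and counting how many previously unseen themes each new interview adds. The theme labels below are invented for the example; the coding itself is done by hand or with a tagging tool.

```python
# Illustrative sketch: tracking thematic saturation across interviews.
# When the counts flatten to zero, new interviews have stopped
# surfacing new themes.

def new_themes_per_interview(coded_interviews):
    """Return how many previously unseen themes each interview added."""
    seen, counts = set(), []
    for themes in coded_interviews:
        counts.append(len(set(themes) - seen))
        seen |= set(themes)
    return counts

interviews = [
    {"pricing", "onboarding"},    # interview 1
    {"pricing", "integrations"},  # interview 2
    {"onboarding", "support"},    # interview 3
    {"pricing", "onboarding"},    # interview 4: nothing new
    {"support", "integrations"},  # interview 5: nothing new
]
print(new_themes_per_interview(interviews))  # prints: [2, 1, 1, 0, 0]
```

A run of zeros at the tail is the quantitative version of "the last three interviews repeated themes from the first five."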
The goal is understanding, not validation
If you finish your interviews and every single respondent confirmed your hypothesis, something went wrong. Either you asked leading questions, recruited biased participants, or only heard what you wanted to hear.
Good research challenges your assumptions. Go in looking for truth, not agreement. The most valuable interview is the one that makes you rethink your roadmap — not the one that confirms it.