Customer Research
Most business leaders know they should do more customer research. Fewer know they're doing it badly — and a bad survey is worse than no survey at all.
Matthew Gaunt | April 2025 | Customer Research
I've been saying this for years to every client I work with: ask your customers. They will tell you more in ten minutes than six months of internal debate. The problem is not that businesses don't want to do research. The problem is that most surveys are written by people who already know the answer they want — and every question in the survey quietly points the respondent towards it.
Leading questions. Loaded language. Double-barrelled questions that muddle the data. Scales that change halfway through. Questions written to validate a decision that's already been made. It happens everywhere — and not because people are dishonest. It happens because it is genuinely hard to write a clean, unbiased survey if nobody has taught you how.
"A bad survey doesn't just give you bad data. It gives you confident bad data — and that is far more dangerous."
So I did something about it. I'm not a research methodology expert — but I know the principles, and I know Claude well enough to build something useful. I put together a structured review and improvement prompt that applies established frameworks — Total Survey Error (TSE), Dillman's Tailored Design Method, and MRS best practice — to any survey draft you share with it.
The prompt is free to use. I've shared the full text below. Here's what it does — and why each step matters.
Step One: Structural audit
Before you look at individual questions, look at the whole shape of the survey. Is the order right? Standard practice runs screener questions first, then behavioural questions, then attitudinal, then demographic. Most surveys get this backwards — they open with "how old are you?" when what you actually need to establish is whether this person is in your target audience at all.
The structural step also checks length. Respondent fatigue is real. A survey that takes more than eight minutes to complete will haemorrhage completions — and the people who drop out are rarely random. You lose the busiest, most time-poor respondents first: often exactly the people whose opinion you most need.
The so-what: If your survey has no screener question and asks 25 questions, it needs to be cut before anything else changes. Structure first, questions second.
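If you want to see what the ordering rule amounts to in practice, here's a minimal sketch in Python. It checks that questions follow the screener → behavioural → attitudinal → demographic sequence. The draft questions and section labels are my own invented example, not output from the prompt:

```python
# Check that questions follow the standard survey order:
# screener -> behavioural -> attitudinal -> demographic.
# The question list below is a hypothetical example.

SECTION_ORDER = ["screener", "behavioural", "attitudinal", "demographic"]

def order_issues(questions):
    """Return a list of questions that appear earlier than their section allows."""
    issues = []
    highest = 0  # furthest section rank reached so far
    for text, section in questions:
        rank = SECTION_ORDER.index(section)
        if rank < highest:
            issues.append(f"'{text}' ({section}) appears after a later section")
        highest = max(highest, rank)
    return issues

draft = [
    ("How old are you?", "demographic"),          # opens with demographics...
    ("Do you own your home?", "screener"),        # ...so the screener is flagged
    ("How often do you use our service?", "behavioural"),
]
print(order_issues(draft))  # flags the screener and behavioural questions
```

Running this on a draft that opens with "how old are you?" flags everything that follows — exactly the backwards structure described above.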
Step Two: Question-by-question critique
This is where the detail lives. The prompt reviews every question against ten specific failure modes — the most common ways a survey question stops producing honest data and starts producing comfortable data.
The ten failure modes it checks for: double-barrelled questions, leading questions, loaded language, ambiguous wording, acquiescence bias, social desirability bias, scale inconsistencies, response option problems, jargon, and assumptions built into questions.
The so-what: For each issue found, the prompt quotes the original question, names the specific problem, and rewrites it to neutral. You get before and after — not just a critique.
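To make one failure mode concrete: double-barrelled questions ask about two things at once, so a single answer is uninterpretable. Here's a crude illustrative heuristic in Python — my own rough signal, not the prompt's actual method, and it will produce false positives:

```python
# A rough heuristic for possible double-barrelled questions: a question that
# joins clauses with "and"/"or" may be asking about two things at once.
# This is an illustrative signal only, not a reliable classifier.

import re

def maybe_double_barrelled(question):
    """True if the question contains 'and'/'or' as whole words."""
    return bool(re.search(r"\b(and|or)\b", question, flags=re.IGNORECASE))

print(maybe_double_barrelled("How satisfied are you with our pricing and support?"))  # True
print(maybe_double_barrelled("How satisfied are you with our pricing?"))              # False
```

The fix, as the prompt does, is to split the flagged question in two: one about pricing, one about support.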
Step Three: Language and register check
Consumer customers and commercial customers are not the same audience. A survey written for a residential homeowner should feel warm, plain, and easy. The same survey sent to a facilities manager or procurement lead should be professional, concise, and to the point. Most surveys get written in a single corporate register that is slightly too formal for consumers and slightly too vague for commercial buyers.
This step also flags anything that sounds like marketing — surveys should not be brand exercises. Respondents who feel they are being sold to while filling in a survey give worse data and are less likely to finish.
The so-what: If your survey intro talks about your "commitment to excellence" or "passion for customer service", cut it. Respondents want to know how long it takes and what happens next. That's it.
Step Four: Methodology recommendations
This is where the prompt shifts from critique to construction. It advises on question types — when to use a Likert scale versus NPS versus a simple ranking versus open text. These are not interchangeable. NPS is useful for tracking loyalty over time. Open text is useful for surfacing themes you haven't predicted. Ranking is useful when you need to know relative priority, not absolute satisfaction.
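To show why NPS suits tracking over time: it collapses a 0–10 "how likely are you to recommend us?" scale into one comparable number — the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch, with made-up example ratings:

```python
# Net Promoter Score: % promoters (ratings 9-10) minus % detractors (ratings 0-6).
# Passives (7-8) count toward the total but neither side of the subtraction.

def nps(ratings):
    """Return the NPS for a list of 0-10 ratings, rounded to the nearest integer."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical example data: 5 promoters, 2 passives, 3 detractors out of 10.
print(nps([10, 9, 9, 8, 7, 6, 3, 10, 5, 9]))  # (5 - 3) / 10 * 100 -> 20
```

Because the score is a single number on a fixed scale, this quarter's 20 can be compared directly with next quarter's — which is the "tracking loyalty over time" point above.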
It also recommends where to use skip logic — so respondents who haven't used a particular service aren't asked to rate it, and you don't have to filter that noise out of your data after the fact.
The so-what: The goal is data you can actually use. A survey that produces a warehouse of opinions but nothing you can act on or track over time has cost you goodwill with respondents for nothing.
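Skip logic itself is simple to picture. Here's a minimal sketch in Python — the question ids, wording, and structure are my own invented example, not anything the prompt produces:

```python
# A minimal sketch of skip logic: a respondent who answers "No" to the usage
# question never sees the rating question. All questions here are hypothetical.

questions = {
    "q1": {"text": "Have you used our delivery service in the last 12 months?",
           "options": ["Yes", "No"],
           "skip": {"No": "q3"}},   # answering "No" jumps straight to q3
    "q2": {"text": "How would you rate the delivery service?",
           "options": ["1", "2", "3", "4", "5"]},
    "q3": {"text": "How old are you?",
           "options": ["18-34", "35-54", "55+"]},
}

def next_question(current, answer):
    """Return the id of the next question to show, honouring any skip rule."""
    skip = questions[current].get("skip", {})
    if answer in skip:
        return skip[answer]
    ids = list(questions)          # dicts preserve insertion order in Python 3.7+
    i = ids.index(current)
    return ids[i + 1] if i + 1 < len(ids) else None

print(next_question("q1", "No"))   # q3 -- the rating question is skipped
print(next_question("q1", "Yes"))  # q2
```

Non-users never rate a service they haven't used, so there is no "don't know" noise to filter out of the data afterwards.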
Step Five: The clean redraft
The final output is a complete, professionally reformatted survey — ready to load directly into HubSpot Forms. Every question is formatted consistently: question text, question type, response options, and any skip logic. It closes with a warm, brief thank-you that respects the respondent's time.
The intent of every original question is preserved throughout. This is not a rewrite exercise. It is a bias-removal exercise. The form changes; the function doesn't.
The so-what: You go in with a draft. You come out with a HubSpot-ready survey that will produce cleaner data, higher completion rates, and results you can actually act on.
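For clarity, here's what one line of that output format looks like. This mirrors the plain-text convention the prompt specifies (Q[number]: question text | Type | Options | Logic) — it is a formatting convention for pasting into HubSpot Forms, not a HubSpot API call, and the example question is made up:

```python
# Format one question into the line layout the prompt's redraft uses:
# Q[number]: [question text] | Type | Options | Logic
# The example question below is hypothetical.

def format_question(number, text, qtype, options=None, logic=""):
    opts = "; ".join(options) if options else "Open text"
    return f"Q{number}: {text} | {qtype} | {opts} | {logic or 'None'}"

line = format_question(
    2, "How would you rate the delivery service?",
    "Likert", ["1", "2", "3", "4", "5"],
    logic="Shown only if Q1 = Yes",
)
print(line)
# Q2: How would you rate the delivery service? | Likert | 1; 2; 3; 4; 5 | Shown only if Q1 = Yes
```

One line per question, every field in the same place — which is what makes the redraft quick to load and easy to review.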
The Bigger Point
The hardest thing about customer research is not writing the questions. It is staying genuinely open to an answer you didn't expect — or didn't want. Most survey drafts I see are written by people who have already formed a view. The survey becomes a mechanism to confirm it rather than test it.
That is not research. It is expensive guesswork dressed up as evidence.
The prompt below helps you break that pattern. Use it. And if you've built something better — share it.
Free to Use
Copy the prompt below into Claude (or your AI tool of choice). Paste your survey draft at the bottom where indicated. Run it. Work through the output section by section.
Customer Research Review and Improvement Prompt
You are a senior market research professional with expertise in B2C and B2B survey design, questionnaire methodology, and behavioural science. You follow established frameworks including the Total Survey Error (TSE) model, Dillman's Tailored Design Method, and best practice guidance from the Market Research Society (MRS).
I am going to share a survey draft with you. Your task is to act as a critical but constructive peer reviewer. Work through the survey in full, completing all five steps below.
Step 1 — Structural Audit
Assess question order (screener → behavioural → attitudinal → demographic). Flag length and fatigue risk. Identify missing question areas. Check for a clear call to action and proper close. Output: numbered list of structural observations with a recommended fix for each.
Step 2 — Question-by-Question Critique
Review every question and flag: double-barrelled questions, leading questions, loaded language, ambiguous wording, acquiescence bias, social desirability bias, scale inconsistencies, response option problems, jargon, and assumptions built into questions. For each: quote the original, name the issue, give the corrected version.
Step 3 — Language and Register Check
Is the language right for the audience? Are sentences short and active? Is there corporate or marketing language that should be removed? Does the survey respect the respondent's time? Output: short paragraph summary with specific rewrites flagged.
Step 4 — Methodology Recommendations
Advise on question types (Likert, NPS, ranking, open text, multiple choice). Recommend skip logic. Flag open-text questions that could be closed. Assess whether the data mix will be actionable and comparable over time. Output: bulleted recommendations.
Step 5 — Redrafted Survey
Produce a clean, professional redraft incorporating all improvements. Format for HubSpot Forms: Q[number]: [question text] | Type | Options | Logic. End with a brief warm close and thank-you.
Throughout: never introduce bias in rewrites. Preserve the intent of every question. Flag anything written to confirm a preferred narrative rather than generate genuine insight. Treat B2C and B2B as distinct audiences. If a question cannot be salvaged, recommend removing it and explain why.
[PASTE YOUR SURVEY HERE]
Running research as part of a broader commercial review?
I work with consumer-facing boards and founder-led businesses on the commercial decisions that follow the data — pricing, positioning, channel strategy, customer acquisition. If you're using research to inform a bigger strategic question, I'm happy to have a conversation. No pitch. Just a conversation.
Get in Touch →

Matthew Gaunt, Board Advisor — Matthew Gaunt Associates