Double-Barreled Questions, Response Bias & Other Survey Mistakes to Avoid

Response bias can affect the quality and validity of your customer feedback. Here's how to avoid it to make sure you get honest, accurate feedback from your customers every time.


Here at Fieldboom, we’re all about customer surveys. We love businesses that ask for – and incorporate – feedback from their customers to improve their products and services.

We know that launching a Voice of the Customer program gives your customers the opportunity to provide feedback, which allows you to:

  • Better understand the needs of your customers
  • Assess the effectiveness of your product or service
  • Address common complaints and other issues
  • Improve the overall operations of your organization

But problems can arise when companies don’t give the creation of these surveys the seriousness and scrutiny it deserves.

Though these surveys aren’t conducted in a lab, they are, for all intents and purposes, a type of scientific research. Just as scientists conduct experiments to determine the cause and effect of certain phenomena, companies conduct customer satisfaction surveys to figure out exactly what they’re doing well, and what they need to fix, in terms of the service they provide.

And, just as with scientific experiments, any mistake made while creating and conducting customer surveys can completely invalidate the data the surveyor collects.

A survey that returns inaccurate or otherwise unreliable results, then, ends up being a complete waste of time, money, and other resources on the part of the company.

Or, even worse, if a company doesn’t realize the results of a survey are invalid, it might end up focusing on improving or changing areas of the organization that don’t need improvement in the first place (while ignoring areas that do).

In turn, customers who took the time to voice their opinion – only to have their suggestions fall on deaf ears – will see the company in question as unresponsive, and will be far more likely to defect to a competitor.

Taking all of this into consideration, it should be clear that your customer satisfaction surveys need to be created in a way that minimizes confusion (both in terms of how your customers understand the questions asked, and how you interpret their answers) and generates reliable data you’ll be able to use to improve the service your company provides.

How Survey Bias Can Affect Customer Satisfaction Survey Results

Before we dive into describing the most common mistakes to avoid when creating customer satisfaction surveys, it’s important to understand the implications of making these mistakes in the first place.

As alluded to in the intro, survey bias occurs when respondents skew their answers due to factors not related to the question at hand. In scientific terms, such bias is the result of an experiment containing uncontrolled (confounding) variables: outside factors, beyond the one being studied, that influence the results.

If this is a bit confusing, don’t worry – we’ll clarify it all in a bit.

Survey bias comes in one of two overarching forms:

  • Response bias
  • Non-response bias

Response bias occurs, as described above, when outside factors influence a customer’s response to a single survey question.

Non-response bias occurs when respondents’ answers potentially differ from how non-respondents may have answered a question. (Again, we’ll clarify that momentarily!)

Common examples of response bias include:

  • Acquiescence Bias: The human tendency to agree and “go with the flow” to please the surveyor, rather than answer in a way that might come across as critical or insulting.
  • Demand Characteristics: The tendency of respondents to alter their answers simply because they know they’re being surveyed (in other words, answering the way they believe the surveyor wants them to, rather than simply answering honestly).
  • Extreme Responding: Ignoring possible “middle ground” responses and choosing only the most extreme negative or positive answers for a set of questions.
  • Social Desirability Bias: The tendency of an individual to respond in a way that places themselves in the most positive or favorable light.

In contrast, the following scenario illustrates an instance of possible non-response bias:

  • A company sends out a poll via email asking for information regarding its customers’ propensity to use mobile devices to make purchases.
  • Since the survey was sent via email, it can be assumed that those who received it are at least somewhat familiar with the concept of shopping via mobile device.
  • On the other hand, those who aren’t very active in terms of online shopping may not even receive the survey in the first place.
  • The results, then, would likely show that a vast majority of customers do often use their mobile devices to make purchases.
  • But such results would largely be due to the fact that those who would have responded negatively (had the survey been sent via snail mail in addition to email, for example) never actually responded.

Though in the example above the way to mitigate the bias is fairly clear (send the survey through multiple channels), there are instances in which determining whether a survey question is valid – and how to fix it – isn’t so cut-and-dried.
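
To see just how far non-response bias can skew a result, here’s a minimal simulation of the email-only scenario above, written in Python with numbers made up purely for illustration:

```python
import random

random.seed(42)

# Hypothetical population of 10,000 customers. The key (invented)
# assumption: customers reachable by email shop on mobile far more
# often than customers who are not.
customers = []
for _ in range(10_000):
    has_email = random.random() < 0.6          # 60% reachable by email
    if has_email:
        shops_mobile = random.random() < 0.8   # 80% of them shop on mobile
    else:
        shops_mobile = random.random() < 0.2   # only 20% of the rest do
    customers.append((has_email, shops_mobile))

# True rate across ALL customers (~56% with these numbers).
true_rate = sum(m for _, m in customers) / len(customers)

# The survey goes out via email only, so only email users can respond.
respondents = [m for e, m in customers if e]
survey_rate = sum(respondents) / len(respondents)  # ~80%

print(f"True mobile-shopping rate:     {true_rate:.0%}")
print(f"Rate measured by email survey: {survey_rate:.0%}")
```

Every respondent answers honestly, yet the survey overstates mobile shopping by roughly 24 percentage points – entirely because of who never received it.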

5 Common Customer Satisfaction Survey Mistakes to Avoid

In this section, we’ll discuss how the way survey questions are worded – and answer choices are presented – can unintentionally undermine the surveyor’s goal (which is, of course, to gather information that can be used to improve operations within the company).

We’ll describe and provide examples of:

  • Double-barreled questions
  • Leading questions
  • Loaded questions
  • Overwhelming and underwhelming respondents
  • Use of absolute terms

Let’s begin.

Double-Barreled Questions

A double-barreled question is one that addresses two separate issues while asking for a single response.

For example, suppose you ask customers to respond to the following statement:

The customer service representative was helpful and polite.

For the customer to “strongly agree” with this statement, the employee would have had to be both extremely helpful and extremely polite.

But what if the employee was extremely helpful, but was also rather brash? The customer wouldn’t be able to truly agree with the statement…but they probably also wouldn’t completely disagree with it, either.

Or, suppose the customer answered neutrally. Would that mean the employee was extremely helpful but not polite? Or were they extremely polite but not very helpful? Or were they just “so-so” in both regards?

Asking such a question will not only confuse the respondent, but will also render their answer indecipherable.

The best way to avoid asking double-barreled questions is to ensure every question you ask addresses one issue, and one issue only.

Additionally, you can always ask separate follow-up questions that help you see things from your customer’s perspective. Using the above example, after asking about the employee’s helpfulness and politeness in two separate questions, you might then ask something like “How did the employee’s level of politeness contribute to their ability to help you?” or “How did the employee’s level of politeness contribute to your overall experience as a customer?”

Though including these clarifying questions will make your survey a bit longer, doing so is an efficient use of your customers’ time – while using double-barreled questions is simply a waste of it.
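
If you’re auditing a large bank of existing questions, a rough automated screen can surface likely offenders for review. The sketch below is a hypothetical heuristic (not a Fieldboom feature): it flags any question containing the conjunctions that typically join two issues into one. A human still needs to judge each flag, since “and” also appears in perfectly good questions:

```python
import re

# Conjunctions that often signal two issues fused into one question.
SUSPECT = re.compile(r"\b(and|or|as well as)\b", re.IGNORECASE)

def flag_double_barreled(questions):
    """Return the questions that *might* be double-barreled."""
    return [q for q in questions if SUSPECT.search(q)]

questions = [
    "The customer service representative was helpful and polite.",  # flagged
    "The customer service representative was helpful.",
    "The customer service representative was polite.",
]

for q in flag_double_barreled(questions):
    print("Review:", q)
```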

Leading Questions

Earlier, we discussed the tendencies of individuals to respond either in a way that would provide the least amount of “friction,” or in a way that they believed the questioner wanted them to respond.

For the most part, these tendencies are triggered when an individual is asked a leading question. This type of question plants a seed of subjectivity in the respondent’s mind, making it nearly impossible for them to answer objectively.

For example, if a survey question reads “Agree or disagree: The service was excellent”, the customer is forced to assess the service they received through the lens of whether or not it was, in fact, excellent. To choose an option other than “strongly agree,” the customer would have to prove to themselves that the service was anything but “excellent.” If nothing really went wrong during the entire process, the customer would most likely “strongly agree” that the service was excellent.

However, truly “excellent” customer service means employees went above and beyond to meet the needs of the customer. So, was the service really “excellent”? Or was it simply free of any hang-ups (and, as such, merely “adequate”)? Because of the way the question was framed, it’s impossible to tell.

When creating a customer questionnaire, it’s essential that you ask each question in an objective manner. In turn, your customers will be able to approach the topics addressed in each question objectively, as well.

For example, rather than asking customers to respond to the statement, “The service was excellent,” ask: “How would you rate the service?” and provide a Likert Scale ranging from “Poor” to “Excellent.” That way, your customer defines the quality of the service provided – rather than having to argue for or against your own claim.
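
If your survey tool stores questions as structured data, the difference between the two framings is easy to see side by side. Here’s a minimal sketch (the field names and structure are invented for illustration):

```python
# Leading framing: the statement asserts a conclusion, so every answer
# is a reaction to "excellent" rather than an independent judgment.
leading_item = {
    "text": "Agree or disagree: The service was excellent.",
    "scale": ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"],
}

# Neutral framing: the respondent supplies the quality judgment themselves.
neutral_item = {
    "text": "How would you rate the service you received?",
    "scale": ["Poor", "Fair", "Good", "Very good", "Excellent"],
}

# Render the neutral item as it might appear in the survey.
print(neutral_item["text"])
for i, choice in enumerate(neutral_item["scale"], start=1):
    print(f"  {i}. {choice}")
```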

A Note on Double-Negative Questions

You might group double-negative questions as a subset of leading questions, in that they frame questions in a subjective way rather than allowing the customer to approach the topic objectively.

These are questions that introduce a topic, but frame the question in a convoluted manner.

For example:

Agree or Disagree: The salesperson was not unhelpful.

As a respondent, it’s rather confusing to know exactly what your answer means. If you agree, are you saying the salesperson was helpful? Or are you saying he wasn’t helpful? Or that he wasn’t exactly helpful, but certainly wasn’t unhelpful?

It’s rather easy to avoid asking double-negative questions: simply avoid negative constructions (the prefix “un-,” the word “not,” and so on).

And, again, avoid subjective statements. In the example above, a better way to broach the topic would be to ask: “How would you rate the salesperson’s ability to assist you?”

Loaded Questions

A loaded question is similar to a double-barreled question in that, no matter which answer a respondent chooses, they won’t be providing any truly useful information.

Unlike double-barreled questions that ask two questions in one, loaded questions assume certain information, then ask a question based on this assumption.

For example, consider the following question:

How easy was it to find what you were looking for today?

And let’s say the response choices range from “Easy” to “Difficult.”

Even if a customer responds that it was difficult to find an item, they’re inherently saying that they did, in fact, find the item.

But what if they weren’t able to find the item at all?

Think of the implication this would have on the company:

Rather than focusing on ensuring certain items are consistently in stock, the company would focus on making items easier to find. But, of course, if an item isn’t in stock in the first place, it’s impossible to find – so the effort to make available items more accessible wouldn’t address the customer’s actual problem.

As with double-barreled questions, you can avoid asking loaded questions by breaking the information up into separate questions. In this case, you would first ask “Did you find the item you were looking for today?” and then ask a question regarding the customer’s ability to locate said item.

Though the second question depends on the first, asking both removes any need for the company to make assumptions.
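
Most survey tools support this through skip logic (also called branching): the follow-up question is only shown to respondents who answered “Yes” to the first. Here’s a minimal sketch of that flow, with hypothetical question wording:

```python
def run_survey() -> dict:
    """Ask the unloaded pair of questions, branching on the first answer."""
    answers = {}

    # Question 1 makes no assumptions about the visit.
    found = input("Did you find the item you were looking for today? (yes/no) ")
    answers["found_item"] = found.strip().lower() == "yes"

    if answers["found_item"]:
        # Only now does "how easy" make sense to ask.
        answers["ease"] = input("How easy was it to find? (1=difficult ... 5=easy) ")
    else:
        # Capture the real problem instead of assuming it away.
        answers["missing_item"] = input("What couldn't you find? ")

    return answers

print(run_survey())
```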

Overwhelming or Underwhelming Respondents

We’ve addressed this issue in many of the previous sections, but there’s a bit more to go into regarding the amount of information you provide in your survey questions.

To quickly review, double-barreled questions ask for too much information in a single response, while loaded and leading questions assume and/or provide too much information up front.

On the response side of the equation, it’s possible to confuse your respondent by providing either too few or too many answer choices.

Say you ask your customer the question:

How would you rate the service you received?

…and follow with a scale of 1-3 (1 being low, 3 being high). Because there are so few choices, it’s hard to gauge the customer’s true level of satisfaction. If they circle “3,” does that mean the service was incredible? Or was it simply better than average? There’s no way to tell.

On the other hand, if you ask the same question and provide a scale from 1-20, your customers will likely have a hard time deciding whether the service they received deserves a 16 or a 17 (and, from the surveyor’s perspective, the difference between a 16 and a 17 is negligible anyway).

For more on the appropriate amount of choices to provide in your customer satisfaction surveys, check out our post on using Likert Scale questions.

Using Absolute Statements

Never use absolute statements in your customer satisfaction surveys.

See what I did there?…sorry…

Say you want to determine whether or not your store’s items are easy to locate.

Framing the question/statement as “Agree or disagree: I always find what I’m looking for when shopping at (your store)” would, as we’ve said, provide no meaningful data.

Chances are there will have been moments when your customers had trouble finding what they were looking for, or didn’t have time to look for every item they wanted, or were pulled away before finding what they needed. So, most of your customers couldn’t truthfully agree that they always find what they need in your store.

But this doesn’t necessarily mean it’s difficult to find items in your store. It just means their rate of successfully finding items isn’t 100%. But, again, this doesn’t really give you much information to work with.

Avoid using absolute statements. Instead, as we’ve said before, remain as objective as possible. Let your customers choose the absolute response if it applies to them, or a middle-of-the-road option if that’s more accurate.

Using the example above, you might ask customers to agree or disagree with the statement “I am able to find what I’m looking for when I visit (your store)”, then provide a scale ranging from “Strongly agree” to “Strongly disagree.”

The responses you receive to that question will be much more valuable in terms of improving your company’s overall operations.

Conclusion

The purpose of sending out customer satisfaction surveys is to pinpoint specific areas of your customer service that are in need of improvement (as well as areas in which you are excelling).

But you won’t be able to make any such improvements if the data you collect from said surveys is inaccurate or invalid.

Before sending these surveys to your customers, assess each question in terms of its objectivity, the assumptions it makes, and the overall usefulness of the information you hope to glean from it.
