We’ve talked a lot recently about the importance of determining your target customers’ true needs, developing a minimum viable product, and gathering customer feedback to validate your product idea.
Today, we’re going to bring these topics together under one umbrella, so to speak, by discussing the Kano model of product development and customer satisfaction.
Developed by Noriaki Kano in the 1980s, the Kano model allows a company to classify its product’s features according to the value they provide to its users.
In turn, this allows development teams to focus on optimizing a product’s essential features and to recognize when certain features are unnecessary or superfluous – and thus avoid wasting valuable time, money, and energy creating and maintaining them.
Keep reading to learn:
- The categories of quality that a given feature can be classified into within the Kano model, as determined by your customers
- How to collect this information from your customers
- How to use the information you collect to determine which category a certain feature falls into
Without further ado, let’s dive in!
The following graph probably won’t mean much to you right now, but that will change as you keep reading:
For now, the most important thing to understand is that the category a feature ends up falling into depends on the level of functionality it offers and the satisfaction it provides the user. This will become clearer soon enough.
Depending on these two overarching qualities, a feature will be placed into one of the following categories:
- Must-Be
- One-Dimensional, or Performance
- Attractive
- Indifferent
- Reverse
Let’s take a deeper look at each of these classifications separately.
As the name implies, Must-Be features are those which users deem essential in order for the product as a whole to work as expected.
Examples of Must-Be features include:
- A working steering wheel on a vehicle
- The ability to make phone calls on a new smartphone
- Buttons on a button-down shirt
Sounds pretty obvious, right?
That’s the point.
Must-Be features must be present, or the product won’t work – and won’t hold any value in the eyes of the customer.
But, there’s a bit more to discuss, here. Take a look at how Must-Be features appear on the graph we referred to earlier:
As we said, the absence of Must-Be features will result in poor functionality – leading to poor customer satisfaction. However, while the presence of Must-Be features has a positive effect on functionality, it doesn’t lead to a positive level of customer satisfaction.
To explain what we mean, here, let’s go back to our examples from before:
If you bought a car without a steering wheel, a smartphone that can’t make phone calls, or a button-down shirt without buttons, how would you feel?
(You don’t need to answer that. We know you’d be pretty peeved.)
On the other hand, how would you feel if you bought a car with a steering wheel, a smartphone that can make phone calls, or a button-down shirt that actually comes with buttons?
You probably wouldn’t be all that excited; after all, you expect these features to be present. There’s no “wow” factor here.
The presence of a Must-Be feature doesn’t add any level of satisfaction to the user’s experience with your product – but the absence of such a feature will absolutely destroy the overall experience for the user.
Now that we have a better understanding of the graph from above, let’s look at how Performance features appear on it:
As the graph shows, the level of satisfaction a One-Dimensional feature provides directly correlates to the level of functionality of said feature.
To further illustrate this point, consider the following examples:
- A vehicle’s miles-per-gallon metric
- The hard drive size of a smartphone
- The processing speed of a computer or laptop
Each of these examples is a case of “the more, the better” (as opposed to “either/or,” as with Must-Be features). Generally speaking, the more miles per gallon a driver gets, or the more hard drive space or processing power an electronic device has, the higher the user’s level of satisfaction will be.
(Note that we said “generally speaking”; we’ll get to that in a bit.)
For development teams, determining “how much” of a Performance feature a product should have is a bit more involved than considering Must-Be features. While, again, enhanced functionality will typically lead to higher levels of satisfaction among users, dev teams will have to consider:
- The cost of enhancing the feature above and beyond user expectations
- The amount of extra cash users will be willing to spend to receive the enhanced functionality
- Whether enhancing a certain feature will help differentiate the product from the competition, in the eyes of the user
In an ideal world, you’d want to provide your users with as much performance from a One-Dimensional feature as you possibly can. Within the context of running a profitable business, you’ll instead need to find a “sweet spot” in which you can maximize the value you provide your customers while also maximizing the profit your product brings in.
Going back to our trusty graph, Attractive features appear like so:
While not entirely accurate, you might consider Attractive features to be the inverse of Must-Be features: Simply including these features – no matter the level of functionality – can be enough to enhance your users’ satisfaction. Along with this, the absence of an Attractive feature doesn’t affect user satisfaction at all (since the user didn’t expect the feature to be included in the first place).
Consider the following examples:
- A miles-per-gallon meter on a car’s dashboard
- A smartphone with wireless charging capabilities
- Free two-day shipping from an online retailer
None of the features listed above are essential to the basic functionality of the product or service in question; but they certainly do add value to them.
Now, where the “Must-Be inverse” comparison falls apart is in the fact that the more functional an Attractive feature is, the more satisfaction it provides – to a point.
Using the automobile example, the addition of a miles-per-gallon meter is certainly a nice touch – but one that displays your MPG down to the tenth of a mile per gallon is even better. However, an MPG meter that displays the metric down to the thousandth of a mile per gallon would be quite unnecessary in the eyes of the average driver.
While it certainly benefits a company to include Attractive features within its products, doing so should never be done at the expense of the functionality (or inclusion) of a Must-Be or One-Dimensional feature. As a simple example, a car that includes an MPG meter – but does not include a steering wheel – is obviously not going to be well-received by its users.
Similarly, the inclusion of Attractive features should never break the bank or overextend a company’s budget. Remember: the mere presence of such features will likely increase user satisfaction in the first place, so there’s no need to go overboard. Rather, the goal when including Attractive features should be to “wow” your customers with as little investment as possible.
(A quick note: As time goes on, and technology becomes more and more advanced, features that were once considered “Attractive” – such as wireless smartphone charging capabilities – will eventually become Must-Be or One-Dimensional features. For example, a decade ago, pretty much any “smart” functionality of a cell phone was considered incredible; nowadays, this is simply par for the course.)
Indifferent features are those that, quite simply, the user just doesn’t care about one way or the other.
In other words, the level of functionality provided by Indifferent features has no bearing on the user’s satisfaction level.
Examples of Indifferent features include:
- The type of plastic a bottle of juice comes in
- Whether a car’s gas tank is located on the left or right side of the car
- The color of a printer’s casing
(Note: These are hypothetical examples; as we’ll discuss in a bit, it may be that the type of plastic your company uses does matter to your eco-conscious customer base.)
It’s important to note that these features may (or may not) have some internal value to the company that provides the product (e.g., perhaps a certain type of plastic is more cost-effective). However, the Kano model (and the ensuing analysis) is not concerned with this value at all.
At any rate, the point of determining which of your product’s features your users are indifferent to is to avoid investing excess resources into changing and/or improving said features. Since no amount of improvement will matter to your users anyway, there’s no sense in focusing on optimizing these features.
Reverse features are, truly, the inverse of One-Dimensional (Performance) features.
In other words, Reverse features are those which actually detract from the user’s level of satisfaction as they increase in functionality.
Examples of Reverse features include:
- Too many buttons on a steering wheel, to the point that the driver becomes distracted
- Advanced software features that are too complicated for the average user to utilize
- An excess amount of tables at a restaurant, to the point that it’s always crowded
To be a bit more extreme, some Reverse features are simply inexplicable, and clearly add no value to the product.
Ideally, developers will be able to avoid including Reverse features without much input from the user (since said features should be fairly obvious). Still, it’s important to notice when users denote a feature as being “Reverse,” as it will allow you to avoid unwittingly including a feature your customers actively do not want.
Now that we’ve explained the categories your product’s features can fall into (in the eyes of your customer), let’s discuss how to actually get your customers to categorize your product’s potential features for you.
As we’ve alluded to throughout this article, you’re going to need to survey your target customers in order to determine their views on a specific feature of your product. If you don’t have a way to do that now, Fieldboom can help.
The very first thing you’ll need to do, here, is create a comprehensive list of the features you want to focus your users’ attention on.
Note that this does not mean you’ll be asking your users about every feature of the product in question, as this would mean you’d need to create a rather lengthy survey (which, in turn, would likely cause your response rate to plummet).
Rather, you’ll want to focus on roughly 20 features that your team has determined to be most relevant to the value provided to the users being surveyed.
That said, you also want to determine who amongst your entire customer base you intend to distribute the survey to. As different customer segments will likely place varying levels of value on different features of your product, you’ll want to tweak the list of features your survey focuses on accordingly. At the same time, however, you want to be sure to include the major features of your product in each of these surveys, no matter which audience you’re focusing on at the present moment.
(The point is that, upon analyzing the results of your surveys, you’ll want to be able to pinpoint which features are most valuable to your entire audience as a whole, as well as which features are valuable only to certain user segments.)
Once you’ve determined the features to focus on, and the audience segment you’ll be distributing the survey to, you’ll need to actually create the survey.
Basically, Kano surveys consist only of the following two questions, each asked once per feature:
- “How would you feel if (product) included (feature)?”
- “How would you feel if (product) did not include (feature)?”
For each question, you’ll provide the following response options (or something similar):
- “I would enjoy it”
- “I expect it”
- “I am neutral” (or “I don’t care”)
- “I would dislike it, but I can tolerate it”
- “I would dislike it, and I wouldn’t use the product because of it”
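To make the survey’s structure concrete, the question pairs above can be generated programmatically from a feature list. This is only an illustrative sketch – the product name and features below are hypothetical placeholders, not part of any real survey tool’s API:

```python
# Minimal sketch: generate Kano question pairs for a list of features.
# The product and feature names are hypothetical examples.
RESPONSE_OPTIONS = [
    "I would enjoy it",
    "I expect it",
    "I am neutral",
    "I would dislike it, but I can tolerate it",
    "I would dislike it, and I wouldn't use the product because of it",
]

def build_kano_questions(product, features):
    """Return one (affirmative, negative) question pair per feature."""
    pairs = []
    for feature in features:
        affirmative = f"How would you feel if {product} included {feature}?"
        negative = f"How would you feel if {product} did not include {feature}?"
        pairs.append((affirmative, negative))
    return pairs

questions = build_kano_questions("the car", ["an MPG meter", "wireless charging"])
```

Each pair would then be presented alongside the same five response options, keeping the survey short and uniform.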
Now, while this seems rather straightforward, there are a fair number of factors to consider that, if left unchecked, could end up skewing the results of your survey.
For one thing, you want to be clear about how the feature benefits the user (i.e., the value it provides them), rather than simply listing the feature itself. For example, if addressing a car’s miles-per-gallon metric, you might word the affirmative question as “How would you feel if you could drive the car 500 miles on a single tank of gas?” rather than “How would you feel if the car got 30 miles per gallon?”
Similarly, when possible, you’ll want to provide an illustration of the feature being focused on. For electronic surveys, this might mean including a video demonstrating the feature in action; for paper-based surveys, you might include a diagram of the feature, or a series of photographs illustrating the feature in use.
It’s also important to structure your survey in a way that minimizes the potential of receiving invalid or biased responses.
Typically, such invalid responses are due to one of three things:
- Questions that lack focus (and/or focus on multiple topics at once)
- Confusion regarding what the question is actually asking
- The order in which responses are listed
Regarding the first misstep, you should always focus on one feature per question pair. For instances in which multiple “sub-features” combine to create one major feature, list each of these sub-features separately.
Regarding the second misstep, it’s a common mistake for survey creators to word the “negative” question of the question pair as the opposite of the affirmative question, instead of referring to the absence of the feature in question. For example, if the affirmative question is “If the car gets 30 miles per gallon, how do you feel?”, the negative question should be “If the car doesn’t get 30 miles per gallon, how do you feel?” rather than “If the car gets less than 30 miles per gallon, how do you feel?”
Finally, the order of response options can cause confusion for respondents, as well. For example, some would say that “I expect it” should be the top response option, since it’s seemingly based more on fact than opinion (as opposed to “I would enjoy it,” which seems rather subjective). However, listing “I expect it” first may unintentionally lead to many more respondents selecting this option instead of “I would enjoy it” – which would lead the surveyor to believe that an “Attractive” feature is really a “Must-Be.”
(You should also explain to respondents exactly what the point of the survey is in the first place. In doing so, you’ll hopefully be able to avoid instances in which respondents simply pick the first response option that they believe applies to them for a given question.)
Before you distribute your survey, be sure to review it with your development, marketing, and other teams to ensure accuracy and preciseness across the board. Again, the last thing you want is to receive responses that, for one reason or another, have been rendered irrelevant.
While not completely necessary, you might also consider including an additional question after each question pair:
On a scale of 1-9 (9 being “incredibly important,” 1 being “not at all important”), how important is (the feature in question)?
Your respondents’ answers to this question can provide you with an extra layer of understanding in terms of which features they truly value most – especially in instances in which two features are categorized similarly.
For example, let’s say a car company’s customers define both a five-star safety rating and an internal GPS as being “Attractive” features (i.e., both are nice to have, but aren’t exactly necessary).
Without this supplemental information, it can be a tough call as to which feature to focus on improving. However, by asking this optional question, you’re able to determine that most of your respondents actually place much more importance on the safety rating than they do on the inclusion of a GPS. In turn, you’ll know to focus on ensuring the car passes the most rigorous safety tests available before focusing on improving the internal GPS capabilities.
Before you actually begin analyzing your responses, you need to understand how each pair of responses culminates in the categorization of a given feature.
Take a look at the following chart:
Depending on a respondent’s answers to both questions, they’ll have essentially stated which category they believe a feature belongs in.
For example, if a respondent says they’d “like it” if a feature was present, and would “live with it” if the feature was not present, then it’d be considered an “Attractive” feature. Or, if a respondent said they’d “expect it” for a given feature to be present, and “dislike it” if the feature was not present, the feature would be considered “Must-Be.”
One thing to note here is that responses which contradict each other (e.g., a respondent saying they’d “like it” both if a feature was present and if it wasn’t) are considered questionable. Ideally, you won’t receive many – if any – responses that fall into this category; if you do end up receiving a significant number of questionable responses, though, it may be a sign that your survey was flawed in one way or another.
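In code, this kind of chart is naturally a lookup table. The sketch below uses shorthand labels for the five response options listed earlier (a simplifying assumption) and follows the commonly used Kano evaluation table; individual practitioners sometimes arrange the cells slightly differently:

```python
# Sketch of a standard Kano evaluation table.
# Shorthand answers (an assumption) map to the five response options:
# "like" = enjoy it, "expect" = expect it, "neutral" = neutral,
# "tolerate" = dislike but tolerate, "dislike" = dislike and wouldn't use.
# Categories: M = Must-Be, P = Performance, A = Attractive,
# I = Indifferent, R = Reverse, Q = Questionable (contradictory).

# Rows: answer to the affirmative ("included") question.
# Columns: answer to the negative ("did not include") question.
TABLE = {
    "like":     {"like": "Q", "expect": "A", "neutral": "A", "tolerate": "A", "dislike": "P"},
    "expect":   {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "neutral":  {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "tolerate": {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "dislike":  {"like": "R", "expect": "R", "neutral": "R", "tolerate": "R", "dislike": "Q"},
}

def classify(affirmative_answer, negative_answer):
    """Map one respondent's answer pair to a Kano category."""
    return TABLE[affirmative_answer][negative_answer]
```

For instance, `classify("like", "tolerate")` yields Attractive, `classify("expect", "dislike")` yields Must-Be, and the contradictory pair `classify("like", "like")` yields Questionable – matching the examples described above.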
At any rate, your next order of business will be to go through each response and tally the number of times a feature was defined as Must-Be, Performance, Attractive, Indifferent, or Reverse. Whichever category receives the most tallies is the one you’ll assign to the feature in question.
In instances in which two or more categories receive an equal number of responses, you’ll want to place the feature in the highest-valued category, according to the following hierarchy: Must-Be > Performance > Attractive > Indifferent > Reverse.
So, for example, if 47 respondents consider a feature to be a “Performance” feature, and 47 people also consider it to be “Attractive,” you’d define the feature as “Performance.”
For each feature, you’ll also want to calculate the average level of importance your respondents placed on it. Again, this will help you decide which features within a given category to prioritize, and which to put on the backburner for the time being.
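The tallying, tie-breaking, and importance-averaging steps might be sketched as follows. The descending order used here (Must-Be, then Performance, then Attractive, then Indifferent, then Reverse) matches this article’s prioritization for the top three categories; placing Indifferent and Reverse last is an assumption, and the vote counts are made up for illustration:

```python
from collections import Counter

# Tie-break order, highest-valued category first. Ranking Indifferent
# above Reverse at the bottom is an assumption for this sketch.
HIERARCHY = ["Must-Be", "Performance", "Attractive", "Indifferent", "Reverse"]

def categorize_feature(categories):
    """Tally per-respondent categories; ties go to the highest-valued one.

    Assumes questionable (contradictory) responses were discarded beforehand.
    """
    tallies = Counter(categories)
    best = max(tallies.values())
    tied = [cat for cat, count in tallies.items() if count == best]
    # Of the tied categories, pick the one earliest in the hierarchy.
    return min(tied, key=HIERARCHY.index)

def average_importance(scores):
    """Mean of the optional 1-9 importance ratings for a feature."""
    return sum(scores) / len(scores)

# Made-up responses reproducing the 47-47 tie described above.
votes = ["Performance"] * 47 + ["Attractive"] * 47 + ["Indifferent"] * 6
print(categorize_feature(votes))  # Performance wins the tie
```

Running this on the tied example prints “Performance,” since Performance sits above Attractive in the hierarchy.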
Once you’ve determined for certain which category each feature belongs in, you’ll be ready to actually begin developing and optimizing these features.
While it probably goes without saying, your main focus should be on all of your Must-Be features. Remember: without these features in place, your users simply will not want anything to do with your product.
After optimizing all of your Must-Be features (and we do mean all of them), your next priority will be optimizing the Performance (One-Dimensional) features your respondents report as most important. At this point, you’re likely going to want to consider shelving some of these features due to factors such as budget constraints or a lack of perceived value in the eyes of your customers. Ideally, there will be a fairly obvious cut-off point at which you can decide whether or not to focus on a given feature (and all features above or below said point).
Finally, you’ll want to consider including the most valuable Attractive features on your list. At this point, you’ll almost certainly be facing budgetary constraints of some kind, so you’ll want to be extra certain that the Attractive features you focus on are those your users will find valuable. In other words, you don’t want to waste the last of your budget on features your users don’t care about – budget that could have been put to better use elsewhere.
When developing a product or service of any kind, your main focus should always be on the value said product or service will provide your users.
More specifically, it should be on the value you’re providing your users as defined by the users themselves.
That said, the Kano Model and its accompanying analysis can provide your development team with valuable insight regarding what your customers actually want from your company. In turn, you’ll be able to become laser-focused on optimizing these specific features in order to maximize the value you provide your target audience.
(Note: As you’ve just seen, the Kano model involves collecting and analyzing customer feedback. If you don’t have a way to do that now, Fieldboom can help.)