It seems as though every company is obsessed with surveying us after every customer interaction. If you call a support line, you can expect a satisfaction survey. If you stay in a hotel, rent a car, or buy something online, you will invariably receive one. For large brands and high-volume suppliers, some of this may make sense, but what about the small-volume suppliers?
The concept of the Net Promoter Score (NPS) came out of insightful research published in the Harvard Business Review in December 2003 as “The One Number You Need to Grow” by Frederick F. Reichheld. The idea was that consumers will be promoters, detractors, or neutrals. Neutrals do not influence the market, but promoters are good for business because they tell their friends how great your products or services are and they advocate for your business. Detractors are bad for business because they complain about your products or services to anyone who will listen. NPS is a single number that gauges the offset between the people praising your business and those trashing it.

NPS is a high-level measure that is influenced by numerous actions and interactions. A simple analogy is the “Check Engine” light in a car. The car is constantly monitoring hundreds of things and checking for optimal operating conditions. It doesn’t display all of this information or constantly show us the nuances of engine performance. Instead, we get one indicator: the “Check Engine” light. That one light masks a ton of complexity supported by numerous diagnostics. If it comes on, we know we need to take action. Similarly, NPS is a single indicator for a raft of underlying information. Clearly, you want more promoters than detractors, so NPS has become our “Check Engine” light for the business.
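For readers who want the mechanics behind that single indicator, the standard calculation is straightforward: respondents answer on a 0 - 10 scale, 9s and 10s count as promoters, 0 through 6 count as detractors, and NPS is the percentage of promoters minus the percentage of detractors. Here is a minimal Python sketch; the function name and the sample data are illustrative, not taken from any particular survey tool.

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 survey responses.

    Promoters score 9-10, detractors score 0-6; passives (7-8) count
    toward the total but toward neither group.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)


# Example: 5 promoters, 3 passives, and 2 detractors out of 10 responses.
print(nps([10, 10, 9, 9, 9, 8, 8, 7, 5, 3]))  # 30.0
```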
Unfortunately, over the years, everyone has learned to game the NPS process. We have all heard providers tell us we will receive a survey and that anything other than a 10 will be a failure. They ask if there is anything they can do to ensure a perfect score, and they beseech us to say we are promoters. That is not how it is supposed to work. There is nothing spontaneous about being told what to say. On the consumer side, we recognize the NPS question, and we know how to use our response to inflict pain. We know that giving a score of 8 is a passive-aggressive response, and a 0 is the loudest form of insult. We also know that a 10 is golden, and sometimes we just want to be nice, even though we have no intention of promoting the service. We have become informed consumers, and we are wise to our vendors’ attempts to finesse a measure of satisfaction out of us.
I mentioned the difference between a large-sample vendor and a small-sample vendor because in small samples the result is not all that meaningful. Consumers are often trying to send a message with their answer, knowing that in order to be heard they will have to be extreme. I was involved with a survey where, after the results were gathered, I personally called a number of the respondents to discuss their answers. What I learned was that even when a respondent gave a low score, the one-on-one discussion made it clear they were using their survey response to get our attention. In reality they were very committed and wanted to succeed with our offering, but they gave a low score to keep us on our toes. One client, who gave a neutral score, opened our call by saying that they were telling all of their colleagues how happy they were and how much they were looking forward to continuing to work with us. They said it was the “most important thing they would do in their program this year.” Their score did not line up with their clear intent to be a promoter. NPS in small samples can give very misleading results because just a couple of promoters or detractors can change the entire character of the calculation. Individuals with strong emotions (usually negative) can skew the score dramatically. Even cultural biases can creep into the result in a small sample. Some cultures display more polar emotions than others, or are more subdued in their evaluations. The same level of service may receive different results across cohorts just because of cultural bias. Big samples tend to homogenize these variations.
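To put a number on how little it takes to move the score in a small sample, here is a hypothetical ten-response example, reusing the nps() sketch above; the data is invented purely for illustration.

```python
# Hypothetical ten-response sample: eight promoters, one passive, one angry detractor.
baseline = [10, 9, 9, 9, 9, 9, 9, 10, 8, 2]
print(nps(baseline))  # 70.0

# The same sample with that single detractor replaced by one more promoter.
swung = [10, 9, 9, 9, 9, 9, 9, 10, 8, 9]
print(nps(swung))     # 90.0, so one respondent moved the score by 20 points
```

With ten respondents, every individual answer is worth ten percentage points, and a single flip from detractor to promoter is worth twenty.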
When you apply an NPS methodology to determine customer satisfaction, it is important to know what you are getting. Consider your sample size and any cultural biases. In addition to thinking about the promoters and detractors, consider the message being sent by the neutrals. These are the customers who are not moved to promote or detract. They are passive spokespeople, but they are still meaningful contributors to understanding customer satisfaction. If everyone gave you an 8 (all neutrals), your NPS would be 0. But considered on a 0 - 10 scale, the overall message would be fairly positive (80% satisfaction).
In a small sample, it is also important to check the numeric result against the overall sentiment. Does the number change dramatically if you eliminate the few extreme answers in each direction? In particular, look at the 1s and 0s. These are often emotional responses, and in a small sample they can have an outsized impact. If possible, go beyond the quantitative score and actually speak with the outliers to get a qualitative understanding of their responses.
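That sanity check is easy to run alongside the headline number. Again reusing the nps() sketch with an invented sample, the idea is simply to recompute the score with the most extreme low responses set aside and see how far it moves.

```python
# Hypothetical twelve-response sample that includes two emotionally charged lows.
sample = [10, 10, 9, 9, 9, 9, 8, 8, 7, 7, 1, 0]
print(round(nps(sample), 1))                        # 33.3, the headline score
print(round(nps([s for s in sample if s > 1]), 1))  # 60.0 without the 0 and the 1
```

If setting aside two outliers swings the score from the low thirties to 60, the number is telling you more about those two individuals than about the customer base, which is exactly when the follow-up conversation matters most.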
NPS is a tool, but despite the title of the HBR article, it is not the only number you should monitor. If you are a small-sample company and do not have thousands of respondents, treat the NPS score with a healthy dose of skepticism and dig deeper into the metrics and responses to get a true gauge of satisfaction.