
How Serious Are Performance Metrics: What (Really) Works and What Has Actionability Challenges…


The recently published 2013 Honomichl Top 50 Report, chronicling the business performance of major U.S.-based marketing/advertising/public opinion research firms, shows an overall decline. Even setting aside the effects of reduced government spending on research and a sluggish economy, there has been a major investment pullback in ad hoc survey studies, as clients experience increasing pressure on their marketing and communication budgets.

This is not yet a doomsday situation, but it does bring front-and-center the requirement that companies, and especially researchers and marketing planners, get much more serious about having, and leveraging, performance metrics that both reflect (i.e., show causation, rather than merely correlate with) and help build the most monetizing customer marketplace behavior. Two of the more recent, and actively adopted, approaches for doing this – Net Promoter Score (NPS) and Customer Effort Score (CES) – each carry significant interpretation and actionability challenges; and there has been much useful (and sometimes heated) back-and-forth among practitioners about the relative merits of the various customer experience and marketing metrics and models now being applied, and how granular their results can, or can’t, be.
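For readers who want to see what these two scores actually measure, here is a minimal sketch in Python. The survey responses are made up purely for illustration; the 9–10 promoter / 0–6 detractor convention for NPS is the standard published formulation, and a 1-to-5 effort scale (1 = very low effort) is assumed for CES.

```python
# Illustrative computation of NPS and CES; survey data below is invented.

def net_promoter_score(ratings):
    """NPS: % promoters (9-10) minus % detractors (0-6), on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

def customer_effort_score(ratings):
    """CES (single-question form): mean response to the effort question,
    here assumed to be on a 1 (very low effort) to 5 (very high) scale."""
    return sum(ratings) / len(ratings)

likelihood_to_recommend = [10, 9, 8, 7, 10, 6, 9, 3, 8, 10]
effort_ratings = [2, 1, 3, 2, 4, 1, 2, 5, 2, 1]

print(net_promoter_score(likelihood_to_recommend))  # 30.0
print(customer_effort_score(effort_ratings))        # 2.3
```

Note how much each score compresses: both reduce an entire experience to a single number, which is precisely the interpretation and actionability concern discussed here.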

Jim Tincher recently posted a CustomerThink blog on CES. Thoughtfully, what he has done to build interpretation value into the effort question is to add a follow-on question addressing effort expectations, a surrogate for degree of importance. This offers some dimensionality to the issue of effort. Here’s how I responded to Jim’s blog, also providing a link to perspectives on NPS:

“As an overall response to your blog, I’m not a strong supporter of CES, for the reasons I’ll enumerate; however, that said, your add-on expectations question provides a critical measure for assessing the level of importance attached to the level of effort. You and I also agree on challenges associated with endeavoring to use NPS for anything more granular than aggregated performance interpretation: http://customerthink/article/customer_advocacy_behavior_personal…

CES, for those who may be unfamiliar with the term, was originally introduced in mid-2009 by the Customer Contact Council (CCC) of the Corporate Executive Board, in a presentation titled “Shifting The Loyalty Curve: Mitigating Disloyalty by Reducing Effort”. A client asked me to review it at the time (when I was a Senior Vice President and Senior Consultant in Stakeholder Relationship Management at Harris Interactive); and among my three pages of comments were:

‘There is no holistic view of the customer experience in CCC’s conclusions, as represented in the CES or its effort reduction/mitigation focus. With specific respect to the multiple CES methodological challenges, we (Harris senior methodologists and I) feel that a customer effort score is too one-dimensional to capture the overall customer experience or, more narrowly, the customer service experience. Again, customer experience means looking at the overall perception of value through use or contact. It involves the entire system. CCC, for instance, is using callback tracking as a ‘standard proxy for customer-exerted effort’; and, very much like NPS, building their case on a single question (“How much effort did you personally have to put forth to handle your request?”, on p. 71 of the presentation), and then taking it to the next level by having a CES Starter Kit (p. 73). Our approach is to validate the impact of customer service within the overall experience and, as well, to weight actual behavior more heavily than anticipated behavior.’

If we’ve learned anything from the Kano Model since the 1980s and early 1990s, it’s the recognition that dissatisfiers can hurt loyalty behavior and enhancers can help drive more positive downstream customer action. Those receiving customer service will not be particularly energized by having their problems solved or questions answered, because these are table stakes and basic expectations. Positive service differentiators, though, can have a beneficial impact on customer experience and brand perception, informal peer-to-peer communication (offline and online word-of-mouth), share of wallet, etc.”

To paraphrase some of Bob Thompson’s thoughts on CES, the measure principally gets at how well customers are placated and mollified, not the degree to which proactive, or value-added, benefits have leveraged their behavior, especially delight. Extensive measurement of neutral, reactive customer value delivery shows little effect on loyalty or advocacy. Expectations, from the vantage point of customers, must be exceeded, and in ways that add to perceived value.

Largely irrespective of how much customers’ touchpoint effort is mitigated, without meaningful value overdelivery there is little opportunity to have a positive impact on loyalty. Being (really) serious about leveraging performance metrics means that users must get past the marketing hype and puffery represented by some measures and models (http://customerthink/blog/modeling_customer_behavior_what_works_…) and focus only on what reliably helps move the customer behavior needle.
