Thursday, October 16, 2008
Customer satisfaction. You want it. You need it. Your KPIs demand it.
But how do you get it? And how do you measure it?
“Already onto it!” you say. “Covered that in our customer satisfaction survey, with some questions asking our customers how satisfied they are.”
Perhaps surprisingly, it’s not very helpful to ask your customers how satisfied they are. If they rate you highly, the most you’ll get out of it is a warm fuzzy feeling. If they rate you poorly, you’ll be scratching your head, wondering why.
Either way, it’s a waste of both time and money to ask them how satisfied they are without understanding what ‘satisfaction’ actually means to them.
An important step in the market research process – but one that’s all too often overlooked – is to identify the dimensions of satisfaction from your customers’ perspective.
How do they describe it? What does it feel like? What does it depend on? And so on…
Good qualitative research can answer these questions: it can give you important and relevant detail that you’d otherwise miss.
And this kind of information has legs that go beyond KPIs (which always seem to get in the way of organisations becoming truly customer focussed, but that’s another discussion in itself). Good qualitative research can tell you exactly what action you need to take. It can tell you what you’re doing right, what you could be doing better, and where the opportunities lie.
Pretty good, eh?
Monday, October 13, 2008
So it’s actually more like 100+ porcupine reads: each with its own clever blend of fine thinking and discourse. Value.
And Zebra Bites made the list! Currently sitting, stripey and pretty, at #96. Interestingly, Zebra Bites is one of only 3 market research blogs on the list. Even more interestingly, it looks like it's the only qualitative market research blog listed…
Wednesday, October 8, 2008
This is a qualitative make-over post.
I’m going to do a 'before and after' on a sentence I pulled from a qualitative research report I was asked to read recently.
Here’s the before sentence: Of the 4 groups, 83% of people said they liked the design.
Now some of you reading will, at this point, know exactly what I’m talking about: you can go and play. For those who don’t, please keep reading.
There are many things wrong with the above sentence being part of a qualitative research report. Here are 3 points to start:
1. It’s not robust
Percentages, in a qualitative context, are pretty much meaningless. While at first blush, a grand 83% looks pretty good, what does it really mean? It means that, assuming 4 groups of 8 participants each, roughly 26.6 research participants (83% of 32), all screened to fit a particular profile and all willing to attend a particular research group, said they liked the design. That’s a very small, skewed sample: hardly robust and hardly worth reporting. But that’s only the beginning…
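The arithmetic behind that headline percentage is quick to check. A minimal sketch, assuming (as the report did not state) 4 groups of 8 participants each:

```python
# Sanity-checking the "83%" claim. The group size of 8 is an assumption,
# not a figure from the report.
groups = 4
per_group = 8
total = groups * per_group     # 32 participants in all
liked = 0.83 * total           # the reported 83%

print(total)                   # 32
print(round(liked, 1))         # 26.6 -- not even a whole number of people
```

A sample of 32 (and a "finding" resting on a fractional 26.6 people) is exactly why percentages in qual reports mislead more than they inform.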
2. It’s not controlled
To get a good, clean read on any particular issue in a quantitative survey, the way the questions are ordered and the way they are asked are key. For all intents and purposes, and as much as possible, the survey should be administered in a controlled environment. Even rotating the order of questions is a form of control. This ensures reliability (being able to replicate the findings) and therefore, some confidence in the results.
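To make "rotating the order of questions" concrete, here is a minimal sketch of one common approach, cycling the starting question across respondents (the question wording and function name are my own illustration, not from any real survey):

```python
# Hypothetical illustration of rotating question order across respondents,
# one common way of controlling for order effects in a quantitative survey.
questions = [
    "Q1: overall satisfaction",
    "Q2: rating of the design",
    "Q3: likelihood to recommend",
]

def rotated(qs, respondent_id):
    """Return the questions cyclically rotated, so each successive
    respondent starts the survey at a different question."""
    i = respondent_id % len(qs)
    return qs[i:] + qs[:i]

# Respondent 0 sees Q1 first, respondent 1 sees Q2 first, and so on.
print(rotated(questions, 0))
print(rotated(questions, 1))
```

Because the rotation is systematic, any bias introduced by question position averages out across the sample, which is what makes the read "clean".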
In contrast, to get a good read in a qualitative study (we don’t necessarily go for clean in qual), we need to dance around a bit. Cover the floor. A good qualitative facilitator will bounce around, jump ahead, reverse, turn corners, step to the side…you might even see a grand jeté.
The point here is that the context within which the question is asked, i.e. the discussion group, will vary wildly for each group. In effect, it will be confounded by all sorts of, well, confounding variables; not least, the discussion itself. The fact that 83% of people said they liked the design means absolutely zip without understanding the discussion that came before.
3. It's open to misinterpretation
The third and most worrying point is the potential for misinterpretation. The most obvious here is making the assumption that 83% of people, per se, liked the design. And then using this ‘finding’ to make Big, Important and Expensive decisions, like changing the design.
Here’s the after sentence: Positive feedback for the design was based on factors X, Y and Z.
Note the glaring (and appropriate) lack of percentages?
Thursday, October 2, 2008
I’ve been watching the BigPond/Twitter proceedings with great interest.
In true Web 2.0 fashion, the discussion has taken on a life of its own.
Well, for the most part.
Ironically, BigPond’s own contribution to the conversation has been somewhat stilted. A bit reserved. A bit corporate.
No surprises there. It’s what they do and what they are. And being corporate is fine in a prospectus. And it’s fine on TV. It’s part of their groove and, no doubt, part of what gives the ‘big’ in BigPond some credibility.
But being corporate on Twitter just doesn’t work. And trying to be not corporate when, by all intents and systems, you are corporate just isn’t very credible.
Can Twitter ever be the right medium for corporates? As corporates?
I don’t know, but I don’t think so.
P.S. Then again…