Friday, June 27, 2008

The (Australian) qualitative blog space

Is very, very quiet.

There seem to be quite a few marketing blogs around. There are a few (albeit not many) mostly quantitative market research blogs too, e.g. the lovely Kate from Tribe and the Hitwise blog (thank you to JC for the link). But there aren’t any other Australian qualitative market research blogs that I’m aware of.

How come? Not on the radar maybe? 

Off to ponder the above.

Wednesday, June 25, 2008

Is qualitative research science or art?

Both really. I came across this the other day, which made me think.

What I find most interesting is not the idea that qualitative research is ‘science’ too, but the idea of science itself, as (just) a set of epistemological principles. A theory of knowledge. An approach.

Monday, June 23, 2008

Distortion

The issue here isn’t the frameworks per se. Most are probably grounded in sound theory and I’ve no doubt that some high quality thinking goes into developing them. All good. In theory.

But real life just doesn’t work to a framework. When we’re actually in the field, or doing analysis, how can these products be anything but a hindrance?

This is what happens when you squeeze an unpredictable output into a pre-defined framework:

-       You’re likely to change its potential shape and direction to fit the framework.

-       You’re likely to miss an opportunity because you weren’t looking for it within the framework.

-       You’re likely to give undue weighting to a particular finding, because you were looking for it within the framework.

Blinkered thinking.

Buyers beware. No matter how erudite and/or elegant the framework appears to be, unless it’s been built from scratch around your unique and specific research issues, save yourself the time and money: you might as well ask for the debrief before you do the research.

Thursday, June 19, 2008

Blinkered thinking - not good

And so back to qualitative research: What's good? What's not? 

I recently had a letter published in B&T magazine about qualitative frameworks.  In case you missed it, here’s the gist of it:

I’m not a fan of proprietary qualitative products. You know, those neatly packaged, pre-defined frameworks with the little TM (trademark) symbol tipping its hat, top right. New product development, brand development, needs based segmentations, etc. You name it – seems that there’s a framework for sale.

This is clever marketing. Research buyers, who might not be comfortable with qualitative research, who may find the approach a bit gossamer or nebulous, would probably be reassured by the cut-and-dried impression of control this gives.

But using a pre-defined framework in qualitative research is inappropriate.

Why?

Well, I don’t think I’m going to start any great debates by suggesting that good qualitative research output, by its very nature, is unpredictable. It’s exploratory: you’re (hopefully) learning something that you didn’t know. There are curls and swirls and surprises each and every way you look. That’s the magic of it.

Meanwhile, a pre-defined framework sets parameters that preclude the user from discovering much of the unexpected.

The point is that one should expect the unexpected from any decent qualitative inquiry (or why bother?).

Tuesday, June 17, 2008

How to sell it

Getting people to 'do' green is not just about selling environmentally-friendly products. It’s also, fundamentally, about selling (buying into) a state of mind. 

So I think the best use of market research in the green space will be around communications. Here we can help with the biggest question of all: how to sell green (vs what to sell).

Sunday, June 15, 2008

The great green marketing tidal wave (you ain’t seen nothing yet)


More on brilliant qualitative research later in the week.

But right now, some sustainable food for thought.

There’s an event called The Green Awards For Creativity in Sustainability in the UK. High time we had the same in Australia (think I might get onto the case).

Anyway, I’m looking forward to the 2008 UK awards. It’ll be very interesting to see if/how the key messages have changed since last year: whether they’re leading the fast-moving green collective consciousness or simply tapping into it.

The green marketing (tidal) wave presents a fascinating conundrum for market research. There's a gap between green intentions and green behaviours. So, for example, when research participants are presented with a new green product idea and say ‘I’d buy it’ or ‘Never in a million years!’, to what extent should we base decisions on their responses? Are attitudes to green changing just a bit too unpredictably and just a bit too fast for market research to be of much help here?

Yes and no.

Hang onto that cliff until next time…

Thursday, June 12, 2008

But, ahem...

Following on from my last post, what are the implications here for research buyers and research suppliers?

For research buyers

Beware the travelling junior. If, for example, you commission an eight-group project and four of those groups are run by JRs, you’ve just spent half of your research budget on training.

Unless you’re happy to subsidise these training sessions, and are confident that the output is sound enough to guide your business strategies, then insist that SRs moderate all your groups. You’ll be getting far better value for your investment.

For research agencies

It’s definitely a challenge. How will your JRs get experience if, as I’m suggesting, they shouldn’t be let loose on client work?

Well it’s a matter of rethinking the status quo. Maybe co-moderation – where JRs sit in on focus groups run by SRs to learn the qualitative ropes and practise their techniques in situ – should be standard until the JRs get some business world experience under their belts. A few years’ experience at least.

There are two key benefits here: the client gets the goods and your JRs get the experience. Yes, it’s true, the cost of the training shifts back to you. But really, ahem, it should never have been otherwise.

Tuesday, June 10, 2008

Not really best practice

So why shouldn’t junior researchers (JRs) be allowed to moderate client-paid focus groups?

Two main reasons:

1.     They’re often still learning the mechanics of facilitating a group. The cognitive effort required to master this art comes at the expense of the greater commercial purpose of the research.

2.     JRs are relatively new to the actual marketing business (vs the theory). In effect, and relative to their senior colleagues, they don’t have much business world experience.

This is not, by any stretch, a criticism of JRs. Most of the JRs I’ve worked with have been bright young things with verve, insight and a great passion for their work. But these wonderful qualities are no substitute for business world experience.

Consider this scenario:

A research agency gets a brief and a team, usually comprising senior researchers (SRs) and JRs, is put together. There’s a proposal to write and the JR will often be asked to ‘have a go’ at writing it. Of course, before it actually goes out to the client, it’s checked by the SR. More specifically, it’s checked, edited and/or rewritten. 

Ditto the discussion guide. The JR ‘has a go’ at writing it, and then, invariably, it’s pulled and pushed into shape; re-ordered and re-worked by the SR. Out in the field, the SR moderates the focus groups that the client wants to watch, while the JR does the regional/interstate focus groups that are unlikely to be viewed.

A couple of things to note here:

1.     That JRs are being involved in the whole research process and are thus getting experience in the field. Hooray, until you read the next point.

2.     That JRs, who aren’t quite across developing a proposal or a discussion guide, often end up moderating focus groups despite the fact that they don’t fully get it.

This cannot be best practice. Not by a long shot.

What are the implications here? And is there a solution? Stay tuned…

Saturday, June 7, 2008

Good moderation: what does it take?

I’ve poked a few sticks at quantitative market research practices recently. And why not? Sitting pretty from the observation deck, it’s always easy, and so much fun, to see what someone else could be doing better.

But fair is fair and now it’s time to look at qualitative research. And let’s pick on focus group moderation.

Before I go on, a little story:

When I was just starting out in qualitative research, I was based in London. Fine city, but I seemed to spend an awful lot of time out of London, on the road, conducting focus groups anywhere but London. I must have covered every corner of regional England, and a fair chunk of countryside Scotland too.

Not surprisingly, these were groups that the client didn’t see: couldn’t make it to, or didn’t want to travel for. Those who sent me off to such remote parts were clever enough to know not to put a junior moderator in front of a client. They obviously knew what any experienced moderator knows. Good moderation comes down to experience, and lots of it.

Junior researchers (JRs), perhaps straight out of university or a year or so into their careers, really shouldn’t be allowed to run client-paid groups.

There are two main reasons for this. Stay tuned...

Wednesday, June 4, 2008

A good sample

So if you're reading this, I'm assuming you didn't go for the Sony Bravia.

Following on from the last post, good, useful qualitative research output is sample dependent.

If you aren’t talking to the right people, then even the cleverest, most innovative techniques in the world won’t help.

So how do you make sure your sample is a good one? There are 3 things to keep in mind:

-  Sample definition

-  Recruitment

-  The research dynamic

Sample definition is a job for brand and product managers/marketers. Why? They typically know their business best, and it’s absolutely paramount that the sample is aligned with their business and marketing objectives. Anything short of a sample defined according to these objectives will be sub-optimal.

Recruitment is another important factor in getting the right sample. In this case, it’s down to a sharp screener and a fine recruiter (although, to get the well-screened and well-recruited participants to actually show up, you really just need luck).

Finally, the research dynamic. The aim here is to manage the research dynamic so that you get the best out of your sample. For example, deciding whether or not it’s appropriate to mix men and women in any particular group, or the best way to split ages across the groups, etc. These considerations play an important role in determining the difference between useful and useless research.

The zebra bite? A well-defined, well-screened sample, set to ‘work’ in an appropriate forum is the starting point for good, useful qualitative research.

Next time – fieldwork!

Monday, June 2, 2008

Brilliant qualitative research (and how to get it)

Qualitative research. 

Love it? Hate it? Not quite sure?

Without doubt, qualitative research output is hugely variable.

On the one hand, it can deliver quite stunning output: pragmatic insight that inspires brilliant marketing strategies.

On the other hand, the output can be a waste of time and investment. Possibly colourful and possibly entertaining, but nevertheless, quite useless.

If you want colourful and entertaining, buy a Sony Bravia. If you want inspiring, pragmatic research output, then stay tuned. Over the next week or so, I’m going to tell you how to get it.