Dupable You, Dupable Me
A scientific approach to marketing isn't for people who want to be proven right
If you’re human, you’re dupable. It’s not a question of how smart you are. Even the world’s leading scientists can be fooled, and they know it. That’s why they invented tools like blinded tests, strict controls, replication, and peer review. These remove personal bias so that facts can better speak for themselves.
Marketers would be wise to follow scientists’ lead.
Consider the arguments typically used to establish marketing success: sales are up; focus groups gave the ads a thumbs-up; survey respondents say they’ll buy; there were oodles of web hits; it went viral; it won awards; awareness shot up; and, not to be overlooked, someone’s (or someone’s spouse’s) gut intuition just knows the campaign worked.
It’s not difficult to call all of the above into question. Factors other than advertising can drive up sales; focus groups are not predictive; people who say they’ll buy may not; you can have oodles of web hits and go viral without selling a thing; awards do not mean market success; and you can have awareness without sales. And gut intuition? Be honest. Your gut tells you the sun orbits Earth.
One can hardly blame us marketers for swallowing the above arguments. We’ve heard them throughout our careers. They sound reasonable. And, admit it, we really, really want to think our stuff works.
But if you want to know-not-just-think, it pays to borrow a few standards from science.
Say you want to know whether it was your marketing or something else that caused a sales increase. First, ask yourself what it would take to convince you that your marketing didn’t work. If your answer is, “Nothing could convince me, because I know the marketing worked,” you’re not thinking critically, but dogmatically. If your answer is, “Empirical evidence,” good for you.
There are many ways to get to empirical evidence. Here’s one: Take a representative sample of the market and divide it into two lookalike groups. Target only Group 1 with the marketing, so that, to the best of your knowledge, there will be no other difference between the groups.
Do not haul the groups into a lab. Leave them in the real world, oblivious to the fact that a test is afoot. (An easy way to target one group and not the other is to use online media or direct mail.)
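For the technically inclined, the splitting step above can be sketched in a few lines of code. This is a minimal illustration, not anything prescribed by the article: the customer IDs are made up, and the only point is that random assignment, given a large enough sample, is what makes the two groups “lookalike.”

```python
import random

def split_into_groups(customers, seed=42):
    """Randomly assign customers to two lookalike groups.

    Random assignment means any other trait (age, region, past
    purchases) balances out across groups by chance, so the only
    systematic difference left is the marketing itself.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(customers)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]  # (Group 1, Group 2)

# hypothetical customer list for illustration
group1, group2 = split_into_groups([f"cust_{i}" for i in range(1000)])
```

Group 1 then gets the campaign; Group 2 gets nothing. Everything else stays identical.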
If you want to double-blind the test, be sure that whoever tracks results doesn’t know which group experienced the marketing campaign. To triple-blind it, be sure that whoever interprets the data doesn’t know, either.
If Group 1 purchases more than Group 2, you have an indicator that the marketing is working. If the groups perform equally, you have an indicator that the marketing makes no difference. And, heaven forbid, should Group 2 outperform Group 1, you have an indicator that your marketing is hurting sales. Yeah, that happens. More often than you might care to think.
Note that I said, “you have an indicator,” not, “you can be sure.” That’s because flukes happen. To reduce the odds of a fluke, retest. One or two retests yielding like results move you from “you have an indicator” to “you can be pretty darn certain.”
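Statisticians have a standard way to put a number on “fluke”: a two-proportion z-test, which estimates how likely a gap this large would be if the marketing truly made no difference. The sketch below is one common formulation (the article doesn’t prescribe any particular test), with invented purchase counts purely for illustration.

```python
import math

def fluke_odds(buyers_1, size_1, buyers_2, size_2):
    """Two-sided p-value from a two-proportion z-test.

    Roughly: the probability of seeing a purchase-rate gap at least
    this large between the groups if the marketing had no effect.
    """
    p1, p2 = buyers_1 / size_1, buyers_2 / size_2
    pooled = (buyers_1 + buyers_2) / (size_1 + size_2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / size_1 + 1 / size_2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    # two-sided p-value via the normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# hypothetical result: 60 of 1,000 targeted customers bought,
# versus 40 of 1,000 untargeted customers
p = fluke_odds(60, 1000, 40, 1000)
```

A small p-value (conventionally below 0.05) says a fluke is unlikely; a retest with like results shrinks it further.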
A scientific approach isn’t for people who want to prove themselves right. It’s for people who want to eliminate error and self-delusion, pick up new information about their products and markets, and honestly find out what works.
It’s not unusual for clients to dismiss the process as too much trouble. It is, of course, their right. But over the long haul, proceeding on a hunch usually costs more than taking the trouble to ensure a winner from the outset.