Reading Survey Results with a Critical Eye

WHEN IBM SPEAKS, it’s generally a good idea to pay attention. They are, after all, IBM. Perhaps you’ve heard of them. But with this year’s edition of their annual State of Marketing Survey due soon, this strikes me as a good time to share some observations about last year’s.

Not familiar with the IBM survey? That’s OK. This post is really about how to read research reports with a critical eye. No matter whose research you read, the advice applies.

Let’s start with the footnotes in the back. “… The margin of error for this sample at the 90 percent confidence level with a 50 percent response rate is ±4.3 percent.” Hold on. That’s the statistical margin of error, which would be great were we dealing with empirical data. We’re not. Look at the survey questions. “How much ownership do you and your marketing organization have for …” “Please rate the level of responsibility marketing has for …” “To what extent does your organization face …” These are not the sorts of questions that get you to empirical data. They get you to personal perceptions. The answers are subject not so much to statistical error as to human error.

There is no reason to assume that respondents’ perceptions are accurate. Nor is there any reason to assume respondents are leveling. Sometimes unwittingly, sometimes not, respondents often provide the “correct” answer: one that makes the company look good, makes them feel smart, makes them feel like they’re being fair, or that gets back at that senior vice president for being so rude. Answers will reveal more about the biases and naiveté of respondents than about the subject they’re asked to comment on.
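For what it’s worth, the footnote’s arithmetic is easy to check. Here is a minimal sketch in Python, assuming the standard margin-of-error formula for a proportion (the report doesn’t publish its method, so treat this as a back-of-the-envelope reconstruction). Note what the formula measures: sampling error only. It says nothing about whether the answers mean anything.

```python
import math

# Back out the sample size implied by the footnote, assuming the
# standard margin-of-error formula for a proportion:
#     MoE = z * sqrt(p * (1 - p) / n)
# (An assumption on my part; the report does not state its method.)
z = 1.645    # z-score for a 90 percent confidence level
p = 0.5      # the "50 percent" figure; p * (1 - p) is maximal here
moe = 0.043  # the reported +/- 4.3 percent

n = z**2 * p * (1 - p) / moe**2
print(f"Implied sample size: about {math.ceil(n)} respondents")  # ~366
```

Roughly 366 respondents, in other words. A perfectly respectable sample size, and perfectly irrelevant if what it samples is perception rather than fact.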

The report compares responses from low- versus high-performing companies and suggests causation. The leap is unwarranted. One should give equal consideration to reverse causation (perhaps high performers do these things because success gives them the budget for them, not the other way around) or to no causation at all. “No causation” is another way of saying “coincidence.” Despite what New Age gurus say, there really is such a thing, and mistaking it for data can be costly.
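To see how easily coincidence masquerades as a relationship, here’s a toy simulation of my own (nothing in it comes from the survey): two series generated completely independently of each other still correlate strongly a surprising amount of the time.

```python
import random

def random_walk(steps):
    """One series of cumulative random steps -- pure noise."""
    value, walk = 0.0, []
    for _ in range(steps):
        value += random.gauss(0, 1)
        walk.append(value)
    return walk

def correlation(xs, ys):
    """Pearson correlation, from scratch to keep this self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
trials = 1000
# Count how often two series with no relationship whatsoever
# still correlate at |r| > 0.5. It happens far more than intuition says.
strong = sum(
    abs(correlation(random_walk(50), random_walk(50))) > 0.5
    for _ in range(trials)
)
print(f"{strong} of {trials} pairs of unrelated series show |r| > 0.5")
```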

The report suggests that large differences between low and high performers are significant indicators of how to do things right. This is selection bias at its best: what makes our point matters; what fails to make it doesn’t. There are almost as many instances of no difference, or even inverse differences, between high and low performers. The authors could just as easily have cherry-picked those as significant.
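The cherry-picking problem is just as easy to demonstrate. In this sketch (again illustrative, with made-up group sizes and item counts), respondents are split into two groups at random, so there is no real difference between them, yet across enough survey items the largest gap can still look like a finding.

```python
import random

random.seed(1)
N_RESPONDENTS = 100  # per group; illustrative, not the survey's n
N_ITEMS = 40         # number of survey questions; also illustrative

def item_means(n_items, n_respondents):
    """Mean score per item on a 1-5 scale, answered uniformly at random."""
    return [
        sum(random.randint(1, 5) for _ in range(n_respondents)) / n_respondents
        for _ in range(n_items)
    ]

high = item_means(N_ITEMS, N_RESPONDENTS)  # the "high performers"
low = item_means(N_ITEMS, N_RESPONDENTS)   # the "low performers"

gap, item = max((abs(h - l), i) for i, (h, l) in enumerate(zip(high, low)))
print(f"Biggest gap across {N_ITEMS} identical items: "
      f"{gap:.2f} points (item {item})")
# Both groups answered identically at random, yet some item always
# "separates" them. Report that one and you have a headline.
```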

Finally, the questions themselves fail to impress. They read not so much as insightful research as a marketer’s wish list dressed up as a survey. The survey looks designed to produce “data” for marching on management and making demands rather than to yield real information.

I’m not accusing anyone of deliberately misleading. Bias and fallacy have their way with the best of us, despite our best efforts and intentions. That’s why controls are important. So is a working knowledge of logical fallacies and biases.

I’m not above fooling myself. Neither is anyone else. But if we’re serious about real data that yields real insights, we need to exercise vigilance. Even over ourselves.
