Beware the Oracles

Listening to the pundits speculate about why the polls failed to predict the Clinton win in New Hampshire, why they had prematurely counted out McCain, Huckabee and Obama, why they had paid so much attention to Thompson and why they can’t call the nominees yet, has caused me to think about experts and expertise, particularly in marketing.

The presidential elections are intensely competitive, often interactive, marketing campaigns. As such, they fascinate me.

For once, the political pundits were talking about something that I know a lot more about than they do. They were talking about marketing research, even though they talked in terms of polls and elections, rather than surveys and purchases. Polls = surveys and elections = a purchase, or product selection. What’s different is that in elections, the products talk (I know, the analogy breaks down with Elmo).

What was amazing to me is that the pundits were doing exactly the same thing as the marketing executives, agency executives and consultants I have been observing for decades. They blindly project past behavior and intentional data (what people say they will do) into the future. They are very, very often wrong. But they are highly paid, so they have to look right. So what do they do? They make up plausible stories.

Sometimes, they even go out and find the data that will back up their stories. What’s going on here?

The network’s producers call up an expert whom they are about to book for a show. They ask that expert what he or she thinks. Producers are not going to provide an airplane and a limousine to the studio for an expert who is simply going to say, “we just don’t know.” They want people with definite opinions, strongly held. They want controversy. They want plausibility. The pundits are all too willing to provide that. They spin out plausible explanations with great certainty.

The same is true with the marketing pundits, when they are making predictions.

I’m standing up, like that little boy in the Emperor’s New Clothes, and telling you there’s nothing there. They don’t have a clue.

Not when they’re explaining the past, predicting the future or describing the present. Why? Because there is much more that they don’t know than they do know. Furthermore, they don’t realize it, because if they knew what they didn’t know, it wouldn’t be unknown. You don’t know what you don’t know. So, while you do know that you don’t know some things, you severely underestimate how much you don’t know.

So, they are saying things like the Clinton voters showed up in greater numbers than were expected because they were angered by other candidates ganging up on Hillary, or because of Hillary’s becoming choked up. Never mind that this was not reflected in the exit polls. They quote their mothers, their friends, or passersby in the streets.

The actual experts in polling, who are, for the most part, pretty dull and therefore don’t make it to television interviews, are saying that it was probably the Bradley effect (black candidates do worse in the actual elections than they do in the polls), or the fact that lower-income people, who favored Clinton, do not like to be interviewed and have a higher refusal rate. They would be likely to refuse both polls prior to the election and exit polls. These, to me, are the most likely explanations, but they are not as politically correct as other explanations. Notice that I said “most likely.” The so-called experts rarely use this or equivalent phrases. The fact is that we don’t know, and may never know.

So, what are we to do? In politics, we have to make predictions. In business, we have to make forecasts. One very successful marketing vice president, who came up from marketing research, when I asked him once about forecasting said, “give them a date, and give them a number, but never, ever at the same time.”

I used to be asked, “Based on the focus groups, will the product be successful?” I suspect that my answers were rather disappointing. Now, I am never asked that question because I make it clear beforehand that focus groups and surveys can’t predict a product’s success, although they can sometimes predict a product’s failure when there is a fundamental flaw in the product. There are just too many things that have to go right for success. For instance, there is no way to predict what competitors will do. What if an iPod or an iPhone comes along? What if you are Alta Vista, doing a great job, and a Google comes along?

There are last-moment, decisive factors that hit people when they are in the privacy of the voting booth, about to click their product choice, or standing at the shelf in the store.

People do not know what they are going to do. They don’t know what they will buy or not buy (or whom they will vote for). They do not know how strongly held their preferences are. They do not know “what it would take to get you to buy the product,” a favorite, stupid question that marketers like to ask. Or, “On a scale of 1 to 10, 1 being least likely and 10 being certain, how likely are you to vote for your previous choice?” (Who says there are no stupid questions?)

If people (including pollsters) can’t predict their own behavior, how do you think they’re going to predict others’?

So, what can focus groups, polls and surveys tell us? They can tell us about many obvious and hidden attitudes, opinions, beliefs, wishes, fears, etc. that may need to be addressed. They can tell us, for instance, that people are frustrated because their music libraries are a mess. They can tell us that the iTunes/iPod system of keeping them organized addresses that frustration. They can’t tell you that these will displace the ubiquitous Walkmans and CD players. They can’t tell you that these will take over the music industry.

They can’t tell you that an obscure Arkansas governor (Bill Clinton), who had lost the first primaries, could go up against a wildly popular president who had just won the first Gulf War, whom the Democrats were despairing about running against, and go on to win the presidency.

The Taurus, wildly popular in its time, was ridiculed as a “jelly bean” in focus groups. The VW Bug, as well as its revived version decades later, was also ridiculed, but found its niche among buyers who probably weren’t well represented in the surveys and focus groups. Respondents loved the Edsel and New Coke.

The Oracles are frauds. Predicting is a con game. Historians are fiction writers. Stock pickers are just racetrack touts. Forecasting is only on target by chance. Get it?

Sometimes, you just have to refine your guesses by marketing research, then put them out into the marketplace and let reality decide.

The main lesson: Clues are clues. Reality is reality. Sometimes they coincide. Sometimes… You get the picture.

Things are not always as simple as they seem in surveys

The New Hampshire Primary was a cautionary marketing tale. It shows us why surveys (polls) can be very misleading.

The polls missed the Clinton victory by a mile yesterday, yet they were right on target with the McCain victory. Why?

No one knows exactly what happened for sure yet, but several possibilities illustrate some of the pitfalls in marketing research.

First of all, there is the assumption that you can believe people when you ask them what they have decided to do in matters of simple choice like whether they have a preference for Coke or Pepsi, or which candidate they favor. In most cases, this is a reasonable assumption, and most of the time, polls are accurate.

Other times, what looks like a simple question is not. It turns out that in the minds of New Hampshire voters, McCain versus Romney was one of these simple questions, and the polls got it right, both on preference and margin.

On Clinton vs. Obama, not so simple, on many grounds.

Here are some of the possibilities that are yet to be investigated and quite possibly never definitively determined.

(1) What if people’s minds are not made up? Not a problem, at least as far as taking a snapshot goes. While that would make prediction difficult, these people would have turned up as a large number of undecideds. This is not what happened. So, the pundits say, the people didn’t make up their minds at the last moment. But what if they were undecided, but didn’t know it? What if they thought they favored one candidate over the other, but this was a weakly held preference and they were easily swayed by last-minute remarks that they heard on the radio or from their friends on the way to the voting booth?

So, when people hold an opinion, the strength of the opinion is just as important as the opinion itself. Sometimes opinions are held so weakly that they might as well not be opinions at all. But that’s not the way it’s experienced by the person. In the absence of a challenge, it’s often experienced as an opinion that is pretty firmly held. So it is of no use to ask the person in a survey, “on a scale of 1 to 10, how strongly do you hold that opinion?” It’s equally nonsensical to ask people what it would take to change their minds.

So, the first possibility is that they changed their minds at the last moment, perhaps even in the voting booth, but did not know until that moment that their previous choice was weakly held.

(2) People may in fact have held a very strong belief, but changed their minds quickly and decisively when they saw, for instance, Hillary Clinton become choked up the day before the voting. This may have happened too close to the voting for the polls to have picked it up. But I don’t think so, because it wasn’t picked up in the exit polls either.

(3) Here is the most intriguing one for me: What if racism isn’t dead? Well, of course racism isn’t dead. It didn’t evaporate just because Obama isn’t running as a black candidate. In that case, many people (it would only have to be about 10% to account for the data) might not want to tell a pollster that they were not voting for Obama, even though they thought that Obama was the best candidate. They might be feeling guilty, or they might expect disapproval, or they might just experience a vague sense of unease about Obama. Or, they may be worried about being perceived as racists even though they are not. So, when asked, they blurt out “Obama,” and maybe even mean it at the moment. But in the privacy of the voting booth, that vague sense of unease (which, I suspect, is the main way that racism is experienced among people who are just mildly racist, especially those who are ashamed of it or unaware of it) rears its annoying little head and causes a private little finger twitch that never gets reported.
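The arithmetic behind that “about 10%” remark can be sketched quickly. The poll numbers below are hypothetical, chosen purely for illustration; they are not the actual 2008 New Hampshire figures:

```python
# Hypothetical two-candidate poll: if even a modest share of stated
# Obama supporters quietly vote the other way, a clear polling lead flips.

def apply_quiet_switch(obama_poll, clinton_poll, switch_rate):
    """Shift switch_rate of Obama's stated support over to Clinton."""
    switched = obama_poll * switch_rate
    return obama_poll - switched, clinton_poll + switched

obama, clinton = apply_quiet_switch(42.0, 38.0, 0.10)  # 10% switch quietly
print(f"Obama {obama:.1f}% vs. Clinton {clinton:.1f}%")  # 37.8% vs. 42.2%
```

A four-point Obama lead in the poll becomes a four-point Clinton win at the ballot box, without a single respondent ever telling a pollster anything different.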

This is a well-known and well-documented effect called the Bradley Effect, or the Wilder Effect: black candidates often poll with stronger support than they ultimately show in the voting booth. Even exit polls often say that the politically correct person won, but when the votes are counted, the politically correct choice is not the one elected.

It will be interesting to see what happens in the primaries versus the caucuses. For instance, in the Iowa caucus, people had to stand up in front of their peers openly and declare their allegiance, unlike in the New Hampshire secret ballot.

So the marketing lessons here are:

  • “Take a survey [poll]” isn’t always the answer.
  • Simple questions are often not.
  • Asking questions gets answers, but not necessarily the truth.
  • People don’t always know what they believe or how strongly they believe it.
  • People often have beliefs that they don’t know they have.
  • People can’t even predict their own behavior.
  • People often say things based upon what they think you want to hear.
  • Distrust after-the-fact explanations from pundits, including me.

What I am really saying here is that things are not as simple as they seem. If you have a product with any degree of controversy, you are navigating a minefield when you try to assess public opinion. Even professionals don’t always know what they are doing.