Change One Thing to Better Evaluate New Product Concepts

How do people decide whether or not they will buy a specific product? Think about the last time, maybe in pre-COVID-19 life, that you were deciding whether or not to buy something. What was the process? Chances are, depending on the type of product and how big a decision it was, that you researched it a bit, maybe checked out reviews on Amazon, maybe compared it to other similar products, on the shelf or online.

Remember those record stores? They made choosing fun. Special thanks to Esther Driehaus for putting her photo on Unsplash.

Now compare that process to how you probably test new product concepts. In my experience, most people show the concept in some form to respondents, and among the questions asked is a purchase intent question--"How likely are you to purchase this product?"--in more or less those words, with choices like definitely will, probably will, and so on. Have you ever compared that to how you actually make decisions about what product or which brand to purchase? Let's set aside for a moment the issue of people being really poor at predicting their future behavior--including what they will or won't buy. Think about the choice you are asking them to make with that question compared to how they will more likely make that choice in the real world. How congruent are the two? It is my fervent belief that this is the number one mistake we make in evaluating concepts. We ask the survey respondent to make a choice that they won't make in the real world. We ask them to make a product choice in isolation when, eight times out of ten, they will make that decision in a comparative context. This leads to results that can steer us in the wrong direction.

Fine and dandy, you say, but who does this guy think he is to dismiss decades of practice and normative databases? Just an insights rebel and a rogue, I suppose. (Especially now that I haven't had a haircut in way too long. Definitely getting that roguish look going.)

Wearing a roguish mask and supporting a roguish team with the cap. It's hard to see the bad haircut with the cap. And that's the point.

We can borrow ideas from the behavioral economists who study choices all the time, as well as from some deep thinkers in the research industry over the past decades. There are, I am confident, multiple ways to make research fit the comparative purchase model. One that I have used for decades is relatively simple, yet effective. It is based on a method we used to test concepts and make decisions about pricing, advertising, product feature sets, and other issues in the new product development process when I was working at Hewlett-Packard.

Show the respondent the relevant options: your concept alongside the primary competitive choices. Back then we used a paper "catalog" of products. Nowadays you can simply expose the concept, in randomized order, along with the products of two or three competitors, using a similar format to describe each concept/product. Then, rather than asking how likely the respondent would be to buy your concept, ask which of the choices they have seen they would purchase. How you do this can vary to your liking. I typically use a constant sum question and ask them to allocate 10 points across the choices, and I make sure to include an "I would buy none of these" choice and/or an other/fill-in-the-blank.

Over the years I have compared this approach with the standard purchase intent question and found that standard purchase intent typically overstates likelihood to purchase quite significantly, whereas selecting from among competitors tends to yield a more grounded, more realistic number. For example, for a new floor mop concept, we asked the question both ways. On initial exposure to the concept, and for each competitive choice, we asked the traditional purchase intent question. Then, after respondents had viewed all the brands, a constant sum question asked for their likely purchase choices. Based on the traditional purchase intent question alone, we could surmise that the consumer would end up buying, on average, 3.7 floor mops.
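To make the mechanics of the comparative question concrete, here is a minimal sketch in Python of how a 10-point constant sum allocation might be tabulated into preference shares. The option names, respondents, and point values are all invented for illustration; any survey platform or spreadsheet would do the same arithmetic.

```python
# Sketch of tabulating a constant sum question: each respondent allocates
# 10 points across the new concept, two competitors, and "none of these".
# All respondents and point allocations below are invented for illustration.

responses = [
    {"Concept": 5, "Brand A": 3, "Brand B": 2, "None of these": 0},
    {"Concept": 0, "Brand A": 6, "Brand B": 4, "None of these": 0},
    {"Concept": 3, "Brand A": 3, "Brand B": 2, "None of these": 2},
    {"Concept": 0, "Brand A": 0, "Brand B": 0, "None of these": 10},
]

options = list(responses[0])
totals = {opt: sum(r[opt] for r in responses) for opt in options}
grand_total = sum(totals.values())  # 10 points per respondent

# Share of preference: each option's points as a fraction of all points.
shares = {opt: totals[opt] / grand_total for opt in options}
for opt in options:
    print(f"{opt}: {shares[opt]:.0%}")  # e.g. "Concept: 20%"
```

Note that the shares are forced to sum to 100%, which is exactly what makes the question comparative: a point given to your concept is a point taken from a competitor, just as a dollar spent on one mop is a dollar not spent on another.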
Take a look in your cleaning storage room or closet. How many floor mops do you own? (I have to admit, I have two. But one was inherited after a study for this same client.) Most households need only one, or maybe two. But almost four? This was definitely a case of making each purchase decision in isolation, combined with that inherent American positivity and optimism: yeah, sure, I like this mop, so I would definitely (or probably) buy it. Sure. And while we are at it, let's do lunch. The consumers weren't lying. But they were predicting their future within the context our question set. And, in their defense, we were asking the wrong question. The new concept scored a top two box likelihood to purchase somewhere in the sixty percent or higher range, comparable to the competing brands.
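The 3.7-mop arithmetic is easy to reproduce: if each brand's top two box score is read at face value as an independent purchase prediction, the scores simply add up. A small sketch, with hypothetical per-brand rates chosen so that they sum to the 3.7 figure from the study:

```python
# Hypothetical top-two-box rates ("definitely/probably would buy"), asked
# in isolation for the new concept and each competing mop. The individual
# rates are invented; only their 3.7 total matches the study described above.

top_two_box = {
    "New concept": 0.65,
    "Brand A": 0.80,
    "Brand B": 0.70,
    "Brand C": 0.75,
    "Brand D": 0.80,
}

# Taken at face value, each rate predicts a purchase independently, so the
# implied number of mops bought per household is just the sum of the rates.
implied_mops = sum(top_two_box.values())
print(f"Implied mops per household: {implied_mops:.1f}")  # → 3.7
```

This is the tell to look for in your own data: when per-concept purchase intents, summed across a category, imply more purchases than any household could plausibly make, the question was asked in isolation when the decision is really comparative.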

Maybe we just surveyed members of the national mop team, who always keep a number of mops on hand for training purposes. (I don't think so.)

When respondents were asked their purchase preferences among the brands--when the question was made comparative, as it would likely be in real life--the new concept garnered fewer votes than its standard purchase intent top two box score would suggest. But, comparatively, it showed that the new concept could be expected to reap roughly as many buyers as the leading brand in the category.

This was a positive case. I have also seen a cleaning product where the top two box purchase intent was out of this world, above 90%, and then when it got on store shelves it sold disappointingly poorly. In isolation, it was the rational and sure choice. In a comparative situation, other options were more appealing. That said, there are products and situations where purchase decisions are made in isolation. Products featured in an infomercial often reflect buy/don't buy decisions made in isolation. Products offered via other direct-to-consumer channels, such as email, are sometimes likewise evaluated in isolation. Or the decision is made in isolation because store shelves are empty, as we have seen far too often in this time of panic buying and hoarding. Next time you are preparing to test a new product concept, I would suggest that the first question to ask is how potential buyers will likely evaluate the product in real life--in isolation, or by comparing options? And if you aren't sure, lean towards the comparative. Then design the research to reflect that decision.