A Practical Guide to Forecasting New Products - YouTube


Imagine you have been given a new task at work: you have been asked to forecast sales of a new product that your engineering team has been working on for the past 18 months. Most of us who have lived in the world of corporate market research have been there: come up with an accurate forecast for the new product. Hi, my name is Miklos Kremser. I am the Founder and Principal of Choice Based Market Insights, and we specialize in helping companies navigate the challenging world of new product forecasting.
If you say that the challenge of forecasting new products does not cause you heartburn or other stress-related symptoms, I beg to differ. Forecasting new product launches is stressful. Most of us know that historically, most new product launches are considered a failure: 75% of consumer packaged goods and retail products fail to earn $7.5 million during their first year, which is often the measuring stick for success. To make it more stressful, there is a myriad of unknowns. Is distribution going to be sluggish? Is the budget going to be adequate? Is there a precise understanding of the target market? In an article published in the Harvard Business Review, Joan Schneider and Julie Hall listed 40 different factors that heavily influence the success of a product launch.
To add to your heartburn, there is usually significant internal pressure to provide an optimistic forecast. Think about it: your engineering team has just spent 18 months designing and producing a new product, wholeheartedly believing that this product will disrupt the industry as we know it. By the time they task you with providing a forecast, it's likely the company has already spent millions in man-hours and materials. Right now, the only thing that stands between them and eternal engineering glory is you. So hurry up!
And hurrying up is exactly what you should NOT do. In this video, I am going to give you a framework for new product forecasting. I will give you some tools, steps and some equations, all of which will give you more confidence in providing more accurate forecasts.
The very first thing, however, that you will have to do is ask the right questions. Is there a market for this product? Because if the answer is no, or not very clear, then the recommendation must be a swift death. The second question then must be: if yes, how big? Again, if the market is not big enough, the project must be killed, and you must warn your organization, no matter how unpopular you may become. Only if there are signs of a big enough market can you proceed with the actual forecasting. Measuring market appeal and forecasting sales are two completely distinct phases in this process, and you should never skip the first.
First, I am going to give you some tools for the first phase: Measuring Market Appeal. Measuring market appeal cannot happen without tapping into the market and actually asking the market about it. In other words: market research.
Long gone are the days of asking survey respondents "How likely are you to purchase this product?" These are stated likelihoods, which tend to be biased. For example, if you end up with a result of 43% "Somewhat likely," can you turn that into a market size? What does "somewhat likely" mean? There is a much more useful approach, in which we calculate market appeal, or potential market size, by making survey respondents choose among product alternatives; by analyzing the choice patterns, we can set up a statistical model. This is the choice-based conjoint methodology, and there are many scientific publications that find these methods to be less biased and more accurate.
Here is how it works. First, you set up a survey-based choice experiment that respondents participate in. Make sure all the relevant product attributes are included in the experiment. Then, based on the respondents' choice patterns, you create a statistical model that you can use in a market simulation to measure the appeal of the potential product. That was very high level, so let me dive a bit deeper.
What does this choice experiment look like? Well, let's imagine a product has three important attributes. For example, these important attributes may be flavor, size and price, or any other relevant attributes. For simplicity's sake, let's pretend these three important attributes are: triangle, square and circle. Each of these attributes can have different levels: in our case, let's pretend the triangle can be yellow, blue, orange or green. In the choice exercise we use these attribute-level combinations, present seemingly random configurations as products to respondents, and ask them to select the one product they would be most likely to purchase. Although it's best to strive for realistic combinations of these attributes, most often combinations can be as random as our ability to suspend disbelief allows. After the respondent selects an option, these options go away, another seemingly random combination of options comes up, and the respondent is again asked to make a choice. Remember, this is not a product concept test, so it may not even be necessary at this phase to show the actual product you're looking to forecast. This may come as a shock to some, but more on that later.
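To make the mechanics concrete, here is a minimal sketch of how such seemingly random choice tasks could be assembled. The attribute names and levels below are invented for illustration; a real study would draw the screens from a balanced experimental design rather than pure random sampling.

```python
import itertools
import random

# Invented attributes and levels, standing in for the shapes in the example
attributes = {
    "flavor": ["vanilla", "chocolate", "strawberry"],
    "size": ["small", "medium", "large"],
    "price": ["$2", "$3", "$4"],
}

# Every possible product configuration is one combination of attribute levels
all_profiles = [dict(zip(attributes, combo))
                for combo in itertools.product(*attributes.values())]

def make_choice_task(n_options=3):
    """One screen of the choice exercise: a few distinct product profiles."""
    return random.sample(all_profiles, n_options)

task = make_choice_task()  # respondent picks the one they'd most likely buy
```

In a real conjoint study, a design algorithm would ensure each level appears equally often across screens; the random draw above is only meant to show the shape of the data.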

The purpose of the choice experiment is to use the hundreds, often thousands, of choices respondents made among seemingly random product configurations to calculate the "value," or utility, of every one of the attribute levels. This is where the statistical model gets estimated: a multinomial logistic regression model, often fit using a hierarchical Bayesian technique. Now that we have the model, we can use these utilities to simulate what the market looks like when our product is in it.
In this illustration, we imagine that the market is made up of three players: Product 1, Product 2 and Product 3. Using these attribute-level utilities (also known as "part-worth utilities") we can calculate each product's "total utility," or total attractiveness. By exponentiating these total utilities and normalizing (each product's share is its exponentiated total utility divided by the sum across all products), we can calculate a Share of Preference for each product. What does Share of Preference mean? It means that in our choice experiment, if we had shown these three products to the respondents, 87 out of 100 people would have chosen Product 1, 1% would have chosen Product 2, and 12% would have picked Product 3.

If we assume that our sample of respondents was representative, that all three of these products are available in the market, and that everyone is aware of all these products, then the Share of Preference should be very close to a market share. Therefore, in our first phase, where we only want to find out whether or not there is a market for your product, a share of preference can tell you: in a perfect world, with perfect awareness and distribution, this is the share of customers that would pick your product out of the products in the market.
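As a sketch of the share-of-preference arithmetic: suppose the simulator produced the total utilities below (invented numbers, chosen to land near the 87% / 1% / 12% split in the illustration). Each product's share is its exponentiated total utility divided by the sum of all the exponentiated utilities.

```python
import math

# Hypothetical total utilities for the three products in the illustration
total_utility = {"Product 1": 2.0, "Product 2": -2.5, "Product 3": 0.0}

# Exponentiate each total utility, then normalize to get Share of Preference
exp_u = {name: math.exp(u) for name, u in total_utility.items()}
denom = sum(exp_u.values())
share = {name: e / denom for name, e in exp_u.items()}
# share is roughly {"Product 1": 0.87, "Product 2": 0.01, "Product 3": 0.12}
```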
This is Phase 1: is there, or is there not, market appeal? There are quite a few steps to accomplishing Phase 1: choosing the right attributes and levels as we plan the choice task; creating an experimental design that is balanced and will allow for a robust model; programming the choice task into a survey; fielding it with a representative sample; estimating an accurate statistical model; and finally assessing the market using a simulator. The image here shows a sample simulator to assess market appeal for online trading services. In this illustrative example, we assumed the market is made up of four key players with differing attributes in their offerings. Just a side note that, for confidentiality purposes, this example does not contain real data, so no need to pause the video and scribble down the most appealing levels.
So, let's get to Phase 2 now: the forecasting part. The result of Phase 1 is an estimate of the pool of customers who would pick your product over competitors. But we also had to assume 100% awareness and distribution, which is unrealistic. In the real world, awareness and distribution take a long time to build, and we need to incorporate this into our forecast. The question you need to ask as you head into Phase 2 is: "What is the path to reaching the product's potential?"

In order to answer that, I will now take a theoretical detour and discuss two common data distributions: the S-curve and its cousin, the normal distribution. First, let's talk about the S-curve. Those knowledgeable about statistics may be familiar with the S-curve, especially in the context of logistic regression. However, when it comes to forecasting, that is NOT the context I would like you to think about. The S-curve is the cumulative distribution of the normal distribution. For example, one curve shows how intense the sun is during the day, hour by hour, while the other shows the cumulative exposure to the sun: the total sun rays I would absorb if I sat under that sun all day. Now, the true distribution of sun intensity looks a little different, but the point is still the same: the cumulative distribution of the normal curve looks like an S-curve.
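This S-shape can be reproduced directly from Python's standard library: the cumulative distribution of a normal curve is available through the error function. The standard-normal parameters below are just for illustration.

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution of a normal curve - the S-curve itself."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Sampling it shows the S-shape: a slow ramp-up, a steep rise near the
# mean, then a flattening out
xs = [i / 2 for i in range(-8, 9)]   # -4.0, -3.5, ..., 4.0
ys = [normal_cdf(x) for x in xs]
```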
The normal distribution, as a function of time, tends to reflect a period of beginning (or ramp-up), a period of intense activity (the peak) and a period of declining activity. Interestingly, the normal distribution and its cousin the S-curve are all around us. Here we see the growth of a sunflower plant, and we can clearly notice a slow ramp-up at the beginning, a period of intense growth, and a slowing down after two and a half months. Children learning vocabulary follow the cumulative normal distribution. And artists' productivity follows the cumulative normal distribution: here are the cumulative works of Mozart, even though Mozart died young. The last dot, the one above the line, is the year Mozart died; already suspecting his death, he worked around the clock.
So what does this detour on the normal and S curves have to do with forecasting? Everett Rogers hypothesized that new products get adopted according to a normal distribution that is a function of time: the first adopters, the innovators, are the first 2.5% of adopters, who then influence the next 13.5%, called the early adopters, after which intense growth happens as the early majority adopts, which is called "crossing the chasm" or the "tipping point." Rogers' curve has been used for forecasting purposes thousands of times since the theory was published; however, there are several limitations.

First, the model is based on a distribution around the mean time of adoption, which you really don't know until the whole adoption process is completed. In other words, you don't know if someone is an early adopter until we've accounted for the early majority, the late majority and the laggards, and only then can we designate which phase happened at which time.

Second, Rogers' curve assumes that innovators can only adopt at the very beginning of the process. Empirical evidence, however, shows that innovators can adopt a product at any phase of the diffusion, albeit to a lesser degree.

An objectively superior forecasting approach was developed by Frank Bass, in which he attributed product adoption to two main influences:

1. One that is due to external influences, such as advertising or believing in the cause of the product. These obviously have a larger impact on adoption at the beginning, and they tend to diminish with time.
2. And one that is due to internal influences, such as word of mouth, peer pressure, etc.

Looking at the cumulative adoption curve, the equation to estimate cumulative sales has three unknowns:
- m – the size of the total market
- p – the coefficient of innovation – how big is the effect of external influencers?
- q – the coefficient of imitation – how big is the effect of internal influencers?
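The equation itself is shown on screen rather than in the transcript, but the standard closed-form Bass solution for the cumulative adoption fraction is F(t) = (1 - e^(-(p+q)t)) / (1 + (q/p) e^(-(p+q)t)), with cumulative sales m * F(t). A minimal sketch, where the market size of 100,000 is an invented placeholder:

```python
import math

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions by time t under the standard Bass model.

    m: size of the total market
    p: coefficient of innovation (external influence)
    q: coefficient of imitation (internal influence)
    """
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

m = 100_000  # hypothetical market potential from Phase 1
curve = [bass_cumulative(t, m, p=0.03, q=0.38) for t in range(21)]
```

With the typical p = 0.03 and q = 0.38 used here, the curve starts at zero, rises in the familiar S-shape, and approaches, but never quite reaches, the Phase 1 potential m.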
Well, we know from our Phase 1 activities what the market potential for our product is; in fact, that is the whole reason we conducted Phase 1. But what numbers do we use for p and q? Here, I wish there were a handy technique that would calculate the most appropriate p and q. Instead, what we have are standards and guidelines. A typical p value is around 0.03, rarely if ever above 0.04, and often less than 0.01. A typical q value, however, is around 0.38, with a range between 0.3 and 0.5. These figures are based on numerous examples from different industries.
Finding the right figures does make a difference. The blue curve shows the uptake using the typical q of 0.38 and the typical p of 0.03. However, changing q to the upper limit of 0.5 creates the orange line, which reaches a given sales level many years sooner. Notice, however, that swings in the p value make less of a difference in the resulting curve.

So when it comes to choosing the p and q values, here are some suggestions:
- Take a look at historical product launches within your organization, and try to fit a curve to them by adjusting the levels of p and q.
- If there isn't much history, start out with the typical values.
- It is always smart, and helps save on heartburn medication, to create both conservative and liberal forecasts.
- And finally, keep track of sales and use the equation to keep adjusting your p's and q's. What you'll find is that you eventually lock in on figures you feel good about.
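The first suggestion, fitting a curve to a past launch, can be sketched with a plain grid search over the guideline ranges. The "observed" cumulative sales below are invented for illustration; in practice you would plug in your own launch history.

```python
import math

def bass_cumulative(t, m, p, q):
    """Standard Bass model: cumulative adoptions by time t."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Invented cumulative sales for the first five periods of a past launch
observed = [0, 3576, 8506, 15050, 23315, 33120]
m = 100_000  # market potential, e.g. from a Phase 1 study

# Grid search over the plausible p and q ranges discussed above,
# minimizing the sum of squared errors against history
best = None
for p in [i / 1000 for i in range(5, 51, 5)]:     # p: 0.005 .. 0.050
    for q in [i / 100 for i in range(30, 51)]:    # q: 0.30 .. 0.50
        sse = sum((bass_cumulative(t, m, p, q) - y) ** 2
                  for t, y in enumerate(observed))
        if best is None or sse < best[0]:
            best = (sse, p, q)

sse, p_hat, q_hat = best  # the (p, q) pair that best fits history
```

As new sales periods come in, you simply re-run the fit with the longer history, which is exactly the "keep adjusting your p's and q's" loop described above.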
Hopefully, with all this information, you now feel more confident in helping your organization approach new product launches in a scientifically sound and robust way: first quantifying the market appeal for your product, and then using the Bass formula to quantify the path to reaching that market potential. I hope this was helpful, and I hope you'll reach out to me if you have any further questions.