Demand Forecasting New Products: Triangulated Methods, P10/P50/P90, and a Worked Example
Demand forecasting new products is hard because there is no sales history, yet the business needs credible numbers for inventory, launch plans, and cash flow. This guide gives you a practical process to forecast demand for new products before and after launch, including methods, data sources, and a worked example.
What makes new product demand forecasting different
Sources of uncertainty and risk
Zero or limited history creates structural uncertainty, not just noise.
Early signals (preorders, surveys, ads) are biased and often non-representative.
Marketing, price, and distribution are still moving targets.
Cannibalization and competitive responses can change category share.
Plan for wider error bands and make the uncertainty explicit in your forecast.
Use cases across product, supply, sales, finance
Product: feature prioritization, MVP scope, roadmap gates.
Supply chain: capacity, materials procurement, safety stock.
Sales: launch targets, incentives, channel allocations.
Finance: revenue plans, cash conversion, working capital.
Define decisions that depend on the forecast, then set the needed granularity and cadence.
Forecast horizons and granularity (SKU, channel, region)
Horizon: at least 13 weeks for near-term operations; 6–18 months for S&OP.
Granularity: SKU or option-level if variants, by channel and major region.
Time buckets: weekly for first 8–12 weeks; monthly thereafter.
Granularity should match lead times and where decisions are made (e.g., factory vs DC vs store).
Key outcomes the forecast must support
A triangulated P10/P50/P90 demand forecast with named drivers.
Channel- and region-level sell-in and sell-through plans.
Inventory and capacity plan linked to service targets and cash.
Governance for reforecasting and change control.
Choose the right approach: a method selection framework
When to use qualitative vs quantitative methods
Qualitative: early concept phase, minimal data; use expert panels, internal benchmarks, and customer interviews.
Quantitative: when you have analog SKUs, survey/choice data, or market signals; use structured models and experiments.
Best practice: combine both; use qualitative to set priors and quantitative to size and calibrate.
Analog method vs market research vs diffusion vs ML
Analog method: scale an existing product’s launch curve for price, positioning, and distribution; fast and interpretable.
Market research: conjoint/discrete choice to estimate trial, repeat, and price elasticity; adds defensibility.
Diffusion models: Bass or Gompertz to describe adoption shape; good for consumer durables or innovations.
ML/ensemble: blend signals (search, ads, social, waitlists) with analog and seasonality; useful post-launch for nowcasting.
Use the simplest model that explains the decision, then layer complexity as signals arrive.
Data availability matrix and decision tree
If close analogs exist and go-to-market is similar: start with analog scaling.
If no analogs but target segment is clear: run choice modeling or intent surveys.
If network effects or innovative category: use diffusion model guided by expert priors.
If launch is digital-first with strong prelaunch ads: use experiment-driven calibration.
Post-launch weeks 1–4: switch to signal-weighted nowcasting; re-estimate parameters weekly.
Triangulation and weighting logic
Start with three legs: Analog (A), Research (R), Diffusion/Model (D).
Assign initial weights based on reliability: e.g., A 40%, R 40%, D 20%.
Reweight as evidence arrives: increase weight for the method that best explains early sales (FVA gain) and reduces bias.
Produce P10/P50/P90 by perturbing key drivers (distribution, conversion, price response) and using historical error.
Pre-launch data you can actually use
Internal signals: preorders, waitlists, CRM, expert input
Preorders/waitlists: adjust for conversion drop-off (e.g., 20–60% depending on deposit).
CRM segments: estimate awareness, reach, and repeat among existing customers.
Expert input: structured elicitation with ranges and rationales, not point guesses.
Document assumptions and sampling frames for each signal.
External signals: search trends, social, panels, marketplace data
Search interest: measure branded and generic query growth; normalize vs category.
Social engagement: baseline vs lift after creative tests; watch for influencer spikes.
Panel/marketplace data: category size, growth, seasonality, competitive pricing.
Compare patterns to analog launches to set priors on launch curve shape.
Retailer feedback and distribution commitments
Capture store count by week, planogram depth, and online availability dates.
Note retailer pipeline expectations and minimum order quantities.
Record launch windows, promotional slots, and shelf resets.
Distribution is often the biggest driver of early volumes.
Cleaning, bias checks, and representativeness
Remove duplicates, bots, and internal traffic from waitlists and landing pages.
Check demographics and geos vs target market; reweight if skewed.
Apply conservative conversion factors for signals captured in high-intent contexts.
Keep a log of adjustments so your post-mortem can refine them.
Build a triangulated baseline forecast
Construct an analog curve and scaling factors
Select 1–3 analog products with similar price, category, and channel mix.
Extract their launch curves (weekly sales for first 26 weeks).
Apply scaling:
Price/positioning multiplier (e.g., −20% volume for a premium price point).
Distribution coverage by week vs analog.
Seasonality alignment to your launch month.
Marketing spend and reach relative to analog.
Use the average of scaled analogs as Analog Forecast A(t).
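As a sketch, the scaling steps above might look like the following; the analog curve, multipliers, and distribution ramp are all illustrative assumptions, not figures from any real launch:

```python
# Scale an analog launch curve into A(t). The analog curve, multipliers,
# and distribution ramp below are illustrative assumptions.
analog_weekly = [900, 1400, 1300, 1100, 1000, 900, 800, 700]  # analog units, weeks 1-8

price_mult = 0.85      # premium price vs the analog: -15% volume
marketing_mult = 1.10  # +10% for higher launch spend
dist_ramp = [0.80, 0.90, 0.95, 1.00, 1.00, 1.00, 1.00, 1.00]  # coverage vs analog

A = [round(u * price_mult * marketing_mult * d)
     for u, d in zip(analog_weekly, dist_ramp)]
print(A, sum(A))
```

With several analogs, compute one scaled curve per analog and average them week by week.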
Conjoint/market research to estimate trial and repeat
Run choice modeling on target buyers; estimate:
Trial rate at planned price and feature set.
Share of preference vs top alternatives.
Willingness-to-pay and cross-price elasticities.
Convert to units:
Trial units = Addressable audience × Awareness × Consideration × Trial rate × Units per trial.
Repeat units = Trial units × Repeat rate × (Time horizon ÷ Interpurchase cycle).
This yields Research Forecast R(t) with trial/repeat dynamics.
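A minimal sketch of the funnel arithmetic; every input below is an illustrative assumption, not a number from the worked example later in this guide:

```python
# Trial/repeat funnel sketch. All inputs are illustrative assumptions.
audience = 200_000
awareness, consideration, trial = 0.25, 0.35, 0.10
units_per_trial = 1.0

trial_units = audience * awareness * consideration * trial * units_per_trial
# Repeat volume: trial units x repeat rate x purchase cycles within the horizon
repeat_rate = 0.30
horizon_weeks, interpurchase_weeks = 26, 13
repeat_units = trial_units * repeat_rate * (horizon_weeks / interpurchase_weeks)
print(round(trial_units), round(repeat_units))
```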
Diffusion models (Bass/Gompertz) with parameter tips
Bass model parameters:
p (innovation): higher with heavy launch marketing and seeding.
q (imitation): higher with word-of-mouth, reviews, and network effects.
Calibrate p/q using analogs and category history; apply constraints so cumulative adoption does not exceed addressable market.
Gompertz is useful for asymmetric S-curves (slow start, faster middle).
Use diffusion to shape adoption timing; tie scale to research or analog totals.
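A discrete-time Bass sketch makes the shaping role concrete; p, q, and M below are illustrative assumptions, and in practice you would calibrate them against analog launches and category history:

```python
# Discrete-time Bass adoption sketch. p, q, and M are illustrative assumptions.
p, q, M = 0.04, 0.35, 60_000   # innovation, imitation, addressable market
periods = 26
cum = 0.0
weekly = []
for _ in range(periods):
    # Adopters this period: (p + q * penetration) applied to the remaining market
    adopt = (p + q * cum / M) * (M - cum)
    weekly.append(adopt)
    cum += adopt
print(round(weekly[0]), round(cum))
```

Note the built-in constraint: because each period's adoption is a fraction of the remaining market, cumulative adoption can never exceed M.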
Combine methods and produce P10/P50/P90 scenarios
Weighted baseline: F(t) = wA·A(t) + wR·R(t) + wD·D(t).
Uncertainty drivers:
Distribution on-time vs delayed.
Conversion rates and media effectiveness.
Price elasticity and promo lift.
Build scenarios:
P90: optimistic drivers (early distribution, higher conversion).
P50: expected drivers.
P10: conservative drivers (delays, lower conversion).
Quantify by running a simple Monte Carlo (or discrete scenario grid) over key inputs and reading percentiles.
Report ranges, not single numbers.
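The blend-and-perturb logic above can be sketched with a small Monte Carlo; the method curves, weights, and driver ranges are all illustrative assumptions:

```python
import random

# Blend three 8-week method forecasts, then derive P10/P50/P90 by perturbing
# key drivers. Curves, weights, and driver ranges are illustrative assumptions.
random.seed(7)
A = [600, 1100, 1100, 1000, 900, 800, 700, 600]   # analog
R = [700, 1200, 1150, 1050, 950, 850, 750, 650]   # research
D = [500, 1000, 1100, 1100, 1000, 900, 800, 700]  # diffusion
wA, wR, wD = 0.35, 0.40, 0.25

base = [wA * a + wR * r + wD * d for a, r, d in zip(A, R, D)]

totals = []
for _ in range(5_000):
    dist = random.uniform(0.85, 1.15)   # distribution on time vs delayed
    conv = random.uniform(0.80, 1.20)   # conversion / media effectiveness
    totals.append(sum(base) * dist * conv)

totals.sort()
p10, p50, p90 = (totals[int(len(totals) * q)] for q in (0.10, 0.50, 0.90))
print(round(p10), round(p50), round(p90))
```

A discrete scenario grid (low/mid/high per driver) gives similar ranges with fewer moving parts when stakeholders distrust simulation.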
Adjust for price, promo, seasonality, and distribution
Price: apply own-price elasticity from research; cap at plausible ranges.
Promo: add lift factors by promo type and depth; include pull-forward effects.
Seasonality: multiply by category index; confirm with analog seasonality.
Distribution: apply weekly weighted distribution and on-shelf availability; split ecommerce vs retail.
Document every adjustment and its source.
Model portfolio effects and cannibalization
Identify at-risk SKUs and overlaps
Map features, price tiers, and use cases across the portfolio.
Flag SKUs within adjacent price bands and similar benefits.
Include accessories or bundles that might be upsold or displaced.
Estimate substitution and cross-price elasticity
Use conjoint cross-utilities or historical elasticity matrices.
Start with a substitution matrix S whose entries give the share of the new product’s volume sourced from each incumbent.
Apply cross-price effects for planned price moves or promos.
Quantify both volume shift and margin impact.
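A minimal sketch of applying substitution shares to a new-product forecast; the SKU names, shares, and volumes are illustrative assumptions:

```python
# Apply substitution shares to estimate cannibalized volume.
# SKU names, shares, and volumes are illustrative assumptions.
new_units = 8_000   # forecast for the new SKU
# Share of new-product volume sourced from each incumbent; the remainder
# is category expansion (new buyers/occasions).
substitution = {"SKU_A": 0.20, "SKU_B": 0.10, "competitors": 0.35}

cannibalized = {k: new_units * s for k, s in substitution.items()}
category_expansion = new_units * (1 - sum(substitution.values()))
net_own_portfolio = new_units - cannibalized["SKU_A"] - cannibalized["SKU_B"]
print(cannibalized, round(category_expansion), round(net_own_portfolio))
```

Margin impact follows by multiplying each shifted volume by the relevant per-unit margin.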
Category growth vs share shift
Decompose new volume into:
Category expansion (new buyers/occasions).
Share shift from competitors and your own SKUs.
Align with marketing’s category growth assumptions to avoid double counting.
Guardrails to avoid double-counting
Ensure total category volume remains plausible given seasonality and macro trends.
Cap cannibalization so no incumbent drops below a feasible baseline unless broader dynamics (e.g., delistings or price moves) justify it.
Reconcile top-down category view with bottom-up SKU projections.
Channel, region, and go-to-market specifics
Ecommerce vs retail dynamics
Ecommerce:
Faster learn-test cycles; demand shaped by search, ads, and reviews.
Inventory pooling reduces stockout risk; watch for fulfillment constraints.
Retail:
Sell-in timing and compliance matter; shelf availability drives sell-through.
Store distribution ramp and merchandising quality vary by banner.
For ecommerce operators, connecting a Shopify inventory forecasting app can help align digital demand with stock, particularly when early reviews move conversion. See the Verve AI Shopify inventory forecasting app for an example of how to integrate channel-level signals.
Sell-in vs sell-through and pipeline inventory
Separate retailer orders (sell-in) from consumer sales (sell-through).
Model pipeline inventory: initial DC fill, store fill, and safety stock.
Tie replenishment to sell-through with vendor-managed or point-of-sale feeds.
This prevents overestimating true demand from front-loaded sell-in.
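A sketch of keeping the two series separate; the weekly figures and the simple replenishment rule are illustrative assumptions:

```python
# Separate retailer sell-in from consumer sell-through so front-loaded
# pipeline fill is not read as demand. All weekly figures are assumptions.
sell_through = [400, 700, 800, 750, 700, 650]   # consumer units by week (POS)
pipeline_fill = [1500, 500, 0, 0, 0, 0]         # DC + store fill, then done

# Sell-in ~= pipeline fill plus replenishment tracking prior-week sell-through
sell_in = [pipeline_fill[0]] + [
    pipeline_fill[w] + sell_through[w - 1] for w in range(1, len(sell_through))
]
cum_gap = sum(sell_in) - sum(sell_through)  # inventory sitting in the channel
print(sell_in, cum_gap)
```

The growing or shrinking gap between cumulative sell-in and sell-through is the pipeline inventory to monitor.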
Rollout curves by region and store count
Build rollout schedules by week:
Number of stores live, average facings, and compliance rate.
Regional ecommerce availability windows.
Apply distribution-weighted sales curves; earlier regions inform later ramps.
B2B vs B2C nuances
B2B:
Longer evaluation cycles, contract-driven orders, lumpy volumes.
Forecast at account or segment level with probability-weighted opportunities.
B2C:
Shorter cycles, media-driven spikes, returns rate matters.
Emphasize early review volume and rating thresholds.
If you operate on WooCommerce, ensure your demand plan reflects plugin-driven traffic and promotions; a WooCommerce inventory forecasting plugin can translate site signals into weekly demand estimates without overfitting to early spikes.
Validate with experiments and signals
Smoke tests and landing page pre-sell
Run landing pages with real pricing and checkout; collect deposits when possible.
Measure click-to-cart, checkout completion, and refund rates.
Calibrate to historical conversion funnels for similar AOV and category.
Geo-lift pilots and test markets
Launch in a few geographies with controlled media and measure incremental sales vs control regions.
Use matched-market tests to estimate true lift independent of macro noise.
Scale cautiously; apply dilution factors for national rollout.
Ad and creative pretests and intent surveys
Test multiple creatives; use response elasticities to forecast spend-to-demand curves.
Combine with intent surveys to estimate conversion by audience.
Signal-to-demand calibration methods
Build a simple regression that maps signal metrics (e.g., brand search, CTR, paid reach) to weekly units for analogs.
Apply coefficients to your prelaunch and early signals, adjusted for media mix.
Blend with other methods using weights based on out-of-sample error.
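The mapping can be as simple as a one-variable least-squares fit on analog launch weeks; the data points and the new signal value below are illustrative assumptions:

```python
# Fit units ~= a + b * brand-search index on analog launch weeks, then apply
# the fit to the new product's signal. Data points are illustrative assumptions.
analog = [(40, 900), (55, 1250), (50, 1150), (45, 1000), (38, 870)]  # (search idx, units)

n = len(analog)
mx = sum(x for x, _ in analog) / n
my = sum(y for _, y in analog) / n
b = sum((x - mx) * (y - my) for x, y in analog) / sum((x - mx) ** 2 for x, _ in analog)
a = my - b * mx

new_search_index = 60
nowcast = a + b * new_search_index
print(round(a), round(b, 1), round(nowcast))
```

With multiple signals (CTR, paid reach), a multivariate regression follows the same idea; keep the feature set small to avoid overfitting a handful of analog weeks.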
For deeper reading on experiments, S&OP, and replenishment, browse our inventory management blog for related playbooks and checklists.
Operationalizing the forecast in S&OP
RACI, cadence, and change control
RACI:
Responsible: Demand planning (modeling and reconciliation).
Accountable: GM/BU lead (signs off P50 and risk range).
Consulted: Sales, Marketing, Supply, Finance.
Informed: Exec team, Customer success.
Cadence:
Pre-launch: weekly reviews from T−8 weeks.
Post-launch: twice weekly in first 4 weeks, then weekly.
Change control:
Version forecasts; log driver changes; require rationale for overrides.
Capacity and supply constraints integration
Translate demand into material and capacity requirements with lead-time offsets.
Decide make-to-stock vs assemble-to-order policies by variant.
Set safety stock using service-level targets and forecast error (initially larger).
KPIs: WAPE, bias, FVA, OTIF impacts
WAPE/MAE: error magnitude by week and channel.
Bias: persistent over/under forecast; track by sign and absolute.
Forecast Value Add (FVA): compare methods and overrides vs naive baseline.
OTIF and lost sales: operational impact of accuracy on customer service.
Tie incentives to process adherence and bias reduction, not just point accuracy.
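These KPIs are straightforward to compute; the actuals, forecast, and naive baseline below are illustrative assumptions:

```python
# WAPE, bias, and FVA vs a naive baseline for one launch window.
# Actuals, forecast, and baseline are illustrative assumptions.
actual   = [500, 900, 1100, 1000, 950]
forecast = [600, 850, 1000, 1050, 900]
naive    = [500] * 5   # e.g., flat last-known value

def wape(f, a):
    return sum(abs(fi - ai) for fi, ai in zip(f, a)) / sum(a)

bias = sum(f - a for f, a in zip(forecast, actual)) / sum(actual)
fva = wape(naive, actual) - wape(forecast, actual)  # positive = model adds value
print(round(wape(forecast, actual), 3), round(bias, 3), round(fva, 3))
```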
Exception management and decision thresholds
Define action thresholds:
If P10 exceeds supply, increase production or expedite.
If bookings lag P10 for 2 consecutive weeks, cut P50 by a defined percent.
Automate alerts when key signals deviate beyond control limits.
Post-launch nowcasting and continuous improvement
Early-week heuristics and leading indicators
Use day-of-week patterns to project the week's total by Wednesday.
Monitor review counts and star ratings; both have a dynamic effect on conversion.
Track stockouts and page speed; correct for lost-sales distortion.
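The Wednesday projection heuristic, sketched with an assumed day-of-week profile (take the profile from analogs or category history):

```python
# Project the full week from Mon-Wed actuals using an assumed day-of-week
# profile. Both the profile and the partial actuals are illustrative.
dow_share = [0.16, 0.15, 0.14, 0.13, 0.13, 0.15, 0.14]  # Mon..Sun, sums to 1.0
actuals_mon_to_wed = [240, 210, 220]

share_so_far = sum(dow_share[:3])            # fraction of the week elapsed
projected_week = sum(actuals_mon_to_wed) / share_so_far
print(round(projected_week))
```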
Variance decomposition and rapid reforecasting
Decompose miss vs plan into:
Distribution variance (stores live, OSA).
Traffic/reach variance.
Conversion/price variance.
Assortment/cannibalization variance.
Update the forecast weekly using the variance drivers, not just a scalar.
Learning loops and model reweighting
Reweight A/R/D methods based on rolling 2–4 week FVA.
Update elasticities and conversion multipliers with observed data.
Archive learnings to refine priors for the next launch.
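One simple reweighting scheme uses inverse recent error as a rough proxy for rolling FVA; the error figures below are illustrative assumptions:

```python
# Reweight the A/R/D methods by inverse recent error (a rough proxy for
# rolling FVA). Error figures are illustrative assumptions.
recent_wape = {"analog": 0.30, "research": 0.20, "diffusion": 0.40}

inv = {m: 1.0 / e for m, e in recent_wape.items()}
total = sum(inv.values())
weights = {m: round(v / total, 2) for m, v in inv.items()}
print(weights)
```

Damp the reweighting (e.g., cap weekly weight moves) so one noisy week cannot swing the blend.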
Dashboard design and alerts
Minimum widgets:
P10/P50/P90 vs actuals, by channel.
Distribution and OSA gauges.
Traffic, conversion, AOV, returns.
Supply position, OTIF, and projected stockouts.
Alert rules on threshold breaches with owner and due date.
Worked example and templates
Sample calculation with small dataset
Context:
Consumer electronic accessory; $49 price; ecommerce + two retail banners.
Target: 10k units in first 8 weeks.
Analog scaling:
Analog A launch curve totals 9k units at $39; scale −15% for price, +10% for higher spend, distribution 80% of analog in week 1 ramping to parity by week 4.
Result A(t): 7.8k units over 8 weeks.
Research:
Addressable audience 500k; awareness 20% at launch; consideration 40%; trial 20%; units per trial 1.05 (500k × 0.20 × 0.40 × 0.20 × 1.05 = 8,400).
Weekly phasing with 40% in first two weeks due to launch media.
Result R(t): 8.4k units over 8 weeks.
Diffusion:
Bass p=0.04, q=0.35; market potential M=60k for first 6 months; yields 8-week D(t)=8.0k.
Triangulated P50:
Weights A 35%, R 40%, D 25% → F = 0.35×7.8k + 0.40×8.4k + 0.25×8.0k ≈ 8.1k.
P10/P90:
Drivers: distribution −/+15%, conversion −/+20%, price elasticity −1.2.
P10: 6.5k; P90: 9.6k.
Split by channel:
Ecommerce 55%, Retail Banner 1 30%, Retail Banner 2 15%.
Apply rollout: Retail stores live 60% in week 1 → 100% by week 3; ecommerce live day 1.
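The triangulation and channel split can be reproduced directly; the method totals, weights, and channel mix come from this section:

```python
# Reproduce the worked example's triangulated P50 and channel split.
A_total, R_total, D_total = 7_800, 8_400, 8_000   # 8-week method totals
wA, wR, wD = 0.35, 0.40, 0.25

p50 = wA * A_total + wR * R_total + wD * D_total
channel_mix = {"ecommerce": 0.55, "retail_1": 0.30, "retail_2": 0.15}
by_channel = {c: round(p50 * s) for c, s in channel_mix.items()}
print(round(p50), by_channel)
```

The retail channels would then be phased by the store rollout (60% live in week 1, 100% by week 3), while ecommerce is fully live from day 1.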
Forecast funnel spreadsheet structure
Tabs:
Inputs: price, promos, media, distribution, elasticities.
Analogs: weekly curves and scaling multipliers.
Research: funnel (awareness → trial → repeat).
Diffusion: parameters and weekly adoption.
Triangulation: weights, P10/P50/P90 scenarios.
Channel split: ecommerce and each retailer with rollout.
Portfolio: cannibalization matrix and net impact.
Supply: lead times, capacity, safety stock.
KPIs: WAPE, bias, FVA, OTIF.
Where helpful, use ready-made calculators from inventory planning tools to speed up scenario building and uncertainty analysis.
If your ecommerce stack is Shopify, consider connecting a Shopify inventory forecasting app to bring POS and traffic signals into your weekly nowcast without manual exports. If you operate on WooCommerce, a WooCommerce inventory forecasting plugin can streamline demand updates from site analytics and orders.
Checklist for assumptions and documentation
Analog selection and reasons; scaling factors with sources.
Research methodology, sample size, and reweighting steps.
Diffusion parameters with analog justification.
Price and promo elasticity assumptions.
Distribution schedules and compliance assumptions.
Cannibalization assumptions and substitution matrix.
Channel mix and rollout plans.
Scenario drivers for P10/P50/P90.
S&OP cadence, RACI, and change control rules.
If your team is debating estimates, start with the smallest set of assumptions you can defend, produce a P10/P50/P90 range, and agree on the decision thresholds that will trigger action as new data arrives.
FAQs
How do I forecast a completely novel product with no analogs?
Start with a top-down research approach (choice modeling or intent surveys) to estimate trial and repeat, then use a diffusion model to shape adoption. Add small experiments (smoke tests, geo pilots) to calibrate conversion. Express the output as P10/P50/P90 with wide bounds and tighten as signals arrive.
What accuracy should I expect for a new product forecast?
Expect higher error early. As a rule of thumb, week-level WAPE of 30–50% in weeks 1–4 is common, improving to 20–30% by weeks 5–12 as signals stabilize. Focus on bias reduction and speed of reforecasting rather than hitting a single point estimate.
How far out should I forecast and at what granularity?
Build weekly forecasts for at least 8–12 weeks to manage launch, then monthly out to 6–18 months for S&OP. Segment by SKU/variant, channel, and major region to match decision points and lead times.
How do I incorporate price and promotions for a new item?
Use conjoint-derived elasticities or analog price response. Apply promo lift factors by type and depth, including pull-forward effects. Run scenarios to see how price/promo shifts move P10/P50/P90 and margin.
How can I estimate cannibalization before launch?
Map overlaps, then use conjoint cross-utilities or historical cross-price elasticities to build a substitution matrix. Apply it to incumbents to estimate volume shifts and net portfolio impact.
Which pre-launch signals (search, social, waitlists) are most predictive?
It depends on context, but brand search lift and deposit-backed waitlists tend to be more reliable than social engagement. Calibrate each signal using analogs and adjust for representativeness and conversion decay.
How do I reconcile sell-in and sell-through in the forecast?
Model them separately. Use retailer orders and pipeline fills for sell-in, and POS-based curves for sell-through. Link replenishment to sell-through and monitor on-shelf availability to avoid misreading front-loaded sell-in as demand.
What KPIs should I use to evaluate and improve the forecast?
Track WAPE/MAE for magnitude, bias for direction, FVA to judge method value, and OTIF/lost sales for operational impact. Use variance decomposition to attribute misses and drive targeted fixes.
