Statistical Thinking for Marketers
The productive tension
Statistical significance and practical significance
The synthesis
The significance-obsessed marketer runs an A/B test, finds p < 0.05, and declares victory -- even when the effect is a 0.3 per cent lift in click-through rate that will never move the business. The significance-dismissive marketer ignores statistical rigour entirely and makes decisions on gut feel and anecdote. Both are wrong. Statistical significance tells you whether an effect is real -- whether it is likely to replicate rather than vanish on the next run. Practical significance tells you whether the effect matters -- whether it is large enough to justify action, investment, or strategic change. The evidence-based marketer demands both: a result that is statistically real AND commercially meaningful. A tiny effect that replicates reliably is a fact worth knowing but not necessarily worth acting on. A large effect that fails to reach significance is a signal worth investigating but not worth betting the budget on.
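To see how a result can be statistically real yet commercially trivial, here is a minimal sketch of a two-proportion z-test on a large A/B test. The counts are hypothetical, chosen so the sample is huge and the lift tiny; the test statistic and p-value follow the standard pooled-variance formula.

```python
import math

# Hypothetical A/B test counts (illustrative, not from the lecture):
# one million impressions per arm -- a very large sample.
control_clicks, control_n = 50_000, 1_000_000   # 5.00% CTR
variant_clicks, variant_n = 50_800, 1_000_000   # 5.08% CTR

p1 = control_clicks / control_n
p2 = variant_clicks / variant_n

# Two-proportion z-test with a pooled standard error
pooled = (control_clicks + variant_clicks) / (control_n + variant_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
z = (p2 - p1) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

print(f"absolute lift: {p2 - p1:.4%}")    # 0.08 percentage points
print(f"z = {z:.2f}, p = {p_value:.4f}")  # p < 0.05: statistically significant
```

The effect clears p < 0.05, yet the absolute lift is 0.08 percentage points. Whether that is worth acting on depends on the economics of the channel, not on the p-value.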
Learning objectives
- Explain what statistical significance means and, critically, what it does not mean -- correcting the most common misinterpretations
- Distinguish between statistical significance and practical significance (effect size) and explain why both matter for marketing decisions
- Identify and explain the major statistical fallacies that plague marketing -- correlation vs causation, regression to the mean, survivorship bias, Simpson's paradox, and the base rate fallacy
- Apply Bayesian thinking to update marketing beliefs when new evidence arrives, rather than treating each study as a fresh verdict
- Develop statistical intuition about sample sizes, confidence intervals, and the limits of data-driven decision-making
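The Bayesian-updating objective above can be sketched with the simplest conjugate case: a Beta prior on a click-through rate updated by binomial evidence. The prior and the observed counts below are hypothetical, chosen only to show the mechanics of updating a belief rather than re-judging each test in isolation.

```python
# Hypothetical prior belief about a CTR: Beta(2, 38), mean 2/40 = 5%.
alpha, beta = 2, 38

# New evidence arrives: 30 clicks out of 400 impressions (hypothetical).
clicks, impressions = 30, 400

# Conjugate update: add successes to alpha, failures to beta.
alpha += clicks
beta += impressions - clicks

posterior_mean = alpha / (alpha + beta)
print(f"posterior mean CTR: {posterior_mean:.3f}")  # between prior (5%) and data (7.5%)
```

The posterior mean lands between the prior belief and the observed rate, weighted by how much evidence each side carries -- the core of treating a new study as an update rather than a verdict.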