The School of Real Marketing
F3-01 · F3 — Market Research & Data

What Market Research Actually Is

The productive tension

Research as uncertainty reduction and as decision support

The synthesis

One school treats market research as a truth-finding mission — collect enough data and the answer will reveal itself. Another school dismisses research altogether, trusting gut instinct, experience, and speed over evidence. Both are wrong. Research does not give you the answer. It improves the question. It reduces uncertainty without eliminating it, and it supports judgement without replacing it. The evidence-based marketer uses research to make better decisions under uncertainty — not to avoid making decisions at all, and not to make decisions in the dark. Research AND judgement. Evidence AND experience. Data AND intuition. The error is treating research as either an oracle or an obstacle.

Learning objectives

  • Define market research and explain its role within the broader marketing strategy process
  • Describe the market research process from problem definition through to actionable insight
  • Distinguish between exploratory, descriptive, and causal research designs and identify when each is appropriate
  • Explain Ritson's hierarchy of evidence in marketing and apply it to evaluate the quality of marketing claims
  • Articulate the Both/And of research as uncertainty reduction and decision support, recognising that research improves questions rather than providing answers

F3-01: What Market Research Actually Is

In 1985, Coca-Cola made what is widely considered one of the most catastrophic marketing decisions of the twentieth century. They reformulated their flagship product — the most recognised brand on the planet — and launched "New Coke." The backlash was immediate, visceral, and commercially devastating. Within seventy-nine days, they reversed the decision.

Here is the part that should unsettle you: the decision was based on research. Extensive research. Coca-Cola conducted nearly 200,000 taste tests. The data was unambiguous — consumers preferred the new, sweeter formula to both the original Coke and to Pepsi. The research was methodologically sound. The sample was enormous. The results were clear.

And the conclusion drawn from it was spectacularly wrong.

The taste tests measured taste preference. They did not measure what Coke meant to people — the identity, the nostalgia, the cultural significance that had nothing to do with flavour chemistry. The research answered the question it was asked. The problem was that Coca-Cola asked the wrong question.

This is the central paradox of market research, and it is where this module begins: research is essential, research is powerful, and research is dangerous. It is essential because marketing decisions made without evidence are gambles. It is powerful because good research reveals things that intuition cannot. And it is dangerous because bad research — or good research misapplied — creates a false confidence that is worse than honest uncertainty.

This lecture will define what market research actually is, map the process from problem to insight, and establish the intellectual framework for everything that follows in this module.


Part 1: Defining Market Research

1.1 What It Is

Market research is the systematic process of gathering, analysing, and interpreting information about a market, about a product or service to be offered for sale in that market, and about the past, present, and potential customers for the product or service (Malhotra, 2019). It is the mechanism through which marketers replace assumptions with evidence, hunches with data, and opinions with insights.

That definition contains a word that does a great deal of work: systematic. Market research is not a casual glance at the competition's website. It is not reading the comments section under your latest social media post. It is not asking your spouse whether they like the new packaging. These activities may generate useful observations, but they are not research. Research requires method — a deliberate process of inquiry designed to produce reliable, valid, and actionable information.

Burns and Bush (2014) define it more operationally: market research is the process of designing, gathering, analysing, and reporting information that may be used to solve a specific marketing problem. Note the phrase "specific marketing problem." Research does not exist in the abstract. It exists to serve a decision. If there is no decision to be made, there is no research to be done. This is a point we will return to repeatedly, because a surprising amount of research is commissioned without a clear decision in mind — and research without a decision is a report that nobody reads.

1.2 What It Is Not

Market research is not a substitute for strategy. It is an input to strategy. This distinction matters because in many organisations, research has become a procrastination mechanism — a way of deferring difficult decisions under the respectable guise of "needing more data." Ritson (2024) has been characteristically direct on this point: the purpose of research is to inform the diagnosis, not to replace the diagnostician. You still need a strategist who can interpret the evidence, weigh the trade-offs, and make the call. Data does not make decisions. People do.

Market research is also not market intelligence, though the two are often conflated. Market intelligence is the ongoing, ambient collection of competitive and market information — monitoring competitor activity, tracking industry trends, reading trade publications, attending conferences. It is important, but it is not systematic inquiry designed to answer a specific question. Churchill and Iacobucci (2018) draw this distinction clearly: intelligence is continuous and broad; research is episodic and focused.

And market research is not data analytics, though in the digital age the boundary has blurred considerably. Data analytics examines behavioural data that already exists — website visits, purchase histories, click-through rates, app usage patterns. Market research generates new data to answer questions that existing data cannot. Both are valuable. Neither is a substitute for the other. We will explore their relationship in later lectures.


Part 2: Why Research Exists — The Uncertainty Problem

2.1 Marketing Decisions Under Uncertainty

Every marketing decision is made under uncertainty. Should we enter this segment or that one? Should we price at the premium or the mid-market level? Should we invest in brand building or performance marketing? Should we launch the line extension or invest in the core product? Should we expand into Germany or consolidate in the UK?

These are consequential decisions. They allocate significant resources. They are difficult or impossible to reverse quickly. And they are made without perfect information, because perfect information does not exist in commercial markets. The future is unknowable. Competitors are unpredictable. Consumer preferences shift. Economic conditions change. Technology disrupts.

Market research exists to reduce this uncertainty. Not to eliminate it — that is impossible — but to narrow the range of possible outcomes, to distinguish between plausible hypotheses and implausible ones, and to give the decision-maker a better basis for judgement than intuition alone.

Malhotra (2019) frames this as the fundamental value proposition of research: it reduces the risk of making the wrong decision. The cost of research is the investment in the process. The benefit is the value of the improved decision. When the potential cost of a bad decision is high — a product launch that fails, a market entry that backfires, a repositioning that alienates existing customers — the investment in research to reduce that risk is almost always justified.
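Malhotra's cost-benefit framing can be made concrete with a toy expected-value calculation. The figures below are invented purely for illustration, but the logic is general: research is worth commissioning when the expected loss it avoids exceeds its cost.

```python
# Illustrative only: all figures are invented, not from the lecture.
# Research is worth buying when the expected loss it avoids
# exceeds what the research itself costs.

cost_of_bad_decision = 2_000_000   # e.g. a failed product launch (£)
p_wrong_without_research = 0.30    # assumed risk deciding on intuition alone
p_wrong_with_research = 0.15       # assumed risk after good research
research_cost = 50_000             # cost of the study (£)

expected_loss_without = p_wrong_without_research * cost_of_bad_decision
expected_loss_with = p_wrong_with_research * cost_of_bad_decision

# Value of the research = reduction in expected loss, net of its cost
net_value = (expected_loss_without - expected_loss_with) - research_cost

print(f"Expected loss without research: £{expected_loss_without:,.0f}")
print(f"Expected loss with research:    £{expected_loss_with:,.0f}")
print(f"Net value of doing the research: £{net_value:,.0f}")
```

Under these assumed numbers the research returns £250,000 of net expected value. The point is not the arithmetic but the framing: when the stakes are small or the research barely shifts the odds, the same calculation can come out negative — which is why research should serve decisions, not ritual.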

2.2 The Cost of Not Doing Research

The marketing graveyard is filled with products that failed because nobody asked the customer. The Segway was going to revolutionise urban transport — its early backers predicted it would be "bigger than the internet." It was not, because nobody researched whether consumers actually wanted a two-wheeled electric scooter that cost five thousand dollars, could not be used in the rain, and had no clear use case beyond novelty. The technology was brilliant. The market understanding was absent.

Google Glass launched with extraordinary technological ambition and almost no research into social acceptability. It turned out that people did not want to interact with someone wearing a camera on their face. The privacy concerns, the social awkwardness, the aesthetic objections — all of these were discoverable through basic qualitative research before a single unit was manufactured. Google chose to skip that step. The product was discontinued within two years.

These are dramatic examples, but the more common failures are quieter and more insidious. The brand extension that cannibalises the core product. The advertising campaign that resonates with the marketing team but bewilders the target audience. The pricing strategy based on cost-plus logic that ignores willingness to pay. The distribution expansion into channels where the brand has no salience. These are not spectacular disasters. They are the slow accumulation of suboptimal decisions made without adequate evidence — death by a thousand uninformed cuts.

2.3 The Cost of Bad Research

But here is the other side of the coin — and this is where many research textbooks lose their nerve. Bad research is not merely useless. It is actively harmful, because it creates confidence in the wrong direction.

New Coke is the canonical example, but it is far from unique. In the late 1950s, Ford's market research indicated strong consumer interest in a bold, distinctive mid-size saloon. The result was the Ford Edsel, launched in September 1957 and discontinued by November 1959 — one of the most expensive commercial failures in automotive history. The research was real. The interpretation was catastrophically wrong. Ford had measured stated interest in abstract attributes ("distinctive design," "powerful engine") without testing whether consumers would actually choose the specific design that embodied those attributes. The gap between what consumers say they want and what they actually choose — a gap we will explore extensively in F3-02 — swallowed the Edsel whole.

The lesson is not that research is unreliable. The lesson is that research is only as good as the question it asks, the method it employs, and the judgement applied to its interpretation. Research that asks the wrong question with the right method produces precise irrelevance. Research that asks the right question with the wrong method produces misleading answers. And research that asks the right question with the right method but is interpreted through the lens of confirmation bias produces sophisticated self-deception.


Part 3: The Research Process

3.1 The Six Stages

The market research process follows a logical sequence that Malhotra (2019) formalises into six stages. Each stage constrains and informs the next, and skipping stages produces unreliable results.

Stage 1: Problem Definition. This is the most important and most frequently botched stage of the entire process. The research problem is not the same as the marketing problem. The marketing problem might be "sales are declining in the North." The research problem is the specific question that, if answered, would illuminate the marketing problem: "What factors are driving the decline in purchase frequency among existing customers in the North?" Problem definition requires dialogue between the decision-maker (who knows what decision needs to be made) and the researcher (who knows what questions can be answered with evidence). When this dialogue fails — when the brief is vague, when the decision-maker does not know what they want to learn, or when the researcher does not understand the strategic context — the research that follows is almost guaranteed to be irrelevant.

Stage 2: Research Design. Once the problem is defined, the researcher selects the appropriate approach. This is where the distinction between exploratory, descriptive, and causal designs becomes critical — a distinction we will examine in detail in Part 4. The design also specifies the data sources (primary or secondary), the data collection method (survey, interview, observation, experiment), the sampling approach, and the analytical framework.

Stage 3: Data Collection. This is the stage most people think of when they hear "market research" — the surveys, the focus groups, the interviews, the experiments. It is operationally complex and logistically demanding, but it is not intellectually the most challenging stage. The intellectual heavy lifting was done in stages one and two. Data collection is the execution of a plan. Its quality depends on the quality of the plan.

Stage 4: Analysis. Raw data is not information, and information is not insight. Analysis is the process of transforming data into patterns, patterns into findings, and findings into implications. It encompasses statistical analysis (for quantitative data), thematic analysis (for qualitative data), and the interpretive judgement that connects analytical output to the original research problem. Burns and Bush (2014) emphasise that analysis without interpretation is incomplete — the numbers must be read in context, with an understanding of both the method's limitations and the strategic situation.

Stage 5: Insight. This stage is often collapsed into analysis, but it deserves separate treatment because it involves a qualitatively different cognitive act. Analysis tells you what the data says. Insight tells you what the data means for the business. Insight is the bridge between evidence and action — the "so what?" that transforms a finding into a recommendation. A finding says "forty-three per cent of target consumers cannot recall our brand unaided." An insight says "our mental availability is critically low, which explains why our penetration is stalling despite adequate distribution."

Stage 6: Action. Research that does not lead to action is waste. This is a brutal truth that the research industry sometimes resists, because it subordinates the intellectual elegance of the research to the messy pragmatism of commercial decision-making. But it is inescapable. Research exists to improve decisions. If the decision-maker cannot use the research — because it was not connected to a real decision, because the findings are ambiguous, because the presentation was impenetrable, or because the organisation lacks the capability to act on it — then the money spent on research was wasted, regardless of how methodologically rigorous it was.


Part 4: Research Design — The Three Types

4.1 Exploratory Research

Exploratory research is used when the problem is not yet clearly defined. The marketer knows something is happening — sales are changing, customers are behaving differently, a new competitor is gaining traction — but does not yet know why or what the relevant variables are. The goal is not to test a hypothesis but to generate hypotheses.

Exploratory research is typically qualitative: depth interviews, focus groups, ethnographic observation, expert consultations. It is flexible, open-ended, and iterative. The sample sizes are small and non-representative by design, because the goal is depth of understanding, not breadth of generalisation.

Churchill and Iacobucci (2018) describe exploratory research as "the detective work" of the research process — the stage where you do not yet know what you are looking for, so you cast a wide net and follow the clues. It is the stage that should precede more structured research, because trying to measure something before you understand it is a reliable recipe for measuring the wrong thing.

4.2 Descriptive Research

Descriptive research is used when the problem is defined and the key variables are identified, and the goal is to measure the size, frequency, or distribution of those variables. It answers questions like: How many? How often? Who? Where? When?

Descriptive research is typically quantitative: surveys, panel studies, observational counts, secondary data analysis. It requires large, representative samples because its value lies in generalisation — projecting from the sample to the population. Market sizing, usage and attitude studies, brand tracking, customer satisfaction surveys — these are all descriptive research.

The critical limitation of descriptive research is that it describes but does not explain. It can tell you that thirty-seven per cent of your target market has used a competitor's product in the past six months. It cannot tell you why they used it. Description without explanation is information without insight — useful for benchmarking but insufficient for strategy.
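The generalisation step that gives descriptive research its value can be made explicit with a confidence interval, which states how precisely a sample figure projects to the population. The sketch below uses the thirty-seven per cent finding from above with an assumed sample size; the numbers are illustrative, not from the lecture.

```python
# A minimal sketch of how a descriptive finding generalises:
# a 95% confidence interval for a sample proportion.
# Sample size is assumed for illustration.
import math

n = 1_000        # survey respondents (assumed)
p_hat = 0.37     # 37% report using a competitor's product

# Normal-approximation 95% interval: p_hat ± 1.96 * sqrt(p(1-p)/n)
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - margin, p_hat + margin

print(f"Estimate: {p_hat:.1%}, 95% CI: ({low:.1%}, {high:.1%})")
# With n = 1,000 the margin is about ±3 percentage points: the
# population figure plausibly lies anywhere from roughly 34% to 40%.
```

Note what the interval does and does not say: it quantifies the precision of the description, but it still explains nothing about *why* those consumers switched — which is exactly the limitation discussed above.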

4.3 Causal Research

Causal research is used when the marketer needs to establish whether one variable causes a change in another. Does this price change cause a change in purchase intent? Does this advertising exposure cause a change in brand awareness? Does this packaging redesign cause a change in perceived quality?

Causal research requires experimental or quasi-experimental design — the controlled manipulation of one variable while holding others constant. A/B testing is the most common form of causal research in contemporary marketing. Conjoint analysis, which isolates the contribution of individual product attributes to overall preference, is another.

Burns and Bush (2014) emphasise that causal research is the most powerful but also the most demanding design. It requires rigorous control, careful operationalisation, and sophisticated statistical analysis. When done well, it provides the strongest evidence base for marketing decisions. When done poorly — when the controls are inadequate, the sample is biased, or the experimental conditions are unrealistic — it produces false confidence in causal claims that are actually correlational.
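The statistical core of a simple A/B test can be sketched as a two-proportion z-test: did random assignment to the variant cause a real lift in conversion, or is the observed difference plausibly chance? All counts and the 0.05 threshold below are invented for illustration, and a real test would also require pre-registered sample sizes and realistic exposure conditions — the "inadequate controls" caveat above applies with full force.

```python
# A hedged sketch of the analysis behind a basic A/B test:
# a two-sided, two-proportion z-test on conversion rates.
# All figures are invented for illustration.
import math
from statistics import NormalDist

# Randomly assigned groups (assumed counts)
control_n, control_conv = 5_000, 200   # 4.0% conversion
variant_n, variant_conv = 5_000, 250   # 5.0% conversion

p1 = control_conv / control_n
p2 = variant_conv / variant_n

# Pooled proportion under the null hypothesis of no difference
pooled = (control_conv + variant_conv) / (control_n + variant_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))

z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference unlikely to be chance alone.")
else:
    print("No significant difference detected.")
```

Random assignment is what licenses the causal reading: because exposure to the variant was the only systematic difference between the groups, a significant difference in outcome can be attributed to the variant rather than to self-selection.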


Part 5: The Hierarchy of Evidence

5.1 Not All Evidence Is Equal

Mark Ritson (2024), drawing on principles from evidence-based medicine, argues that marketers need a hierarchy of evidence — a framework for evaluating the quality of the claims on which they base their decisions. Not all sources of evidence are equally reliable, and a marketer who treats a case study with the same confidence as a randomised controlled trial is making a systematic error.

Ritson's hierarchy, adapted for marketing, looks approximately like this, from strongest to weakest:

Level 1: Meta-analyses and systematic reviews. Studies that synthesise the findings of multiple independent studies, such as the IPA Effectiveness Databank analyses by Binet and Field, or Ehrenberg's cumulative body of work on buyer behaviour patterns (Ehrenberg, 1974). These are the strongest form of marketing evidence because they aggregate across many contexts, reducing the influence of any single study's idiosyncrasies.

Level 2: Randomised controlled experiments. A/B tests, field experiments, and randomised controlled trials where treatment and control groups are randomly assigned. These establish causation, not merely correlation.

Level 3: Large-scale observational studies. Panel data analyses, econometric modelling, and large-sample surveys that identify patterns across substantial populations but cannot definitively establish causation.

Level 4: Qualitative research. Focus groups, depth interviews, and ethnography that provide deep understanding of mechanisms and motivations but cannot generalise to populations.

Level 5: Expert opinion and case studies. The views of experienced practitioners and the lessons of individual cases. Valuable for hypothesis generation and practical wisdom, but vulnerable to survivorship bias, hindsight bias, and the cherry-picking of convenient examples.

Level 6: Gut instinct, anecdote, and "best practice." The default basis for most marketing decisions in most organisations. Often wrong. Sometimes catastrophically so.

The hierarchy does not say that lower levels of evidence are worthless. Qualitative research is essential — it generates the hypotheses that higher-level research tests. Expert opinion carries real insight, especially from practitioners with decades of pattern recognition. But the hierarchy does say that when you are making a consequential decision, you should seek the highest available level of evidence and be honest about the level you are actually operating at.


Part 6: The Both/And — Research AND Judgement

6.1 The Oracle Fallacy

There is a persistent fantasy in marketing — and in business more broadly — that if you just do enough research, the answer will emerge. Commission one more study. Run one more survey. Analyse one more dataset. The truth is in the data. You just need to find it.

This is the oracle fallacy, and it is a trap. Research reduces uncertainty. It does not eliminate it. Even the best research — methodologically rigorous, well-designed, properly executed — leaves residual uncertainty. The future remains unpredictable. Consumer behaviour remains complex. Competitive dynamics remain fluid. At some point, someone has to make a judgement call that goes beyond what the data can definitively support.

The oracle fallacy leads to analysis paralysis — the perpetual commissioning of additional research to defer the moment of decision. It leads to the abdication of strategic responsibility, as decision-makers hide behind data rather than exercising the judgement they are paid to exercise. And it leads, paradoxically, to worse decisions, because the delay itself has costs — competitive windows close, market conditions shift, organisational momentum dissipates.

6.2 The Maverick Fallacy

The equal and opposite error is the maverick fallacy — the belief that great marketing decisions come from visionary intuition, not pedestrian research. Steve Jobs famously said that consumers do not know what they want until you show it to them. James Dyson developed his bagless vacuum cleaner through five thousand prototypes, driven by engineering conviction rather than consumer research. These stories are real, and they are appealing. They are also survivorship bias incarnate.

For every Jobs or Dyson, there are thousands of entrepreneurs who trusted their gut, ignored the evidence, and failed. We do not hear about them because failure is not newsworthy in the same way. The maverick narrative selectively remembers the hits and forgets the misses, creating the illusion that intuition is a reliable substitute for evidence. It is not. Ritson (2024) puts it plainly: "The plural of anecdote is not data."

The maverick narrative also misrepresents its own heroes. Jobs was an obsessive student of consumer behaviour — he simply studied it through observation and immersion rather than through traditional surveys. Dyson's five thousand prototypes were, in effect, a massive experimental research programme — iterative testing of hypotheses against physical reality. Neither man ignored evidence. They simply gathered it in unconventional ways.

6.3 The Evidence-Based Position

The synthesis is that research and judgement are not alternatives. They are complements. Research without judgement is a pile of data. Judgement without research is a gamble. The effective marketer uses research to inform, discipline, and improve their judgement — and then exercises that judgement to make decisions that the research alone cannot make.

This is what good diagnosis looks like in practice. In F1-03, we established that marketing strategy begins with diagnosis — understanding the market situation before deciding what to do about it. Research is the primary tool of diagnosis. But diagnosis is not merely the accumulation of facts. It is the interpretation of facts through the lens of strategic understanding. The diagnostician brings knowledge, experience, and judgement to the evidence. The evidence constrains and informs the diagnosis. The diagnosis constrains and informs the strategy. The strategy drives the action.

Research improves the question. It does not provide the answer. And that is exactly as it should be, because the answer to any strategic question depends on values, priorities, risk tolerance, and organisational capability — factors that no amount of data can resolve. The marketer who understands this is liberated from both the tyranny of "we need more data" and the recklessness of "I just know." They can use research for what it is good at — reducing uncertainty, testing hypotheses, revealing patterns — and reserve their judgement for what only judgement can do — making the call.


Key Takeaways

  • Market research is the systematic process of gathering, analysing, and interpreting information to solve a specific marketing problem. It requires method, rigour, and a clear connection to a decision. Without a decision to serve, research is an expensive academic exercise.

  • Research exists to reduce uncertainty, not to eliminate it. Every marketing decision is made under uncertainty. Research narrows the range of possible outcomes and distinguishes plausible hypotheses from implausible ones, but it cannot make the future predictable.

  • The research process follows six stages: problem definition, research design, data collection, analysis, insight, and action. Problem definition is the most important stage and the most frequently botched. Research that answers the wrong question with perfect methodology produces precise irrelevance.

  • Three research designs serve different purposes: exploratory (generate hypotheses), descriptive (measure variables), and causal (establish cause and effect). Each has strengths and limitations, and each is appropriate for different stages of the marketing problem.

  • Not all evidence is equal. Ritson's hierarchy of evidence — from meta-analyses down to gut instinct — provides a framework for evaluating the reliability of the claims on which marketing decisions are based.

  • The Synthesis: research AND judgement. Research without judgement is a pile of data. Judgement without research is a gamble. The effective marketer uses research to inform and discipline their judgement, then exercises that judgement to make decisions that research alone cannot make.


Sources

Burns, A.C. and Bush, R.F. (2014). Marketing Research. 7th ed. Harlow: Pearson.

Churchill, G.A. and Iacobucci, D. (2018). Marketing Research: Methodological Foundations. 12th ed. Nashville: Earle McPeek.

Ehrenberg, A.S.C. (1974). Repetitive Advertising and the Consumer. Journal of Advertising Research, 14(2), pp. 25-34.

Kotler, P. and Keller, K.L. (2016). Marketing Management. 15th ed. Harlow: Pearson.

Malhotra, N.K. (2019). Marketing Research: An Applied Orientation. 7th ed. Harlow: Pearson.

Ritson, M. (2024). Marketing Week Mini MBA Lectures. Marketing Week.

Sharp, B. (2010). How Brands Grow: What Marketers Don't Know. Melbourne: Oxford University Press.


Discussion Questions

  1. Think about a significant marketing decision you have witnessed or been involved in. What level of Ritson's hierarchy of evidence was the decision actually based on? With hindsight, was that level appropriate for the magnitude of the decision, or should stronger evidence have been sought?

  2. The New Coke case illustrates the danger of asking the wrong research question. Identify a current marketing practice where you believe the industry is systematically measuring the wrong thing. What question should they be asking instead, and what research design would answer it?

  3. Ritson argues that most marketing decisions are based on gut instinct, anecdote, and "best practice" — the lowest levels of the evidence hierarchy. Why do you think this persists in an industry that has more data available to it than at any point in history? Is the problem a lack of data, a lack of analytical capability, a lack of time, or something else entirely?

Primary sources

  • Malhotra (2019)
  • Ritson (2024)
  • Burns & Bush (2014)

Secondary sources

  • Churchill & Iacobucci (2018)
  • Kotler & Keller (2016)
  • Sharp (2010)
  • Ehrenberg (1974)