New Coke — The Most Famous Research Failure in Marketing History
Covers lectures
F3-02 · F3-03 · F3-08
Module: F3 — Market Research & Data
Type: Research Failure Case
Cross-references: F3-02 (qualitative vs quantitative research), F3-03 (research design and methodology), F3-08 (from data to insight)
The Situation
On 23 April 1985, Roberto Goizueta, Chairman and CEO of The Coca-Cola Company, stood before a room of journalists in New York City and announced that the company was changing the formula of the world's most popular soft drink. The original Coca-Cola formula — a recipe that had remained essentially unchanged for ninety-nine years — was being retired. In its place, a new, smoother, sweeter formulation would be sold simply as "Coca-Cola." The old formula was gone. Forever.
Goizueta was confident. He had reason to be. Behind the decision lay the most extensive market research programme in the history of consumer goods. Over the course of several years, The Coca-Cola Company had conducted more than 200,000 taste tests across the United States. The research was unambiguous: consumers preferred the new formula to the old one. They preferred it to Pepsi, too. By every measurable criterion the research team had applied, New Coke was a superior product.
The research was right. And the decision was a catastrophe.
Within days of the announcement, The Coca-Cola Company was inundated with complaints. Consumer hotlines received over 40,000 calls and letters of protest. Protest groups formed spontaneously. A Seattle man named Gay Mullins founded the "Old Cola Drinkers of America" and filed a class action lawsuit. A psychiatrist hired by Coca-Cola to listen to the complaint calls reported that some callers sounded as though they were describing the death of a family member. In the American South — Coca-Cola's historical heartland — the reaction was particularly visceral. People stockpiled remaining cases of original Coke. Black-market prices for the old formula rose sharply. The company's consumer affairs department, which typically handled around 400 calls per day, was receiving over 1,500.
Seventy-nine days after the launch, on 11 July 1985, The Coca-Cola Company reversed course. Donald Keough, the company's President, announced the return of the original formula as "Coca-Cola Classic." At the press conference, Keough said: "We have heard you. The simple fact is that all the time and money and skill poured into consumer research on the new Coca-Cola could not measure or reveal the deep and abiding emotional attachment to original Coca-Cola felt by so many people."
This is the single most instructive case in the history of market research. Not because the research was poorly conducted — it was conducted with exceptional rigour. Not because the methodology was flawed in any obvious technical sense — the taste tests were well-designed by the standards of the field. The New Coke case is the definitive example of a deeper problem: research that answers the wrong question with perfect methodology produces perfectly wrong answers.
The Data
The Research Programme
The Coca-Cola Company's research programme leading to New Coke was, by any quantitative standard, extraordinary.
Scale. The company conducted approximately 200,000 taste tests over several years — the largest consumer taste-testing programme ever undertaken. These tests were conducted across multiple US cities, with demographically representative samples. The sheer scale of the programme was intended to reduce sampling error to negligible levels and provide statistically unassailable results.
Methodology. The core methodology was the blind paired comparison test — the same technique Pepsi had used in its Pepsi Challenge campaign. Consumers were presented with unmarked cups containing different formulations and asked which they preferred. The tests included multiple conditions: old Coke vs. new Coke, new Coke vs. Pepsi, old Coke vs. Pepsi, and various intermediate formulations. The methodological design was rigorous by industry standards.
Results. The findings were consistent and clear. In blind taste tests, 55% of consumers preferred New Coke to old Coke. When told the test involved Coca-Cola formulations, the preference for New Coke rose to 61%. New Coke also beat Pepsi in blind tests by a wider margin than old Coke did. By the metric the research was designed to measure — taste preference — the new formula was unambiguously superior.
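The claim that a sample of this size made the result statistically unassailable can be checked with a back-of-envelope calculation. The sketch below computes a 95% confidence interval for the 55% blind-test figure quoted above; pooling all 200,000 tests into a single sample is a simplifying assumption (the individual studies were smaller and varied in design), so the true interval for any one comparison would be somewhat wider.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# 55% preference for New Coke, treating the 200,000 tests as one pooled
# sample (a simplifying assumption for illustration).
low, high = proportion_ci(0.55, 200_000)
print(f"95% CI: {low:.4f} to {high:.4f}")  # roughly 0.548 to 0.552
```

The interval excludes 50% by a wide margin, which is exactly why the leadership team saw the result as beyond doubt. The precision was real; what it could not speak to was whether taste preference was the variable that mattered.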
Financial context. The research programme was not conducted in a vacuum. Throughout the late 1970s and early 1980s, Coca-Cola's market share in the US had been declining. Pepsi's share was rising. The Pepsi Challenge — a public blind taste test campaign that consistently showed consumers preferring Pepsi — was eroding Coca-Cola's confidence in its own product. Between 1972 and 1984, Coca-Cola's share of the US cola market had fallen from approximately 24% to 21.7%. Pepsi had narrowed the gap to less than three percentage points. For a company accustomed to dominant market leadership, this was alarming.
The decision logic. The logic that led from the research to the decision was straightforward and, on its own terms, impeccable. If consumers prefer New Coke to old Coke, and consumers prefer New Coke to Pepsi, then launching New Coke should simultaneously improve the product AND increase market share relative to Pepsi. The research supported the decision. The data was clear. The logic was sound.
The logic was also wrong. Not because the data was wrong, but because the question was wrong.
What the Research Measured
The 200,000 taste tests measured one thing: relative taste preference in a controlled setting. This is what blind taste tests are designed to measure, and they measured it accurately.
But taste preference in a controlled setting is not the same as purchase behaviour in a market. The gap between the two is the gap that destroyed New Coke, and understanding this gap is the central lesson of this case.
The sip test problem. As with the Pepsi Challenge (see F2, Case 01), single-sip and small-sample taste tests systematically favour sweeter products. New Coke was sweeter than old Coke — that was the point of the reformulation. In a quick taste comparison, sweetness wins. Over a full can consumed during a meal, watching television, or at a barbecue — the actual consumption contexts — the preference picture may differ substantially. Malcolm Gladwell, in Blink (2005), documented this problem in detail: what people prefer in a sip is not necessarily what they prefer in a serving.
The absence of context. The taste tests were conducted in shopping centres, offices, and research facilities. Consumers tasted liquids from unmarked cups, in isolation from every variable that characterises real-world consumption. There was no brand. No packaging. No social context. No history. No ritual. The tests measured the flavour of a liquid in the same way a laboratory might measure its chemical composition — accurately, precisely, and completely disconnected from how human beings actually experience a soft drink.
The replacement question that was never asked. The most devastating methodological gap was the question the research never posed directly, or at least never weighted appropriately. The taste tests asked: "Which do you prefer?" They did not ask — or did not adequately probe — what amounted to the real question: "We are going to take away the Coca-Cola you have been drinking your entire life and replace it with this. How do you feel about that?"
Some evidence suggests that a version of this question was asked in at least some of the later research. Roy Stout, the head of Coca-Cola's market research department, later acknowledged that when consumers were told the new flavour would replace the old one, approximately 10-12% of those who preferred New Coke said they would be upset. The research team apparently judged this an acceptable loss — roughly 200,000 of the company's then 40 million daily US consumers. What they failed to recognise was that this 10-12% were not a random sample of mild dissenters. They were the most passionate, most vocal, most brand-loyal consumers in the franchise — the people who would make phone calls, write letters, organise protests, and generate the media firestorm that followed.
What the Research Missed
The research missed four things that turned out to matter more than taste.
The emotional meaning of the brand. Coca-Cola was not simply a beverage. It was a cultural artefact. It was the drink of American identity — present at every barbecue, every ball game, every Thanksgiving dinner, every childhood memory of summer. It was what soldiers drank in World War II. It was what appeared in Norman Rockwell paintings. It was, as the company's own advertising had spent decades establishing, "the real thing." The emotional associations with Coca-Cola were not about the liquid in the can. They were about what the brand represented: continuity, tradition, authenticity, America itself.
The research team measured taste. They did not measure — and arguably could not have measured with their methodology — the depth of this emotional attachment. The attachment was not to the flavour. It was to the idea. And when the company announced it was changing the formula, what consumers heard was not "we are giving you a better-tasting drink." What they heard was "we are taking away something that belongs to you."
Loss aversion. Daniel Kahneman and Amos Tversky's prospect theory, published in 1979 — six years before the New Coke launch — had established that people feel losses approximately twice as intensely as equivalent gains. Applied to New Coke, this means that the pain of losing original Coke was felt approximately twice as intensely as the pleasure of gaining a better-tasting formula. The research measured the gain (a preferred taste). It did not measure — or account for — the loss (the removal of a beloved product). Even if the gain was real, the loss was more powerful.
This is a fundamental principle of consumer behaviour, and it was available in the academic literature at the time. The Coca-Cola research team either did not engage with prospect theory or did not consider it applicable to a product reformulation. They should have. Loss aversion is not a niche phenomenon. It is one of the most robustly replicated findings in behavioural economics, and it applies with particular force to products and brands that consumers consider part of their identity.
The identity dimension. For many consumers, particularly in the American South and among older demographics, drinking Coca-Cola was an identity statement. "I'm a Coke drinker" was not merely a beverage preference — it was a self-description, a tribal affiliation, a way of being in the world. When the company changed the formula, these consumers experienced it as an assault on their identity. The company was not just reformulating a drink; it was telling consumers that their taste, their judgement, their loyalty had been wrong.
The anger was not about flavour. It was about respect. The message consumers received was: "Your opinion of this product — the product you have been loyal to for decades — is incorrect, and we are fixing it." This is precisely the kind of insight that emerges from qualitative research — depth interviews, focus groups, ethnographic observation — rather than quantitative taste tests.
The difference between individual preference and collective meaning. The taste tests measured individual responses in isolation. Each consumer tasted two liquids and indicated a preference. But Coca-Cola is not consumed in isolation. It is consumed in a cultural context — shared with friends, associated with occasions, embedded in rituals. The meaning of Coca-Cola is collective, not individual. No taste test, regardless of sample size, can capture collective meaning, because collective meaning does not reside in any individual consumer's palate. It resides in the shared cultural understanding of what the brand represents.
The Analysis
The Wrong Question, Perfectly Answered
The New Coke case is not a story about bad research. It is a story about the wrong research.
The Coca-Cola research team asked: "Do consumers prefer the taste of New Coke?" This is a perfectly legitimate research question, and they answered it with extraordinary rigour. The answer was yes. The problem is that this was not the question that mattered.
The question that mattered was: "What will happen when we replace Coca-Cola with a new formula?" This question encompasses taste preference, but it also encompasses loss aversion, emotional attachment, brand identity, cultural meaning, consumer autonomy, and the psychology of change. The taste test, by design, could only answer the taste question. It was blind to everything else — and everything else turned out to be everything that mattered.
This is the central lesson for any market researcher. Methodology does not validate research. Relevance validates research. A perfectly designed study that measures the wrong thing is worse than a rough-and-ready study that measures the right thing — because the perfect methodology creates false confidence. The 200,000 taste tests gave the Coca-Cola leadership team enormous confidence in a decision that was fundamentally misconceived. If the research had been less rigorous — if the sample had been smaller, if the results had been more ambiguous — the leadership might have proceeded with more caution. The rigour of the research was, paradoxically, the cause of the catastrophe. It silenced doubt.
The Qualitative Gap
Perhaps the most consequential methodological failure was the near-total reliance on quantitative research to inform a decision that was fundamentally qualitative.
What quantitative research can do. Quantitative methods — taste tests, surveys, conjoint analysis, market modelling — excel at measuring things that can be counted. How many consumers prefer taste A to taste B? What is the mean satisfaction score? How does purchase intent correlate with price? These are quantitative questions, and quantitative methods answer them well.
What quantitative research cannot do. Quantitative methods cannot easily capture the emotional depth of a consumer's relationship with a brand. They cannot reveal the cultural meanings embedded in consumption rituals. They cannot surface the unconscious assumptions and identity associations that drive behaviour in ways consumers themselves may not understand or be able to articulate. These are qualitative questions, and they require qualitative methods — depth interviews, ethnographic observation, projective techniques, semiotic analysis.
What qualitative research might have revealed. If the Coca-Cola research team had conducted extensive qualitative research before (or alongside) the quantitative taste tests, they would likely have uncovered the emotional and identity dimensions of the brand that the taste tests missed. Qualitative interviews with loyal Coke drinkers — particularly long-standing customers, Southern consumers, and older demographics — would almost certainly have revealed the depth of emotional attachment to the original formula. Not as a data point ("12% say they'd be upset"), but as a lived experience: the stories, memories, rituals, and identity meanings that consumers associated with Coca-Cola.
Ethnographic research — observing consumers in their homes, at barbecues, at family gatherings — would have revealed that Coca-Cola's meaning was inseparable from its context. The drink was not consumed in a research lab; it was consumed in life. And in life, the brand carried meanings that no taste test could capture.
The lesson is not that qualitative research is superior to quantitative research. The lesson is that qualitative research should precede quantitative research — particularly when the decision involves changing something that consumers may have deep emotional relationships with. Qual before quant is not merely a methodological preference. In this case, it might have been the difference between the most successful product strategy in Coca-Cola's history and the most embarrassing.
The Stated Preference Problem
The New Coke case is a defining example of the gap between stated preference and real-world behaviour — a theme that runs through the entire F3 curriculum and connects directly to consumer behaviour theory (F2-07).
Stated preference is what consumers say they will do when asked in a research setting. "I prefer this taste." "I would buy this product." "I would recommend this brand." Stated preferences are easy to measure, easy to quantify, and easy to present in a PowerPoint slide to a board of directors. They are also, in many categories and contexts, poor predictors of actual behaviour.
Revealed preference is what consumers actually do in the market — what they buy, what they choose, what they reach for. Revealed preferences are harder to measure (they require behavioural data, not survey data), but they are vastly more reliable as predictors of market outcomes.
The 200,000 taste tests measured stated preference. "Which do you prefer?" is a stated preference question. And the stated preference was clear: consumers preferred New Coke. But stated preference in a controlled taste test bears almost no resemblance to the choice behaviour that occurs in a supermarket, a restaurant, or a vending machine. In those contexts, the consumer is not evaluating taste in isolation. They are choosing a brand — with all the emotional, social, and identity baggage that brand choice entails.
The gap between stated preference and revealed preference is not an anomaly. It is the normal condition of consumer behaviour. Consumers routinely say one thing and do another — not because they are dishonest, but because the psychological processes that drive answers to research questions are different from the psychological processes that drive behaviour in markets. Research operates in System 2 — conscious, deliberate evaluation. Markets operate in System 1 — fast, automatic, heuristic-driven choice. A finding that is robust in System 2 may be entirely irrelevant in System 1.
What Coca-Cola Got Right (Accidentally)
The story has an ironic coda. The return of "Coca-Cola Classic" generated an enormous wave of positive publicity, consumer goodwill, and media coverage. Sales of Coca-Cola Classic surged after its reintroduction. Coca-Cola's market share recovered and then exceeded its pre-New Coke levels. Some commentators have speculated — with no supporting evidence — that the entire episode was a deliberate marketing strategy: withdraw the product, let consumers realise how much they love it, then bring it back to a hero's welcome.
This theory is almost certainly wrong. The internal accounts, including those from Goizueta, Keough, and the research team, consistently describe the decision to withdraw original Coke as genuine and the decision to bring it back as a panicked response to an unexpected crisis. But the accidental outcome reveals a deeper truth about brand equity.
The New Coke episode was, in effect, a massive natural experiment in loss aversion. By taking away original Coke, the company inadvertently demonstrated — at a scale no research project could replicate — exactly how much consumers valued it. The intensity of the backlash was itself data: data about the depth of emotional attachment that 200,000 taste tests had failed to capture. The crisis proved what the research had missed: that Coca-Cola's value to consumers was not primarily about taste. It was about identity, continuity, and the emotional meaning of a brand that had been part of their lives for as long as they could remember.
Donald Keough, reflecting on the episode, offered what may be the most honest post-mortem in marketing history: "Some critics will say Coca-Cola made a marketing mistake. Some cynics will say that we planned the whole thing. The truth is we are not that dumb, and we are not that smart."
The Synthesis
The New Coke case is a Both/And case, though not in the way most analyses present it.
The conventional telling positions this as a story of emotion beating rationality — of soft, unmeasurable brand attachment defeating hard, quantifiable taste data. But this framing is too simple.
The truth is that the research AND the emotion were both real. Consumers genuinely preferred the taste of New Coke. And consumers genuinely had a deep emotional attachment to original Coke. Both were true simultaneously. The failure was not in measuring taste preference — that measurement was valid. The failure was in treating taste preference as the only relevant variable.
The evidence-based lesson is this: good research requires measuring the right things AND measuring them well. Coca-Cola measured one variable superbly and ignored several others entirely. A research programme that had measured taste preference AND emotional attachment, that had used quantitative methods AND qualitative methods, that had asked "which tastes better?" AND "what does this brand mean to you?" — that programme would have revealed the full picture. And the full picture would have shown that the taste advantage of New Coke was real but insufficient to overcome the emotional and identity costs of replacing the original.
The research was not wrong. It was incomplete. And incomplete research, presented with the false confidence of a 200,000-person sample, is more dangerous than no research at all — because it creates the illusion of certainty in a situation that demanded humility.
The Questions
F3-02 Application. The New Coke research programme relied almost entirely on quantitative taste tests. Design an alternative research programme that integrates qualitative and quantitative methods. What qualitative approaches would you have used first, and what questions would they have aimed to answer? How would the qualitative findings have shaped the subsequent quantitative phase?
F3-03 Application. Evaluate the methodology of the 200,000 taste tests using the principles of research design from F3-03. What was the research question, and was it the right question? What variables did the methodology control for, and what variables did it fail to account for? How does this case illustrate the difference between internal validity (the study measures what it intends to measure) and external validity (the findings generalise to the real world)?
F3-08 Application. The Coca-Cola research team had data that 10-12% of taste-test participants would be upset by the formula change. This was treated as an acceptable loss. Apply the principles of insight development from F3-08: how should this data point have been interpreted? What additional investigation should it have triggered? How does this case illustrate the difference between a data point and an insight?
Sources
Oliver, T. (1986). The Real Coke, the Real Story. Random House.
Schindler, R.M. (1992). "The Real Lesson of New Coke: The Value of Focus Groups for Predicting the Effects of Social Influence." Marketing Research, 4(4), 22-27.
Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. Penguin.
Kahneman, D. & Tversky, A. (1979). "Prospect Theory: An Analysis of Decision under Risk." Econometrica, 47(2), 263-291.
Kahneman, D. (2011). Thinking, Fast and Slow. Penguin.
Hays, C.L. (2004). The Real Thing: Truth and Power at the Coca-Cola Company. Random House.
Bastedo, R. & Davis, T. (1986). "The Coke Taste Test Revisited: A New Look at the Data." Working paper presented at the American Marketing Association conference.
Sharp, B. (2010). How Brands Grow: What Marketers Don't Know. Oxford University Press.