Case 6: The Brand Tracker That Missed the Decline
Module: F5 — Brand Strategy
Cross-references: F5-11 (Brand Health Measurement), F5-05 (Mental Availability), F5-08 (Distinctive Assets)
This case is a composite, drawn from patterns observed across multiple industries. No single company is described. The data patterns presented are representative of dynamics documented in the Ehrenberg-Bass research literature and Romaniuk (2023). They are illustrative, not attributable to any specific firm.
The Situation
A mid-size consumer brand — call it BrandM — operated in a mature, competitive FMCG category with six major national competitors and growing private-label presence. BrandM held approximately 14% market share, making it the third-largest branded player. It had been in the category for over twenty years and had built solid awareness and a base of regular buyers.
BrandM ran a conventional brand tracker. The tracker was a quarterly online survey of 1,200 category buyers per wave. It had been in place, with minor modifications, for eight years. The tracker measured the following:
- Brand awareness (unaided and aided)
- Stated brand consideration ("How likely would you be to consider purchasing BrandM?")
- Brand image attributes — 64 attribute statements rated on a 5-point agree/disagree scale, including "good quality," "trustworthy," "innovative," "good value for money," "a brand for people like me," and so on
- Net Promoter Score (NPS)
- Overall brand health index — a proprietary composite of the above, weighted and indexed to a baseline year
The tracker was managed by a research agency that had held the account for the duration. Reports were delivered quarterly as a 90-page PowerPoint deck. A 3-page executive summary accompanied each deck, and this summary was what the CMO presented to the board.
For the three years in question — Years 1 through 3 of the decline — the tracker told a reassuring story.
The Tracker Data (Years 1-3)
| Metric | Year 0 (Baseline) | Year 1 | Year 2 | Year 3 |
|---|---|---|---|---|
| Aided awareness | 89% | 88% | 89% | 87% |
| Unaided awareness | 41% | 40% | 39% | 38% |
| Stated consideration | 52% | 51% | 53% | 50% |
| NPS | +18 | +20 | +19 | +17 |
| "Good quality" (% agree) | 67% | 68% | 66% | 65% |
| "Trustworthy" (% agree) | 61% | 62% | 63% | 60% |
| "A brand for people like me" (% agree) | 44% | 43% | 42% | 40% |
| Brand health index | 100 | 101 | 100 | 97 |
Nothing in this data triggered an alarm. Awareness was stable. Consideration fluctuated within the margin of error. NPS moved within normal range. Image attributes showed modest, gradual softening on a few dimensions, but nothing that would prompt a diagnostic investigation. The brand health index remained within three points of baseline for three consecutive years.
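Why the composite stayed flat can be sketched numerically. The following is a hypothetical reconstruction of the index — equal weights and a simple NPS rescaling are assumptions, not the agency's actual (proprietary) formula — showing how indexing each tracked metric to its baseline and averaging smooths away the individual declines:

```python
# Hypothetical reconstruction of the composite brand health index.
# The equal weights and NPS rescaling are illustrative assumptions:
# each metric is indexed to its Year 0 value and the index is the
# equal-weighted mean of those ratios, scaled to 100 at baseline.

baseline = {"aided": 89, "unaided": 41, "consideration": 52,
            "nps": 18, "quality": 67, "trust": 61, "like_me": 44}
year3 = {"aided": 87, "unaided": 38, "consideration": 50,
         "nps": 17, "quality": 65, "trust": 60, "like_me": 40}

def to_ratio(metric, value, base):
    if metric == "nps":  # shift NPS (-100..+100) onto a positive scale first
        return (value + 100) / (base + 100)
    return value / base

def health_index(wave, base):
    ratios = [to_ratio(m, wave[m], base[m]) for m in base]
    return 100 * sum(ratios) / len(ratios)
```

Run on the Year 3 figures from the table, this toy index lands in the mid-90s — close to the reported 97. Seven metrics each softening by a few percent average out to a composite that still looks stable, which is precisely how a composite conceals gradual erosion.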
The board was satisfied. The CMO was satisfied. The research agency renewed its contract.
The Business Data (Years 1-3)
Meanwhile, the commercial data was telling a different story.
| Metric | Year 0 (Baseline) | Year 1 | Year 2 | Year 3 |
|---|---|---|---|---|
| Market share (volume) | 14.0% | 13.4% | 12.6% | 11.8% |
| Category penetration | 28.1% | 26.7% | 24.9% | 23.3% |
| Average purchase frequency (among buyers) | 4.2x/year | 4.1x/year | 4.1x/year | 4.0x/year |
| Weighted distribution | 82% | 81% | 80% | 79% |
Market share had declined by 2.2 points — approximately 16% of the brand's total share. Category penetration had dropped by nearly five points, meaning the brand was being bought by substantially fewer households. Distribution had slipped modestly, though this was interpreted as a consequence of the share decline, not a cause. Purchase frequency among remaining buyers was essentially stable, which is consistent with the double jeopardy law (Sharp, 2010): the brand was losing buyers, not seeing its remaining buyers buy less frequently.
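The table's arithmetic makes this checkable. As a back-of-envelope decomposition — assuming brand volume is roughly proportional to penetration times purchase frequency, and ignoring pack-size and category-size effects — the log-change in each component shows how much of the decline each accounts for:

```python
import math

# Year 0 vs Year 3 figures from the table above.
pen0, pen3 = 28.1, 23.3    # category penetration (%)
freq0, freq3 = 4.2, 4.0    # purchase frequency among buyers (x/year)

# Log-changes are additive, so each component's share of the total
# (approximate) volume decline can be read off directly.
d_pen = math.log(pen3 / pen0)
d_freq = math.log(freq3 / freq0)
pen_share = d_pen / (d_pen + d_freq)
```

On these figures, penetration accounts for roughly four-fifths of the combined decline — the double jeopardy signature of buyer loss, not reduced loyalty.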
The organisation was aware of the revenue decline but attributed it to category headwinds, competitive promotional activity, and supply chain disruptions in Year 2. The brand tracker gave no reason to suspect a brand problem. The brand appeared healthy. The business was not.
The Data (After Diagnostic Redesign)
In Year 4, a new CMO commissioned a diagnostic study using a measurement framework aligned with Romaniuk's (2023) Better Brand Health principles. The diagnostic measured what the old tracker did not.
Mental Availability Metrics (Year 4 vs. estimated Year 0 baseline)
| Metric | Year 0 (estimated) | Year 4 |
|---|---|---|
| Mental penetration (% of category buyers who think of BrandM in at least one buying situation) | 58% | 43% |
| Mental market share (BrandM's share of all brand mentions across category entry points) | 16% | 11% |
| Network size (average number of CEPs linked to BrandM per buyer) | 3.1 | 2.2 |
The mental availability decline was severe. Mental penetration had dropped by 15 points. Mental market share had declined by 5 points. Network size had contracted by nearly a full CEP per buyer. BrandM was being thought of by fewer people, in fewer buying situations.
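All three metrics have straightforward operational definitions once you have respondent-level CEP linkage data. A minimal sketch — the respondents, brand names, and CEP labels below are invented, and the definition of network size used here (mean links among buyers with at least one link) is one common convention:

```python
# Toy respondent-level data: for each category buyer, the set of CEPs
# they link to each brand. All names and values are illustrative only.
responses = [
    {"BrandM": {"family", "usual"}, "CompA": {"quick", "usual"}},
    {"BrandM": set(), "CompA": {"quick"}, "CompB": {"treat", "healthy"}},
    {"BrandM": {"family"}, "CompB": {"budget", "promo", "treat"}},
    {"CompA": {"usual", "family", "quick"}},
]

def mental_penetration(brand):
    """Share of category buyers linking the brand to at least one CEP."""
    linked = [r for r in responses if r.get(brand)]
    return len(linked) / len(responses)

def network_size(brand):
    """Mean number of CEP links among buyers with at least one link."""
    sizes = [len(r[brand]) for r in responses if r.get(brand)]
    return sum(sizes) / len(sizes) if sizes else 0.0

def mental_market_share(brand):
    """Brand's share of all brand-CEP links across the category."""
    total = sum(len(ceps) for r in responses for ceps in r.values())
    own = sum(len(r.get(brand, ())) for r in responses)
    return own / total
```

Note that none of these can be computed from an awareness question: they require eliciting brand retrieval against each buying situation, which is exactly what the old tracker never did.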
Category Entry Point Analysis
The diagnostic mapped eight category entry points (CEPs) — the buying situations that trigger category purchase. It measured BrandM's linkage to each CEP among category buyers and compared this to the two leading competitors (Competitor A, the market leader; Competitor B, a challenger brand that had been growing aggressively).
| Category Entry Point | BrandM Linkage | Competitor A Linkage | Competitor B Linkage |
|---|---|---|---|
| "When I want something quick and easy" | 22% | 41% | 34% |
| "When I'm shopping on a budget" | 18% | 28% | 31% |
| "When I want to treat myself" | 9% | 19% | 24% |
| "When I'm buying for my family" | 31% | 38% | 20% |
| "When I see it on promotion" | 15% | 22% | 27% |
| "When I'm buying my usual" | 26% | 44% | 18% |
| "When I want something healthy" | 8% | 12% | 29% |
| "When someone recommends it" | 5% | 11% | 16% |
Two patterns were immediately visible. First, BrandM retained its strongest linkage to "buying for my family" and "buying my usual" — CEPs associated with habitual, established purchase. Second, Competitor B had built substantially stronger linkage than BrandM to the growth CEPs — "treat myself," "something healthy," and "when someone recommends it" — the buying situations growing in prevalence among category buyers. BrandM was weak or absent in every one of them.
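One simple way to surface these patterns from the table is to rank each CEP by BrandM's deficit against whichever competitor is strongest there. A sketch using the figures above (CEP labels abbreviated):

```python
# CEP linkage data from the diagnostic table (% of category buyers
# linking each brand to each CEP); labels shortened for readability.
ceps = {
    "quick and easy":    {"BrandM": 22, "CompA": 41, "CompB": 34},
    "on a budget":       {"BrandM": 18, "CompA": 28, "CompB": 31},
    "treat myself":      {"BrandM": 9,  "CompA": 19, "CompB": 24},
    "for my family":     {"BrandM": 31, "CompA": 38, "CompB": 20},
    "on promotion":      {"BrandM": 15, "CompA": 22, "CompB": 27},
    "my usual":          {"BrandM": 26, "CompA": 44, "CompB": 18},
    "something healthy": {"BrandM": 8,  "CompA": 12, "CompB": 29},
    "recommended":       {"BrandM": 5,  "CompA": 11, "CompB": 16},
}

# Gap = strongest competitor's linkage minus BrandM's, largest first.
gaps = sorted(
    ((max(v["CompA"], v["CompB"]) - v["BrandM"], cep)
     for cep, v in ceps.items()),
    reverse=True,
)
```

Ranked this way, the largest deficits fall on "something healthy," "quick and easy," and "my usual" — a reminder that even BrandM's strongest own linkages can trail badly once the competitive benchmark is applied, rather than the brand's own history.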
Distinctive Asset Assessment
The diagnostic also assessed BrandM's distinctive assets using Romaniuk's (2018) Distinctive Asset Grid.
| Asset | Fame (% who associate it with BrandM) | Uniqueness (% who associate it with BrandM only) |
|---|---|---|
| Logo | 62% | 48% |
| Primary brand colour | 55% | 31% |
| Packaging shape | 41% | 29% |
| Tagline | 23% | 14% |
The tagline, which had been changed twice in the preceding five years, had low fame and low uniqueness — it was not functioning as a distinctive asset. More concerning, BrandM's primary brand colour had lost uniqueness. Competitor B had introduced packaging in a closely adjacent colour palette eighteen months earlier. The result was a distinctiveness encroachment: category buyers exposed to both brands' packaging were increasingly failing to distinguish between them at the shelf. BrandM's advertising was, in effect, partially building memory structures for Competitor B.
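Mapping the four assets onto the grid can be sketched as follows. The quadrant labels follow common usage of Romaniuk's (2018) grid, and the 50% cut-off applied to both axes is an illustrative convention, not a fixed rule:

```python
def grid_quadrant(fame, uniqueness, threshold=50):
    """Classify an asset on a fame x uniqueness grid.
    Labels follow common usage of Romaniuk's Distinctive Asset Grid;
    the 50% threshold is an illustrative convention."""
    if fame >= threshold and uniqueness >= threshold:
        return "use or lose"           # strong: deploy prominently, defend
    if fame >= threshold:
        return "avoid solo use"        # famous but shared with rivals
    if uniqueness >= threshold:
        return "investment potential"  # ownable but not yet famous
    return "ignore or test"            # weak on both dimensions

# Fame and uniqueness figures from the assessment table above.
assets = {"logo": (62, 48), "colour": (55, 31),
          "packaging shape": (41, 29), "tagline": (23, 14)}
quadrants = {name: grid_quadrant(f, u) for name, (f, u) in assets.items()}
```

On these thresholds, only the logo and colour clear the fame bar, and neither is unique enough for confident solo use; the packaging shape and tagline land in the weakest quadrant.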
What Went Wrong
The diagnostic study revealed a clear pattern: the old tracker had been measuring at the wrong level. It measured attitudes (what people say they think about the brand when asked) rather than memory structures (whether the brand comes to mind in buying situations without prompting). It measured awareness — whether people know the brand exists — when it should have measured salience — whether the brand is retrieved at the moments that trigger purchase. It used a composite health index that averaged together stable lagging indicators and concealed the collapse of the one leading indicator (mental availability) that the tracker did not include. And it benchmarked the brand against its own history rather than against the competitive set, producing an illusion of stability in a market where competitors were actively gaining ground.
The data is now in front of you. The question is: how would you have caught this earlier?
The Questions
1. Diagnose the tracker's failures using Romaniuk's (2023) hierarchy of brand health metrics. At which level of the hierarchy was the old tracker operating? Which levels were missing entirely? If you were redesigning the measurement system, which specific metrics would you include in the tracking dashboard (no more than seven), and which would you reserve for on-demand diagnostic studies?
2. Analyse the CEP data. Why is it significant that BrandM retained strong linkage to "buying my usual" and "buying for my family" but weak linkage to "treat myself," "something healthy," and "when someone recommends it"? What does this pattern suggest about the type of buyers BrandM was retaining versus losing? How does this connect to Sharp's (2010) argument that brands grow through penetration, not loyalty?
3. Assess the distinctive asset problem. Using Romaniuk's (2018) Distinctive Asset Grid, map BrandM's four assets. Which are performing, which are vulnerable, and which have effectively failed? What is the strategic risk of colour encroachment by Competitor B, and what should BrandM do about it? Why did changing the tagline twice in five years damage its performance as a distinctive asset?
4. The old tracker showed stable NPS throughout the decline. Why did NPS fail to signal the problem? What does this reveal about the relationship between NPS and actual buying behaviour? Is NPS a tracking metric, a diagnostic metric, or neither?
5. Design a measurement system for BrandM going forward. Specify the tracking dashboard (metrics, frequency, audience) and the diagnostic toolkit (modules, triggers, audience). Be explicit about what you would stop measuring, what you would start measuring, and how you would connect each metric to a specific management decision.
Framework Guide
F5-11 (Brand Health Measurement): This is the primary framework for this case. Apply Romaniuk's (2023) Better Brand Health framework to diagnose what the old tracker missed and design what the new system should measure. Pay particular attention to the distinction between tracking (monitoring vital signs on a regular cadence) and diagnostics (investigating specific questions when triggered). The lecture's hierarchy of metrics — mental availability metrics at Level 1, distinctive asset metrics at Level 2, attitudinal metrics at Level 3, behavioural outcomes at Level 4 — provides the diagnostic structure. Consider also the lecture's discussion of measurement rhythms: which metrics should be measured continuously, quarterly, annually, or on demand?
F5-05 (Mental Availability): The CEP analysis in this case is a direct application of the mental availability framework. Use the concepts of mental penetration, mental market share, and network size to explain why BrandM's business was declining while its tracker scores held steady. The distinction between being known (awareness) and being thought of in buying situations (mental availability) is the central diagnostic insight.
F5-08 (Distinctive Assets): Apply the Distinctive Asset Grid to BrandM's asset portfolio. The tagline data illustrates the cost of asset instability — a point Romaniuk (2018) makes about the time required to build asset fame. The colour encroachment illustrates what happens when distinctive assets are not monitored and defended. Consider the lecture's guidance on which types of assets (colour, shape, logo, character, sonic) are most and least robust to competitive encroachment.
Sources
Romaniuk, J. (2023). Better Brand Health: Measures and Metrics for a How Brands Grow World. Oxford University Press.
Romaniuk, J. (2018). Building Distinctive Brand Assets. Oxford University Press.
Sharp, B. (2010). How Brands Grow: What Marketers Don't Know. Oxford University Press.
Keller, K.L. (2013). Strategic Brand Management: Building, Measuring, and Managing Brand Equity (4th ed.). Pearson.
Reichheld, F.F. (2003). The One Number You Need to Grow. Harvard Business Review, 81(12), 46-54.
Ehrenberg, A.S.C. (1988). Repeat Buying: Facts, Theory and Applications (2nd ed.). Oxford University Press.