Key Performative Indicators and the Cowardice of Convenient Data
In a world overflowing with dashboards, metrics, and reports, it’s easy to mistake activity for achievement. But what if the real breakthroughs—the ones that drive our teams, products, and organizations forward—are waiting just beyond the glow of the familiar?
In many companies, activity masquerades as progress. We have become skilled at generating reports, populating dashboards, and tracking metrics that create a compelling illusion of control and forward momentum. These are performative indicators: metrics that are easy to measure and present, chosen not for their strategic value but for the story they allow us to tell.
This reliance on easily gathered metrics stems from a deeper issue: the cowardice of convenient data. It is an organisational reluctance to engage with ambiguous, complex, and often qualitative information that holds the real insights. Instead, we retreat to the safety of spreadsheets and charts, measuring what is simple rather than what is important. This practice doesn't just misdirect effort; it actively prevents us from making the brave decisions that real progress requires.
The Scene: Under a Lamppost - Searching Where the Data Is Brightest
There is an old parable, popularised by the philosopher Abraham Kaplan, about a man searching for his keys under a lamppost. When a passerby asks if he is sure he lost them there, the man replies, "No, I lost them in the park. But the light is much better over here."
This is the daily reality in many businesses. We search for answers not where they are most likely to be found, but where the data is most readily available.
- We analyse the minutiae of existing customer clicks and usage patterns because the data is abundant and clean. This comes at the expense of the much harder work of talking to potential future customers about unsolved problems and market gaps—operating in the darkness to find new opportunities.
- We celebrate teams for shaving a few percentage points off a cloud computing bill—a tangible, well-defined task with a clear metric. We are less willing to invest the same effort in market-expanding ideas, where the outcomes are less certain and the data is predictive, not historical.
This behaviour is a form of risk management, but it is not strategy. It optimises for certainty and internal activity over potential and external impact, choosing the convenience of the lamppost over the value hidden in the park.
Act I: Developer 'Productivity'
Consider the persistent effort to measure developer productivity. The easiest things to measure are outputs: lines of code written, reviews conducted, number of commits, or pull requests merged. These metrics are tangible, readily available, and give the impression of activity (and entire software categories exist to service this appetite), but they are insufficient at best and dangerously misleading at worst.
A team that merges dozens of small, trivial changes looks busier than a team that delivers a single, complex feature that unlocks significant customer value.
This approach measures activity, not progress. As I've discussed previously regarding the Theory of Constraints, the goal of any system is to increase its overall throughput of value. In software engineering, the bottleneck is rarely the speed of typing; it is the cognitive load of architectural design, quality assurance, and system integration. Focusing on activity can incentivise behaviour that clogs the system's actual constraints, leading to more work-in-progress, higher review burdens, and ultimately, a slower delivery of value.
The real measures of engineering effectiveness are outcomes: the throughput of reliable and valuable features, system stability, and the mean time to recovery after an incident. These are harder to quantify, but they are what actually matter to the business.
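Outcome metrics like these are harder to gather, but not hard to compute once the underlying events are recorded. As a minimal sketch, assuming a hypothetical incident log of (detected, resolved) timestamp pairs, mean time to recovery is just the average of the recovery durations:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident log: (detected_at, resolved_at) pairs.
# These timestamps are invented for illustration.
incidents = [
    (datetime(2024, 3, 1, 9, 15), datetime(2024, 3, 1, 10, 5)),   # 50 min
    (datetime(2024, 3, 8, 22, 40), datetime(2024, 3, 9, 1, 10)),  # 150 min
    (datetime(2024, 3, 20, 14, 0), datetime(2024, 3, 20, 14, 35)),  # 35 min
]

def mean_time_to_recovery(incidents):
    """Mean time to recovery, in minutes, across resolved incidents."""
    durations = [
        (resolved - detected).total_seconds() / 60
        for detected, resolved in incidents
    ]
    return mean(durations)

print(f"MTTR: {mean_time_to_recovery(incidents):.1f} minutes")
```

The arithmetic is trivial; the organisational courage lies in recording honest incident timestamps in the first place, rather than only the flattering output counts.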
Act II: Product Analytics
A similar pattern emerges in product management. The rise of sophisticated analytics tools allows us to track every click, hover, and interaction within an application. In theory, this data should provide unparalleled insight into user behaviour. In practice, it often creates an abyss of information that is rarely acted upon.
Teams spend weeks instrumenting new features, building elaborate dashboards to monitor engagement. A feature launches, and the dashboard shows that 8% of users have clicked the new button. What decision does this data point drive? Is 8% good or bad? Does it mean the feature is a failure, or a niche success?
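Even before asking whether 8% is good or bad, the raw number carries statistical uncertainty that dashboards rarely surface. As a sketch with invented figures (40 clicks out of 500 active users), a Wilson score interval shows how wide the plausible range around that headline 8% really is:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Hypothetical numbers: 40 of 500 active users clicked the new button.
low, high = wilson_interval(40, 500)
print(f"8% click-through, 95% CI: {low:.1%} to {high:.1%}")
```

With these numbers the true rate could plausibly be anywhere from roughly 6% to 11%, and even a perfectly precise figure would still not answer the question that matters: why users did or didn't click.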
The data tells us what is happening, but it seldom tells us why. The pursuit of quantitative perfection can lead us to neglect the fastest path to learning: qualitative feedback. A fifteen-minute conversation with five actual users will almost always yield more actionable insight than a dashboard tracking thousands of data points.
The measurement here becomes a crutch, an excuse to avoid the difficult work of engaging directly with customers and forming a hypothesis that can be tested through experimentation, not just passive observation.
The dashboard is the lamppost; the customer conversations are in the dark.
Act III: Watermelon Projects and the Danger of the "All-Green" Status Report
- The Performative Indicator: Project Milestones Completed. Project status is often reported as "Green" based on a checklist of completed activities: "design document signed off," "kick-off meeting held," "server provisioned."
- The Cowardice of Convenient Data: It is far more comfortable for a project manager to report progress based on activity than to flag a fundamental, underlying risk or admit that a core assumption has been invalidated. The meaningful metric isn't the completion of tasks, but the continuous Validation of Business Value and Mitigation of Critical Risks.
- The Dysfunction: This leads to "Watermelon Projects": Green on the outside in every status report, but deeply Red on the inside. Executives are given a false sense of security, believing everything is on track, and the project then collapses suddenly near the deadline, when an unaddressed fundamental issue can no longer be ignored. The performance of reporting "Green" has replaced the actual work of managing the project's health. Watermelon projects typically arise from a confluence of factors within an organisation. A primary cause is a culture of fear or blame, where project managers hesitate to report problems for fear of negative repercussions. This "shoot the messenger" environment incentivises teams to maintain a facade of success.
Act IV: The KPI Compensation Cycle
Nowhere is the preference for convenient data more entrenched than in leadership compensation. The need to establish Key Performance Indicators (KPIs) for department heads often triggers a predictable and wasteful cycle that prioritises measurement over results.
- Invest in Tooling: The process begins with a well-intentioned investment in a new analytics module or business intelligence platform, promising a single source of truth. (See also the Cottage Industry in Spreadsheets)
- Reverse-Engineer Compensation: Leaders, now tasked with setting their KPIs, examine the new tool's capabilities. Instead of starting with the strategic question "What outcome does the business need?", they ask, "Of the things this tool can easily measure, which metrics can I use for my compensation?" Executive goals are thus defined by the limitations of the tool, not the needs of the business.
- Rebuild for Nuance: Within a year, it becomes clear these convenient metrics are being gamed or are driving the wrong behaviour. A scramble ensues to rebuild dashboards and data pipelines to capture more nuanced, meaningful measures.
- The Baseline Delay: With a new set of metrics, the clock resets. The next two quarters are spent "establishing a new baseline," a period during which accountability is conveniently suspended. This process can be repeated indefinitely.
- The Technology Treadmill: By the time a stable baseline is established for the "correct" metrics, the underlying technology is often obsolete or business priorities have shifted, triggering a new investment and restarting the entire cycle.
This entire loop is a form of institutionalised performance. It creates the appearance of rigorous, data-driven management while actively delaying accountability and consuming vast resources.
From Measurement to Experimentation
Breaking free from the measurement trap does not mean abandoning data. It means reorienting our approach to place data in service of decision-making, not in place of it. This requires a fundamental shift in mindset.
- Frame the Decision First. Before asking "What can we measure?", ask "What is the most important decision we need to make?" This simple reframing focuses effort. The goal is not to produce a report; it is to gain the minimum information necessary to make a confident, reversible decision.
- Favour Experimentation Over Exhaustive Analysis. When faced with uncertainty, the default should not be to gather more data, but to run a small, low-cost experiment. For example, a team debating two competing technical architectures could spend six weeks writing detailed analysis documents and benchmarking theoretical performance. Alternatively, they could spend one week building a crude prototype of the riskier option. The prototype—the experiment—will almost certainly provide a clearer answer, faster. It replaces analysis with learning through action.
- Measure Outcomes, Not Outputs. Leaders must be ruthless in aligning metrics with business outcomes. Instead of tracking "features shipped," track "customer adoption rate" or "reduction in support tickets." Instead of "sales calls made," track "pipeline velocity" or "customer lifetime value." This shifts the focus from being busy to being effective.
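The contrast in that last point can be made concrete. In this sketch, with invented figures for two hypothetical teams, the output metric (features shipped) and the outcome metric (adoption rate) tell opposite stories:

```python
# Hypothetical quarterly data for two teams; all figures are invented.
teams = {
    "Team A": {"features_shipped": 24, "active_users": 10_000, "adopters": 900},
    "Team B": {"features_shipped": 3, "active_users": 10_000, "adopters": 3_200},
}

def adoption_rate(stats):
    """Share of active users who adopted the team's new work."""
    return stats["adopters"] / stats["active_users"]

for name, stats in teams.items():
    print(f"{name}: {stats['features_shipped']} features shipped, "
          f"{adoption_rate(stats):.0%} adoption")
```

By the output metric, Team A looks eight times more productive; by the outcome metric, Team B delivered far more value. Which team a leader rewards reveals which lamppost they are standing under.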
Final Act
"Not everything that counts can be counted, and not everything that can be counted counts." William Bruce Cameron. (Not Albert Enistein!)
Data is essential. Measurement is a vital tool for any well-managed organisation.
I've warned here of overusing convenient metrics, but many fields depend on rigorous, quantitative measurement. In areas critical to public safety and welfare—such as aviation, healthcare, and environmental monitoring—precise data is non-negotiable for preventing catastrophic failures. Similarly, regulated industries like finance, along with scientific research and manufacturing, rely on exact metrics to ensure compliance, maintain quality control, and guarantee the integrity of their results.
But when the process and theatre of measurement become a shield against the risk of decision-making, they turn into a liability, creating the illusion of progress while fostering a culture of inaction.
The most effective leaders I have worked with do not wait for perfect, complete data. They understand that business happens under conditions of uncertainty. They use the best available information to form a hypothesis, test it with decisive action, and then use the results of that action—the ultimate metric—to inform their next move.
The challenge is to cultivate this courage within our teams: the courage to step away from the comforting glow of convenient data and into the shadows where real opportunities lie.
Assess your own metrics: are they driving real value? Need help?