
“That which is Measured, Improves”: Balancing Efficiency and Adaptation
“That which is measured, improves.” This simple yet compelling idea guides organizations, governments, and individuals alike. It suggests that once we identify a metric, be it production speed, sales conversions, or website traffic, simply monitoring it can spark improvement. What could be more straightforward? We set a target, watch our progress, and celebrate the gains.
But this logic masks a subtler truth: measurement doesn’t merely reveal reality; it transforms our perceptions of it and reshapes it. When we observe and quantify human behaviour, we don’t just see improvements; we also incentivise adaptations that can sometimes be detrimental.
One need look no further than the Hawthorne effect to understand the power of observation. In classic industrial experiments, whenever workers perceived that they were being monitored, their productivity increased. Similarly, goal-setting theory emphasizes that specific and measurable objectives help people focus, persist, and ultimately perform better. From these perspectives, monitoring is a powerful tool because it clarifies what matters, keeps us accountable, and builds momentum.
Yet the story grows complicated once we consider the observer effect and the problem of incentives. Measurement itself can create perverse incentives. Once we measure something, people tend to change their behaviour to look good by that metric. If a team is judged solely on how many widgets it produces, its members will likely churn out widgets more rapidly. But what if, in the process, they sacrifice quality or ignore the indirect consequences of their actions? The measured improvement might be superficial: an artefact of how the metric changed incentives rather than a sign of genuine progress.
This tension is not new. James C. Scott’s Seeing Like a State chronicles how large-scale efforts to make populations “legible” to governing authorities often produce unintended consequences. He offers the example of the French door-and-window tax, established under the Directory and abolished only in 1917. The tax treated the number of doors and windows in a dwelling as a proxy for its overall size, and therefore for how much to levy. To simplify assessment, tax officials simply counted openings instead of measuring each house.
Although this formula was straightforward and easy to apply, it had far-reaching effects. Peasant families began building and renovating homes with as few openings as possible to minimize their tax burden. This not only harmed residents’ health by reducing ventilation and natural light but also made fires more dangerous, since homes with fewer doors and windows offered limited escape routes, turning a practical tax rule into a hazard to both health and safety. “By a kind of fiscal Heisenberg Principle,” Scott writes, “shorthand formulas through which tax officials must apprehend reality are not mere tools of observation; they frequently have the power to transform the facts they take note of.” In this way, a seemingly sensible metric reshaped behaviour and ultimately did harm, an outcome of shortsightedness rather than sound judgment.
Despite this cautionary tale, the observer effect does not always lead to detrimental outcomes. Under certain conditions, people respond to measurement by finding smarter ways to achieve the underlying objectives. This beneficial outcome, however, requires that the chosen metrics align closely with meaningful goals and respect the complexities of the system being measured. Without such alignment, even creative adaptations can end up being counterproductive.
This reveals the essential challenge: Can we design measurement systems that improve performance without warping it? Goodhart’s Law warns, “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” Marilyn Strathern phrases it even more succinctly: “When a measure becomes a target, it ceases to be a good measure.” The risk is that we get too fixated on the number in the dashboard and lose sight of its underlying purpose.
If we measure classroom success by standardized test scores alone, teachers may focus on teaching students how to do well on tests, neglecting creativity and critical thinking. The metric improves, but the broader goal of cultivating well-rounded learners who genuinely understand the material is sacrificed.
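To see the dynamic in miniature, here is a small, purely illustrative Python sketch. Every quantity in it is invented for the example (the effort budget, the payoff weights, the greedy optimizer); it is not a model of any real classroom. Students split a fixed amount of effort between genuine learning and test-taking tricks, the reported score rewards both, and the underlying understanding rewards only the former. Optimizing the score alone drives effort toward tricks, so the proxy climbs while the thing it was meant to track declines.

```python
# A toy illustration of Goodhart's Law. All numbers are invented for the example.

def score(learning: float, tricks: float) -> float:
    """Observed metric: the test score rewards both real learning and test prep."""
    return 0.6 * learning + 0.9 * tricks


def understanding(learning: float, tricks: float) -> float:
    """True goal: only genuine learning contributes."""
    return 1.0 * learning


def optimize(metric, budget: float = 10.0, steps: int = 50, step_size: float = 0.2):
    """Greedily reallocate a fixed effort budget toward whatever the metric rewards."""
    learning, tricks = budget, 0.0
    for _ in range(steps):
        # Shift a small slice of effort from learning to tricks only if that raises the metric.
        if learning >= step_size and metric(learning - step_size, tricks + step_size) > metric(learning, tricks):
            learning, tricks = learning - step_size, tricks + step_size
    return learning, tricks


if __name__ == "__main__":
    for name, target in [("test score", score), ("understanding", understanding)]:
        l, t = optimize(target)
        print(f"Optimizing {name}: score={score(l, t):.1f}, understanding={understanding(l, t):.1f}")
```

Run it and the two objectives pull apart: optimizing the score produces a high score and no understanding, while optimizing understanding directly produces the reverse. The point is not the particular numbers but the divergence between proxy and goal once the proxy becomes the target.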
So how do we reconcile these tensions? First, we must acknowledge that measurement is not a neutral act but a powerful intervention: it shapes behaviour, sometimes in unpredictable ways. Second, we should be prepared to reconsider. The process shouldn’t be “set and forget”; instead, we should continuously refine our metrics as we observe how people adapt to them. This iterative process might mean adding qualitative measures, seeking input from those being measured, and maintaining humility in the face of complexity.
Finally, trust and transparency are paramount. When those subject to monitoring believe the metrics are fair, meaningful, and open to discussion and revision, they are more likely to respond productively. A similar principle is articulated in the Agile Manifesto, which prioritizes “individuals and interactions over processes and tools.” This does not mean dismissing processes altogether but recognizing that their value lies in how well they support collaboration and shared understanding.
Metrics, too, should function as enablers rather than constraints. When people are included in shaping the systems that measure their performance and feel their input is valued, they are more likely to engage constructively. Conversely, if metrics are perceived as arbitrary or disconnected from reality, they can foster resistance instead of meaningful improvement.
In the end, “That which is measured, improves” contains a kernel of truth, but it can be just as true that “That which is measured, can be distorted.” Measurement can indeed drive accountability and enhance performance, but making the most of this potential requires us to look beyond the surface. By recognizing that measurement changes the game, that incentives shape what we see, and that people may adapt in perverse ways, we can embrace complexity, strive for genuine progress, and measure what truly matters.