How Optimising for Short-Term Metrics Causes Long-Term Harm

strategy technology ethics systems-thinking

Lucas Pierce’s post, Short-term metrics, long-term harm, analyses how optimising for short-term metrics leads to negative long-term consequences, even when builders have no malicious intent.

The Optimisation Engine

The engine driving this optimisation is the A/B test, which allows companies to measure a change’s impact on a target metric like ‘time spent’. Instead of debating a feature’s merit, teams can simply test it. As Pierce notes, “It no longer matters why it works, just that you can prove it does work.”
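To make that decision rule concrete, here is a minimal sketch of such a test in Python. It is not from Pierce’s post; the metric, the sample data, and the significance threshold are all hypothetical, and real experimentation platforms are far more elaborate. The point is that the shipping decision never has to ask why the metric moved.

```python
# Minimal sketch of an A/B test on a single engagement metric.
# The numbers, metric name, and significance threshold are illustrative only.
from scipy import stats

# Daily minutes of "time spent" per user in each arm (hypothetical data).
control = [31.2, 28.4, 35.0, 29.8, 33.1, 30.5, 27.9, 32.4]
variant = [36.1, 33.8, 38.2, 34.5, 37.0, 35.2, 33.1, 36.8]

# Welch's t-test: did the variant move the metric?
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)

lift = (sum(variant) / len(variant)) / (sum(control) / len(control)) - 1
print(f"lift: {lift:+.1%}, p-value: {p_value:.3f}")

# Ship if the metric goes up and the result is significant.
# Note that nothing in this rule asks *why* the metric went up.
if lift > 0 and p_value < 0.05:
    print("ship it")
```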

The relentless pursuit of engagement has led to features that exploit human psychology, such as variable reward schedules, social validation, and infinite scroll.

The process requires no more intent than natural selection does. It’s just thousands of little experiments, with the most compulsive features surviving because they satisfy a simple fitness function: does time spent go up?
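A toy simulation can make the natural-selection analogy concrete. This is not from Pierce’s post, and the random “effect” values simply stand in for real experiment outcomes; the point is that a loop with no designer and no intent still ratchets the metric upwards, because only metric-increasing changes survive.

```python
# Toy model of metric-driven selection: no one intends compulsion,
# but only changes that raise "time spent" get shipped. All values are made up.
import random

random.seed(0)
time_spent = 30.0   # baseline minutes per user per day (hypothetical)
shipped = 0

for experiment in range(1000):
    # Each candidate change has some effect on the metric; we never ask why.
    effect = random.gauss(0, 1.0)
    if effect > 0:  # the only fitness function: does the metric go up?
        time_spent += effect
        shipped += 1

print(f"experiments run: 1000, changes shipped: {shipped}")
print(f"time spent drifted from 30.0 to {time_spent:.1f} minutes")
```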

Pierce warns that this pattern is repeating with Large Language Models (LLMs). When optimised for engagement, chatbots learn to tell users what they want to hear, not what is accurate. This encourages sycophantic behaviour and creates personalised echo chambers.
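A deliberately simplified sketch shows how that incentive plays out. The candidate replies and scores below are invented, and this is not how any real system is implemented; it only illustrates that if responses are ranked purely on predicted engagement, the agreeable answer wins even when it is wrong.

```python
# Toy illustration: ranking candidate chatbot replies purely on a
# predicted-engagement score. The candidates and scores are invented.
candidates = [
    {"reply": "You're right, that plan sounds great.",
     "accurate": False, "predicted_engagement": 0.92},
    {"reply": "Actually, the evidence points the other way...",
     "accurate": True, "predicted_engagement": 0.41},
]

# The selection criterion mirrors the engagement objective:
# nothing here rewards accuracy.
best = max(candidates, key=lambda c: c["predicted_engagement"])
print(best["reply"])  # the sycophantic reply is chosen
```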

This creates a direct trade-off between easily measured metrics and harder-to-quantify user well-being.

| Short-Term Metric | Potential Long-Term Harm |
| --- | --- |
| Increased time spent | Fosters addiction; reduces well-being. |
| More likes & comments | Exploits need for social validation; can harm mental health. |
| Higher post frequency | Encourages performative behaviour over genuine connection. |
| Engaged chatbot sessions | Reinforces biases and echo chambers; promotes sycophantic responses over accuracy. |

Systemic Problems Require Systemic Solutions

These outcomes are not the result of “evil” people; they are the consequence of a system that rewards short-term growth. Pierce argues for fixing the system rather than blaming individuals, using root-cause analysis such as the 5 Whys.[1]

The article concludes with a call for professional responsibility. Builders must actively investigate and mitigate the harms of their products, rather than deferring to “consumer choice.”

If you build a product, you are responsible for understanding its long-term impact on users… the burden should be on the builder of the product proving their product isn’t harmful, and mitigating what harm they do discover.

This challenges the industry to implement mechanisms that prevent known patterns of harm from repeating.


  1. The 5 Whys is an interrogative technique used to explore the cause-and-effect relationships underlying a problem. By repeatedly asking “Why?”, teams can move past surface-level symptoms to identify a systemic root cause. You can read more about it at interaction-design.org. ↩︎
