You're sitting in a leadership meeting and someone cracks out the dashboard. Velocity is up 18% quarter-on-quarter. Story points are tracking. Burn-down looks clean. Everyone nods. You think you're winning.
Meanwhile, your best engineers are quiet in that meeting. They know. Your product sits in the backlog. Your customers are waiting. Your release cycles keep slipping. The teams feel burned out. But the charts say you're flying.
This is what happens when you optimise for metrics instead of outcomes. Vanity metrics don't lie outright, but they obscure the truth so well that your entire engineering operation can drift away from what actually matters while the dashboards say everything is fine.
The velocity trap is real — and it's everywhere
Velocity was invented to help teams forecast, not to measure performance. But somewhere in the last decade, it became performance. Teams learned fast: if velocity matters, we'll increase it. You can do this in roughly 47 ways, none of them involving better software:
Break stories into smaller stories. Inflate estimates on new features. Count refactoring work as delivery. Classify bug fixes as features. Move the story-point goalposts. Pad estimates at planning just to clear the bar. Your teams aren't dishonest; they're rational. They're optimising for exactly what the system measures.
The problem is velocity is a steering metric that became a performance metric. It's like optimising for the dashboard rather than the road. And the smarter your teams are, the better they'll get at this. Your top engineers won't fight you on velocity metrics — they'll just game them and go home frustrated.
Lines of code, commits, and other toxic signals
Some organisations measure developer productivity by lines of code written. Others count commits. Pull request frequency. Keyboard activity (yes, really). These metrics share a beautiful property: they correlate precisely with bad outcomes.
A team that writes a clean 200-line refactor that removes 3,000 lines of technical debt looks lazy. A developer who reduces a data pipeline from O(n²) to O(n) writes fewer lines. A well-designed API that simplifies the codebase by 40% looks like a productivity dip.
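A toy illustration of the point, with hypothetical function names: the faster version is also the shorter one, and a lines-of-code metric would score it worse.

```python
# Toy example: find the values present in both lists.

# O(n^2): 'x in b' rescans the list b for every element of a
def common_quadratic(a, b):
    return [x for x in a if x in b]

# O(n): same result, one set build plus one pass, and fewer lines to boot
def common_linear(a, b):
    seen = set(b)          # set membership checks are O(1) on average
    return [x for x in a if x in seen]
```

Any metric that counts keystrokes rather than outcomes prefers the first function.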
The direction of travel is clear even at the largest companies. Satya Nadella, in internal communications, has repeatedly pushed Microsoft toward measuring team outcomes rather than individual activity. The broader industry data points the same way: productivity metrics at the individual level are not predictive of business value, retention, or performance, and they actively harm all three.
What actually matters: The signals your teams are already sending
Your teams know what's working. They can tell you the impediments. They understand where the friction lives. The issue is that your metrics aren't listening to them.
Here's what the top-quartile companies measure instead: Do teams finish what they plan? Are releases predictable? When a team identifies an impediment, does it get resolved? How long between "we wrote it" and "it's in production"? Do people want to stay? Are they learning?
"Top-quartile tech companies achieve 35% faster revenue growth. They don't achieve this because they write more code or move faster. They achieve it because they move predictably, they remove impediments faster, and their teams stay."
McKinsey State of Tech 2024
These signals correlate with business outcomes. Predictability reduces risk. Impediment resolution rate predicts team health and delivery. Retention predicts institutional knowledge and continuity. Cycle time, the actual time from committed to delivered, tells you something real about your organisation's friction and flow.
The real cost of gaming the metrics
When teams learn that velocity matters, a few things happen. First, estimates drift. They stop reflecting reality. Capacity planning becomes fiction. You run sprints that fail more often but look better on the dashboard.
Second, technical debt accelerates. If story points are the game, shipping faster wins. Shortcuts compound. Your system gets more fragile. The debt becomes a drag on every team, but the metrics don't capture it until you hit a cliff and wonder why.
Third — and this is the brutal one — your best engineers leave. Not because they don't care about delivery. Because they care about it and the system doesn't. They see the friction, they see the waste, they see the metrics lying, and they see that leadership is optimising the dashboard instead of the delivery. Senior engineers don't stay long in organisations like that.
How to stop measuring the wrong thing
First: kill individual productivity metrics. Today. Your engineers will feel it immediately. The relief alone will change your culture.
Second: measure predictability. Did the team finish what it committed to? Over 8-12 weeks, what is the actual completion rate? This is harder to game (though people will try), and it genuinely supports business planning and leadership confidence.
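A completion rate like this is a few lines to compute. The sketch below assumes a simple per-sprint record of points committed versus points completed; the field names and numbers are invented for illustration.

```python
# Hypothetical sprint records: points committed at planning vs. points done.
sprints = [
    {"committed": 30, "completed": 24},
    {"committed": 28, "completed": 27},
    {"committed": 32, "completed": 25},
    {"committed": 30, "completed": 29},
    {"committed": 26, "completed": 26},
    {"committed": 31, "completed": 22},
]

def completion_rate(window):
    """Share of committed work actually finished across a run of sprints."""
    committed = sum(s["committed"] for s in window)
    completed = sum(s["completed"] for s in window)
    return completed / committed

# Trailing rate over the last 6 sprints (roughly 8-12 weeks of 2-week sprints)
rate = completion_rate(sprints[-6:])
print(f"Trailing completion rate: {rate:.0%}")
```

Aggregating over a trailing window, rather than judging single sprints, is what makes the number useful for planning: one bad sprint is noise, a trend is signal.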
Third: measure cycle time. Code committed to production. If you're at 48+ hours, you have friction. If you're at 2+ weeks, you have serious problems. This metric reveals impediments, bottlenecks, and broken processes in ways velocity never will.
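Cycle time is just as cheap to measure if you keep commit and deploy timestamps. A minimal sketch, with invented timestamps and a simple nearest-rank percentile; the p50/p90 split matters because outliers are usually where the impediments hide.

```python
from datetime import datetime

# Hypothetical (committed_at, deployed_at) pairs; the data is invented.
changes = [
    ("2024-05-01 09:00", "2024-05-01 16:30"),
    ("2024-05-01 11:00", "2024-05-03 10:00"),
    ("2024-05-02 14:00", "2024-05-02 18:00"),
    ("2024-05-03 08:00", "2024-05-10 09:00"),  # the outlier that hurts p90
]

def cycle_times_hours(pairs):
    """Hours from commit to production for each change, sorted ascending."""
    fmt = "%Y-%m-%d %H:%M"
    return sorted(
        (datetime.strptime(done, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600
        for start, done in pairs
    )

def percentile(sorted_vals, p):
    """Nearest-rank percentile; good enough for a health check."""
    idx = max(0, round(p / 100 * len(sorted_vals)) - 1)
    return sorted_vals[idx]

times = cycle_times_hours(changes)
print(f"p50: {percentile(times, 50):.1f}h, p90: {percentile(times, 90):.1f}h")
```

A median under a day with a p90 in the weeks is a different problem from both numbers being high: the first is a few stuck changes, the second is systemic friction.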
Fourth: listen to your teams directly. Use retrospectives not as a checkbox but as a data collection engine. What's slowing us down? What's got better? What do we actually need to improve next? Then measure whether you acted on it.
Stop measuring effort. Start measuring outcome. The difference is that outcome metrics are actually aligned with what your business needs and what your teams believe in.