Metric Naming Convention: A Practical Mental Model
“Issue rate is up 12%.” It sounded decisive. It looked clean on a slide. But every time that number moved, the room filled with interpretation instead of clarity. Were we getting more issues because usage increased? Were the same customers reporting more often? Did one workflow break, or did lots of small edge cases add up? And most importantly: were these issues even the kind we could do something about?
I learned this lesson the painful way: most “metric debates” aren’t debates about performance. They’re debates about what the metric actually means. I’ve sat in reviews where smart people argued for 20 minutes about a single chart, only to realize we were each carrying a different definition in our heads. The number wasn’t wrong. The name was vague.
That’s why I now treat metric names like contracts. A good name forces clarity before the dashboard ever exists. My personal mental model is simple: Metric Name = [Scope] + [Rate | Count].
Scope answers, “What population are we talking about?” Rate or Count answers, “How are we measuring it?” When you force those two decisions into the name, you eliminate ambiguity and you make ownership obvious.
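To show the shape of that contract, here is a minimal sketch in Python. The `MetricName` class and the scope strings are purely illustrative, not part of any real metrics catalog.

```python
from dataclasses import dataclass
from typing import Literal

# The second half of the name: a rate needs a denominator, a count is raw volume.
Measure = Literal["Rate", "Count"]

@dataclass(frozen=True)
class MetricName:
    scope: str        # what population? e.g. "Issue", "Resolvable Issue"
    measure: Measure  # how is it measured? "Rate" or "Count"

    def __str__(self) -> str:
        # Metric Name = [Scope] + [Rate | Count]
        return f"{self.scope} {self.measure}"

print(MetricName("Issue", "Rate"))              # Issue Rate
print(MetricName("Resolvable Issue", "Rate"))   # Resolvable Issue Rate
print(MetricName("Resolvable Issue", "Count"))  # Resolvable Issue Count
```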
Take a real example: Issue Rate versus Resolvable Issue Rate. “Issue Rate” tells me the overall inflow week over week—everything entering the system. It’s a health signal, and often a product or systemic quality signal. “Resolvable Issue Rate” is deliberately narrower: it isolates the subset of issues that we can actually resolve week over week. That’s the lever. That’s where operational improvement lives. Same structure, different scope, completely different conversation.
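To make the scope difference concrete, here is a toy sketch of the two computations side by side. The record fields and the choice of weekly sessions as the denominator are assumptions for illustration, not fixed definitions.

```python
# Toy data: the `resolvable` flag and weekly session counts are hypothetical.
issues = [
    {"week": "2024-W10", "resolvable": True},
    {"week": "2024-W10", "resolvable": False},  # e.g. an upstream/vendor issue
    {"week": "2024-W10", "resolvable": True},
    {"week": "2024-W11", "resolvable": False},
    {"week": "2024-W11", "resolvable": True},
]
sessions = {"2024-W10": 1200, "2024-W11": 1350}  # the shared denominator

for week, denom in sessions.items():
    all_issues = [i for i in issues if i["week"] == week]
    resolvable = [i for i in all_issues if i["resolvable"]]
    # Same structure, different scope:
    issue_rate = len(all_issues) / denom             # Issue Rate: overall inflow
    resolvable_rate = len(resolvable) / denom        # Resolvable Issue Rate: the lever
    print(week, f"Issue Rate={issue_rate:.2%}",
          f"Resolvable Issue Rate={resolvable_rate:.2%}")
```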
I also pair rates with counts on purpose. Rates help me compare across time, segments, or launches. Counts keep me honest about scale and capacity. But I never let naming drift: if it’s a rate, it must have a stable, documented denominator; if it’s a count, it’s raw volume, with no hand-waving.
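One way to keep that discipline visible is to never report a rate without its pieces. A rough sketch, where the numbers and the denominator definition are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RateReading:
    name: str
    numerator: int        # the count: keeps scale and capacity visible
    denominator: int      # the rate's denominator, never implied
    denominator_def: str  # documented, stable definition of the denominator

    @property
    def rate(self) -> float:
        return self.numerator / self.denominator

reading = RateReading(
    name="Resolvable Issue Rate",
    numerator=38,
    denominator=1350,
    denominator_def="weekly active sessions (W11)",  # hypothetical definition
)
print(f"{reading.name}: {reading.rate:.2%} "
      f"({reading.numerator} of {reading.denominator} {reading.denominator_def})")
```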
This convention has done more than tidy up dashboards. It has reduced meeting noise, made metrics portable across teams, and helped us build clean ladders from broad to specific: Issue Rate → Resolvable Issue Rate → Escalation Rate. Each step narrows scope and sharpens accountability.
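A sketch of what that ladder can look like when each rung gets a scope and an owner. The predicates and owning teams here are hypothetical; the point is the narrowing.

```python
# Toy data again: the flags, session count, predicates, and owners are assumptions.
issues = [
    {"resolvable": True,  "escalated": True},
    {"resolvable": True,  "escalated": False},
    {"resolvable": False, "escalated": False},
    {"resolvable": True,  "escalated": False},
]
sessions = 1350  # shared denominator for every rung of the ladder

ladder = [
    ("Issue Rate",            lambda i: True,                               "Product quality"),
    ("Resolvable Issue Rate", lambda i: i["resolvable"],                    "Support ops"),
    ("Escalation Rate",       lambda i: i["resolvable"] and i["escalated"], "On-call eng"),
]

for name, in_scope, owner in ladder:
    count = sum(1 for i in issues if in_scope(i))
    print(f"{name:<24} count={count:<3} rate={count / sessions:.2%}  owner={owner}")
```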
The principle is simple: you improve what you measure. My add-on is equally simple: you only improve it consistently if everyone agrees on what the metric is, starting with its name.