In analytics, we always strive for apples-to-apples comparisons. For example, if you say that turnover in Department A is bad compared to turnover in Department B, that insight loses credibility if the jobs in Department A are call-center operators and those in Department B are accountants; the roles simply are not comparable.
In people analytics, numbers rarely have meaning without some comparator. Whether we are talking about turnover or engagement or compensation, we almost always want to say, “Our number is higher [or lower] than some relevant comparator.”
The trouble is that tidy apples-to-apples comparisons are hard to find. Every department will have unique features that call into question any comparison.
Even comparing this year’s metrics to last year’s may not be apples to apples because a great deal may have changed in the interim. You might say, “The data shows lateness is up 15% compared to last year, which is bad.” But a manager might respond, “Yes, but we changed location so that uptick is to be expected. We are actually doing well, all things considered.”
What to Do About the Lack of Good Comparators
When there are not good comparators, the first instinct of an analytics pro is to try to adjust for the confounding factors. In sales analysis, we never compare sales in March to sales in November without adjusting for seasonality. In people analytics, we can try to do the same sort of thing, adjusting the numbers to account for the differences between different groups.
The trouble with these adjustments is that they move us further and further away from easy-to-understand hard data. We end up comparing two numbers (say, turnover in Department A versus Department B) and then appending a dozen footnotes explaining each adjustment we made to approximate an apples-to-apples comparison. And how we handle each adjustment is open to debate, or even manipulation.
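To make the idea of adjustment concrete, here is a minimal sketch of normalizing turnover against a job-family benchmark, so call-center roles are not judged against accountant norms. All numbers, department names, and benchmark rates below are invented for illustration; they are not from the article.

```python
# Hypothetical annual turnover by department, with each department's
# dominant job family (all figures are made up for illustration).
departments = {
    "A": {"turnover": 0.30, "job_family": "call_center"},
    "B": {"turnover": 0.12, "job_family": "accounting"},
}

# Assumed benchmark turnover per job family (also invented).
benchmarks = {"call_center": 0.35, "accounting": 0.10}

# Ratio of observed turnover to the job-family benchmark:
# values below 1.0 mean the department beats its family's norm.
adjusted = {
    name: d["turnover"] / benchmarks[d["job_family"]]
    for name, d in departments.items()
}

for name, ratio in sorted(adjusted.items()):
    print(f"Department {name}: {ratio:.2f}x its job-family benchmark")
```

On these made-up numbers, Department A's raw turnover (30%) looks far worse than B's (12%), yet after adjustment A sits below its benchmark while B sits above its own. This is exactly the point: the adjusted comparison tells a different story, but every choice of benchmark is itself a judgment call that can be debated.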
Yes, we should seek the best comparators we can, and yes, where possible we should adjust for factors that make for a fairer comparison. But the process is always fraught with difficulty.
The takeaway is that leadership wants apples-to-apples comparisons and often leans toward believing they are possible, but much of the time they are not. You may be asked, “Show me which programming team has the best productivity,” which seems a fair question. Yet once you get into the details of how productivity is assessed and how programming projects differ, the shortage of apples-to-apples comparisons means there is no simple answer to the simple question.
What we do in practice (the only thing we can do in practice) is work with the best comparator we can find, adjust for, or at least make note of, any big differences between comparators, and then use our judgment to interpret the results.
We have to educate leaders that judgment will always be a big factor in people analytics, and there is no getting away from that. Data informs judgment — it does not make the decision for you.