
Whatever workplace you are in, at least in the corporate world, at least once a year you will find yourself either being rated (usually by your boss) or being the rater (if you are the boss). These ratings can also happen between team members, often at the request of the HR department. Either way, this is about the most impossible thing one human can do to another. Anyone in HR who believes such ratings will ever produce accurate results is forever wrong: in most cases good people end up with bad scores, because the raters are simply disconnected from reality and their ratings rest on the wrong data. And to any HR partner, team leader, or manager of whatever rank who still thinks they do a good job at this, let me ask the following questions:
- How much do you think you can know about a person simply by watching him?
- If you work with him every single day, do you think you can figure out what drives him?
- Could you spot enough clues to reveal to you whether he’s competitive, or altruistic, or has a burning need to cross things off his list every day?
- How about his style of thinking? Are you perceptive enough to see his patterns and pinpoint that he is a big-picture, what-if thinker, or a logical, deductive reasoner, or that he values facts over concepts?
- And could you parse how he relates to others, and discern, for instance, that he’s far more empathetic than he appears, and that deep down he really cares about his teammates?
Perhaps you can answer objectively. Perhaps you are one of those people who instinctively pick up on the threads of others’ behaviors and then weave these into a detailed picture of who a person is and how he moves through the world. Certainly, the best team leaders seem able to do this. They pay close attention to the spontaneous actions and reactions of their team members, and figure out that one person likes receiving praise in private, while another values it only when it’s given in front of the entire team; that one responds to clear directives, while another shuts down if you even appear to be telling him what to do. They know that each member of their team is unique, and they spend a huge amount of time trying to attend to and channel this uniqueness into something productive. So then I keep asking:
- How about rating your team, though? Do you think you could accurately give your team members scores on each of their characteristics?
- If you surmise that one of your team is a strategic thinker, could you with confidence choose a number to signify how good at it he actually is?
- Could you do the same for his influencing skills, or his business knowledge, or even his overall performance?
- And if you were asked how much of these things he had in relation to everyone else on the team, do you think you could weigh each person precisely enough to put a number to each person’s relative abilities?

This might sound a bit trickier – you’d have to keep your definition of influencing skills stable, even while judging each unique person against that definition. But if I gave you a scale of 1 to 5, with detailed descriptions of the behaviors associated with each number on the scale (a sketch of such a scale follows these questions), then:
- Do you think you could use that scale fairly, and arrive at a true rating?
- And even if you are confident in your own ability to do this, what do you think about all the other team leaders around you? Do you think they would use the scale in the same way, with the same level of objectivity and discernment as you?
- Or would you worry that they might be more lenient graders, and so wind up with higher marks for everyone, or that they might define “influencing skills” differently from you?
- Do you think it’s possible to teach all of these team leaders how to do this in exactly the same way?
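For concreteness, a behaviorally anchored 1-to-5 scale of this kind amounts to little more than a lookup from number to behavior description. Here is a minimal sketch in Python; the anchor texts are invented purely for illustration, not taken from any real organization’s scale:

```python
# A hypothetical behaviorally anchored scale for "influencing skills".
# The anchor descriptions are invented; real scales are written by each
# organization and, as the questions above suggest, are read differently
# by each rater.
INFLUENCING_SKILLS_SCALE = {
    1: "Rarely persuades others; arguments lack structure.",
    2: "Occasionally persuades immediate peers, with prompting.",
    3: "Regularly wins support within the team.",
    4: "Builds coalitions across teams to move work forward.",
    5: "Shapes opinion at the organizational level.",
}

def anchor_for(rating: int) -> str:
    """Return the behavior description anchored to a 1-5 rating."""
    return INFLUENCING_SKILLS_SCALE[rating]
```

Writing the anchors down is the easy part; the questions above are about whether two leaders reading the same anchor text will ever apply it the same way.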
It’s a lot to keep straight – so many different people rating so many other different people on so many different characteristics, producing torrents of data. But keep it all straight we must, because this data represents people, and once collected, it comes to define how people are seen at work.
At least once a year, a number of your more senior colleagues will gather together in a room to discuss you. They will talk about your performance, your potential, and your career aspirations, and decide on such consequential issues as how much bonus you should get, whether you should be selected for a special training program, and when or if you should be promoted. This meeting, as you might know, is called a “talent review”, and virtually every organization conducts some version of it. The organization’s interest is in looking one by one at its people – its talent – and then deciding how to invest differentially in those individuals. The people who display the highest performance and potential – the stars, if you like – will normally get the most money and opportunity, while those further down the scale will get less, and those struggling at the lower end of the scale will more than likely be moved onto a euphemistically named Performance Improvement Plan (P.I.P.) and thereby eased out.

These talent reviews are the mechanism that organizations use to manage their people. They want to keep the best people happy and challenged, and simultaneously weed out those who aren’t contributing. Since, in most organizations, the largest costs are people’s wages and benefits, these meetings are taken very seriously, and the most pressing question – a central preoccupation of all senior leaders in all large organizations – is, “How can we make sure that we are seeing our people for who they really are?”
This is a wake-up-in-the-middle-of-the-night sort of question for senior leaders, because they worry that their team leaders might not, in fact, understand the sort of person the organization needs nearly as clearly as the senior leaders do, and further that the team leaders might not be objective raters of their own people. To combat this worry, companies have set up all sorts of systems designed to add rigor to this review process. The one you may be most familiar with is the nine-box grid.

This is a graph showing performance along the x-axis and potential up the y-axis, with each axis divided into thirds – low, medium, and high – to create nine possible regions. Each team leader is asked to think about each person on his or her team and then place them, in advance of the talent review, into one of the nine boxes – to rate them, that is, on both their performance and their potential. This system is designed to allow a team leader to highlight that a particular person might have bags of potential, and yet not have translated that potential into actual performance, whereas another team member might contribute top-notch performance, and yet have very little potential upside – he’s maxed out in his current position. With this data displayed in the talent review, the leadership team can define different courses of action for each person: the former will be given more training and more time, for example, while the latter might just be offered a healthy bonus.
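Mechanically, the placement itself is trivial; the hard part, as the rest of this piece argues, is producing the two numbers that feed it. Here is a minimal sketch in Python, assuming each axis has already been scored on a 1-to-5 scale (the cut-offs are invented, purely for illustration):

```python
def nine_box(performance: float, potential: float) -> tuple[str, str]:
    """Place a person into one of the nine boxes.

    Both inputs are assumed to be 1-5 ratings already produced by a
    team leader; the thresholds below are arbitrary illustrations,
    not any vendor's standard.
    """
    def third(score: float) -> str:
        if score < 2.5:
            return "low"
        if score < 4.0:
            return "medium"
        return "high"

    return (third(performance), third(potential))


# A person rated 4.2 on performance but 2.1 on potential lands in the
# (high performance, low potential) box: a strong contributor with
# little upside in the current role.
print(nine_box(4.2, 2.1))  # -> ('high', 'low')
```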
Historically, the talent review has happened only once or twice a year. With the arrival of smartphones, it’s now technologically possible for an organization to launch short performance-ratings surveys throughout the year. Each person can be rated by their peers, direct reports, and bosses, and then the scores can be aggregated either at mid-year or at year’s end to produce a final performance rating. This race to real-time ratings appears as inevitable as it is frenzied, and all of it is in service of the organization’s interest, which is to answer the question, “When it comes to our people, what do we really have here?”
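The aggregation these always-on tools perform is usually nothing more exotic than a weighted average of the collected scores. A minimal sketch, with entirely invented groups, weights, and ratings (real vendor formulas vary and are typically proprietary):

```python
from statistics import mean

# Hypothetical in-year ratings for one person, as a real-time survey
# tool might collect them over several months. The groups, scores,
# and weights are all invented for illustration.
ratings = {
    "peers":          [3, 4, 3, 5],
    "direct_reports": [4, 4, 5],
    "boss":           [3],
}
weights = {"peers": 0.3, "direct_reports": 0.3, "boss": 0.4}

# Year-end rating: weight each rater group's average, then sum.
final_rating = sum(weights[g] * mean(scores) for g, scores in ratings.items())
print(round(final_rating, 2))
```

The arithmetic is the easy part; whether the scores going in mean anything is the question the rest of this piece answers in the negative.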

You will come to wonder about these rating scales, these peer surveys, and these always-on 360-degree apps, and you will hope that there is enough science in them, enough rigor and process, that you – ideally, the best of you – will be portrayed accurately. After that, let the chips fall where they may. At least, then, you will have been given a fair hearing on your true merits as a person, and as a team member. It is going to bother you greatly to learn, then, that in the real world, none of this works. None of the mechanisms and meetings – not the models, not the consensus sessions, not the exhaustive competencies, not the carefully calibrated rating scales – none of them will ensure that the truth of you emerges in the room, because all of them are based on the belief that people can reliably rate other people. And they can’t. This, in all its frustrating simplicity, is a lie.

It’s frustrating, because it would be so much more convenient if, with enough training and a well-designed tool, a person could become a reliable rater of another person’s skills and performance. Think of all the data on you we could gather, aggregate, and then act on! We could precisely peg your performance and your potential. We could accurately assess your competencies. We could look at all of these and more through the eyes of your bosses, peers, and subordinates. And then we could feed all this into an algorithm, and out would come promotion lists, succession plans, development plans, nominations for the high-potential program, and more. But none of this is possible, despite the fact that many human capital software systems claim to do exactly what’s described above. Outside of the handful of jobs whose output can simply be counted, for any other work – which means most work – we have no way of knowing what drives performance, because we have no reliable way of measuring performance. We don’t know:
- whether bigger teams drive performance more than smaller teams.
- whether remote workers perform better than colocated workers.
- whether culturally more diverse teams are higher performing than less diverse ones.
- whether contractors are higher performers than full-time employees, or if it’s the other way around.
- or even whether our investments in the training and development of our employees lead to greater performance.
We can’t say anything about any of these things, precisely because we have no reliable way to measure performance.