The Rater Effect In Performance Assessments
It is fascinating to see how people rate themselves and others under different psychological contexts. Dan White, director of Ezra's Impact Lab, explains how assessment results depend not only on who and what you ask, but also on how you ask.
When you conduct a 360-degree feedback report, who do you expect to be the toughest raters?
The person who is the focus of the feedback (self), the line manager, direct reports, or possibly peers?
We were discussing this in the Ezra Impact Lab a little while ago and we all had different answers: some felt self, others thought line manager or peers. It felt odd that we would reach different conclusions, so we took a little time to look at the data.
This matters because many organisations make significant investments of time and money in 360 feedback exercises as part of performance coaching. We would contend that this investment is a highly valuable one, of course, but when you’re rolling out a programme like this, what kind of impact do you want it to have?
It turns out you can, to an extent, influence whether the self-score comes out relatively high or low compared to the scores from other respondents (more senior stakeholders, peers and direct reports). Consider: do you want people to receive 360 reports where, for the most part, they are told they are better at things than they rated themselves to be, or worse? Each is likely to drive a quite different reaction.
Imagine receiving a report where, on the whole, you are told your self-perception is overly generous and others see you as less capable than you think. This is likely to be taken more seriously than a report telling you that you are overly self-critical and others rate you better than you rate yourself. The reaction to the latter may be less serious, but it is probably more positive, at least in the short term.
So how do you influence this?
Well, let’s return to our disagreement about who the toughest respondents are. We had a good look at the data, and the answer we found is interesting: the ratings that people give themselves or others depend on how transparent you make the feedback.
In our normal Ezra setup, we gather 180-degree feedback (self and line manager) at the beginning and end of coaching assignments. Typically, we have found that the line manager rating is substantially higher than the self-score, both at the outset and at the conclusion of a coaching engagement.
In our standard setup, the line manager is told that their ratings and comments will be visible to the coachee. It is hardly surprising, then, that the line manager might be reluctant to score low, presumably because they don’t want a difficult conversation later about any negative comments.
One of our team made the point that line managers may score higher simply because they don’t want to hurt the feelings of the coachee. But our experience is that if you make the line manager’s input anonymous (by positioning it in a “senior stakeholder” category), then they tend to be rather more critical.
So, we know that line managers can be tough raters when their responses are anonymous, but when their ratings are attributable to them, they are typically more generous.
Given what we know about human nature, none of that is particularly shocking. What did interest us was the difference in self-rating we could elicit using different psychological contexts.
To support our performance analysis of Ezra’s competency model within our coaching app, we asked 400 randomly selected leaders who worked in different industries around the world to perform a self-assessment. We told them that no one else would see those self-assessments.
We found that these leaders were significantly more flattering in assessing themselves than leaders who pass through the Ezra ecosystem, where the self-rating is typically going to be read by their coach (not mandated in our system, but common). Same questions, same trends, but different absolute scores.
So, the transparency factor is in full effect here too. If you respond to a survey knowing that no one will ever see the results or speak to you about them afterward, you allow yourself to be far more lenient. If, on the other hand, you know that someone will see your self-ratings and discuss them in your coaching sessions, your self-scores come down a peg or two.
There is clearly a social-desirability bias at work: people score themselves lower to avoid appearing arrogant or overconfident. Looking across all three types of data we have collected so far, the following trends emerge:
- Highest scoring – self (no consequences, data not seen by anyone else)
- Middle – line managers (not anonymous, answers attributable to them)
- Lowest – self (knowing their scores will be compared to line managers and discussed with a coach)
So, when rolling out a 360-feedback exercise, it is worth considering the degree of transparency your approach affords raters. The more transparency you give, the more generous raters tend to be about others, and the more critical they tend to be about themselves.
This has reminded us that when it comes to performance assessments, what and who you ask matter, but so does how you ask.