Performance Reviews That Are Not a Waste of Everyone's Time

The performance review process at most organizations is a three-way failure. Managers spend hours writing assessments they find meaningless. Employees dread conversations that feel like verdicts rather than development. HR administers a process that generates documentation but rarely produces genuine performance improvement. Everyone involved thinks the current system is broken. Almost nobody changes it.

The problems are structural and mostly fixable. But fixing them requires separating two things that most review processes conflate: the compensation decision and the development conversation. When those two purposes are in the same meeting, the development conversation dies.

The Compensation-Development Conflation Problem

Think about what happens in a typical annual review meeting where both compensation and development are discussed. The direct report has one main cognitive focus: what is my raise going to be? Everything the manager says before the number is announced is processed through that lens. "You've been doing well on client relationships but could improve your written communication" registers as either "preamble before the good news" or "justification for the lower number."

Once the compensation number is delivered, the conversation is effectively over regardless of how long it continues. If the number is good, the employee is happy and wants the meeting to end. If the number is bad, the employee is defensive and wants to understand why. In neither case is the employee absorbing developmental feedback or thinking about growth goals.

The solution is calendar separation. Deliver compensation decisions in a distinct, brief conversation. Then, 48 to 72 hours later, hold the development conversation with no compensation discussion. Both conversations become more effective. The development conversation is no longer contaminated by compensation anxiety. The compensation conversation is no longer padded with developmental context designed to soften the number.

The Recency Bias Problem

Most managers writing annual performance reviews draw primarily from the last two to three months of the review period. Work from January through September is hazy. A strong Q4 can overshadow a difficult Q2. A stumble in November can depress the rating for the whole year. This is not deliberate unfairness - it's how human memory works. Recency bias in performance evaluations has been documented across every organizational context that has studied it.

The structural fix is a running performance document. Not a formal document - a simple running note per person, updated monthly, capturing significant observations: delivered the client proposal under pressure; struggled with scope management on the Q2 project; notably improved cross-functional collaboration with the design team; received specifically positive peer feedback on presentation clarity at the September product review.

With 12 months of notes, the annual review writes itself from evidence rather than memory. The evaluation is more accurate, the feedback is more specific, and the direct report is more likely to feel that the assessment reflects their actual performance rather than the manager's recollection of highlights.

The Goal-Setting Problem

Annual goals set in January are usually irrelevant by September. Priorities shift, strategies change, and the business context the goals were written in no longer exists. A direct report whose goals were set in January and who is evaluated against them in December is being measured against a specification the business itself has already abandoned.

The fix is quarterly goal calibration. Formal annual goals are fine as a directional framework, but the working targets should be recalibrated every 90 days. The quarterly check-in is not a review - it's an alignment conversation. "Given what's changed since we set these goals, what should you be prioritizing in Q3? What were you accountable for in Q2 and how did it go?" This 30-minute quarterly conversation prevents the January-December gap from becoming the December-discovery problem.

360 Feedback - What Works and What Doesn't

360-degree feedback processes - where peers, direct reports, and cross-functional stakeholders provide input on a manager's performance - can be genuinely useful or a bureaucratic ritual depending entirely on how they're structured.

The problems with most 360 processes: feedback is collected anonymously in bulk, which reduces specificity and makes the output hard to act on. The aggregated feedback report arrives weeks after the period being evaluated. Managers receive the report without facilitation support for interpreting it. The result is a 12-page document that sits unread in an HR portal.

The version that works is smaller-scale and more targeted. Three to five specific stakeholders are asked for behavioral feedback on two or three specific questions relevant to the manager's current development focus. The feedback is collected within two weeks of the period being evaluated. A coach or HR partner helps the manager interpret the patterns and make concrete development commitments. The feedback cycle is quarterly, not annual.

This approach generates less data and more action. The goal is not comprehensive assessment - it's useful input for a specific development focus.

What to Write in the Performance Assessment

Performance assessments that are valuable share specific structural characteristics. They describe observable behaviors, not character assessments. "Consistently brings written summaries to complex conversations, which has shortened decision cycles with cross-functional partners" is useful. "Has strong communication skills" is not.

They include both contribution to results and approach to the work. Results matter, but a person who hit every target while burning relationships and leaving a trail of organizational damage behind them should not receive the same assessment as someone who hit every target while building cross-functional trust and developing their direct reports.

They contain at least one specific forward-looking development commitment with a timeline. "Over Q1 2025, focus on shortening the feedback loop with the design team - specific target is reducing the average revision cycle on creative briefs from 3 rounds to 1.5." That commitment is trackable. It's also meaningful to the direct report because it names a specific professional development area rather than a vague aspiration.

Calibration Meetings and the Bias Problem

When managers discuss their direct reports' ratings together in calibration sessions, the meeting can either improve rating accuracy or entrench biases depending on how it's facilitated. Uncalibrated manager assessments of similar work can vary by a full rating tier based on how the manager describes the contribution, which is heavily influenced by the manager's implicit biases and advocacy ability.

The most effective calibration meetings require evidence, not advocacy. A manager who says "she's a strong performer" without behavioral specifics should be asked for examples. The calibration's job is to ensure that the standards being applied are consistent across managers - which requires that the managers are reasoning from the same type of evidence. HR professionals who facilitate calibration as behavioral evidence review rather than manager debate significantly improve the equity of outcomes.

For the regular feedback infrastructure that makes annual reviews easier and more meaningful, read our piece on feedback that changes behavior. Reviews that summarize 12 months of consistent feedback are fundamentally different from reviews that deliver new information for the first time.

Is your review process producing development or just documentation? Our cohort programs include a session on performance conversations - both the annual review and the ongoing feedback that makes it meaningful. See the cohort curriculum →