How I Run Code Reviews That Actually Improve the Team

by Eric Hanson, Backend Developer at Clean Systems Consulting

Code review done badly is a bottleneck and a morale tax. Done well, it's the most efficient team-improvement tool you have. The difference is almost entirely in how you think about what it's for.

What Most Code Reviews Actually Are

In most teams, code review is effectively a gate. The reviewer's job is to catch things before they merge: bugs, security issues, style violations, logic errors. The author's job is to get it approved. It's adversarial in structure even when the participants aren't adversarial in temperament.

This model works, sort of. Bugs get caught. Standards get enforced. But it doesn't do much to actually improve the people doing the writing. The author learns whether their PR was approved, or what they need to change to get it approved. They don't necessarily learn why, or how they might have thought about the problem differently.

Code review as a learning tool requires a different frame entirely.

The Review Is a Conversation, Not a Verdict

I try to approach every PR with the assumption that the author made reasonable decisions given what they knew. My job as a reviewer is to understand their reasoning, share mine, and leave both of us with a clearer picture than we started with.

In practice, this means asking questions more often than issuing instructions:

  • "What's the behavior if this returns null?" instead of "handle the null case"
  • "I'm not sure I understand why this is here — can you explain the intent?" instead of "this doesn't make sense"
  • "Have we thought about what happens when X?" instead of "X is a problem"

The difference seems subtle. The effect isn't. When you ask questions, the author engages their own thinking. They defend a decision or they realize on reflection it doesn't hold up. Either way, they've done cognitive work, which means they learn something. When you issue directives, the author implements the change without necessarily understanding why.

A directive produces a code change. A question produces understanding.

Separating the Three Types of Comments

I've found it useful to make explicit which kind of comment I'm leaving:

Blocking issues — things that, in my judgment, genuinely need to be addressed before this ships. Bugs, security holes, correctness problems, things that will confuse maintainers and create future incidents.

Suggestions — things I'd do differently, but where the author's approach is defensible. I offer my reasoning and leave the decision to them. If they make the call and can explain it, that's fine.

Observations — things I noticed that don't need action on this PR but might be worth discussing: patterns I'm seeing across the codebase, a refactoring opportunity that's outside this PR's scope, a broader design question worth revisiting.
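In practice this can be as simple as a label at the front of each comment. The wording below is my own illustration of the three categories, not a prescribed format:

```text
[blocking] This retry loop has no backoff — under a partial outage it will
hammer the upstream service. Needs a cap or backoff before merge.

[suggestion] I'd extract this validation into its own function so it can be
tested directly, but inline is defensible here. Your call.

[observation] Third PR this month that hand-rolls date parsing — might be
worth a shared helper. Nothing to change in this PR.
```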

Making this distinction explicit reduces friction considerably. Authors don't have to guess which comments are blockers and which are preferences. It's also honest — it acknowledges that a lot of what reviewers call "issues" are actually preferences, and the author deserves to know the difference.

The Asymmetry of Spotting vs. Teaching

There's a version of code review that's pure pattern-matching: you scan for the things you know are bad and flag them. This makes you a fast reviewer and a useful safety net, but it doesn't transfer the pattern-matching to the author.

When I flag something, I try to explain the category of problem, not just the instance. Not just "this query will be slow at scale" but "queries that return full rows without pagination tend to become performance problems once the table reaches a certain size — here's how I'd think about when to add limits." The goal is that the next time they write a query, they ask themselves the right question without needing me to ask it for them.
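To make that query example concrete, here is a minimal sketch of the pattern I'd point the author toward, using Python's built-in sqlite3 and an invented `events` table. The table name, schema, and page size are assumptions for illustration; the category is what matters — bound any query whose result set grows with the data.

```python
import sqlite3

# In-memory database with an illustrative "events" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO events (payload) VALUES (?)",
    [(f"event-{i}",) for i in range(250)],
)

# Unbounded: returns every row. Fine at 250 rows, a problem at 250 million.
all_rows = conn.execute("SELECT id, payload FROM events").fetchall()

# Bounded: keyset pagination. Each call fetches one page after a cursor id,
# so the cost per call stays roughly constant as the table grows.
def fetch_page(conn, after_id=0, page_size=100):
    return conn.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, page_size),
    ).fetchall()

first_page = fetch_page(conn)
second_page = fetch_page(conn, after_id=first_page[-1][0])
```

The question I want the author asking next time is not "is this query correct?" but "what bounds this query's cost when the table is a thousand times bigger?"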

This takes more time per comment. It pays back in fewer comments needed over the course of months.

What to Do When You Disagree With the Outcome

Sometimes you flag something, the author pushes back, and you're not sure they're right but you can't articulate why they're wrong. This is common. Technical judgment is genuinely uncertain in a lot of cases.

My default in this situation: if the author's position is coherent and defensible, approve it and note my concern. "I'm still not fully convinced about X, but I can't point to a specific problem — let's merge and revisit if it causes issues." This keeps things moving, respects the author's judgment, and creates an explicit record of the concern.

Holding up a PR because you have a vague feeling something is wrong is a cost you're imposing on the team. That cost needs to be justified by something more specific than discomfort.

Reviews as a Signal

If the same issues keep appearing across multiple authors' PRs, that's a signal. The team hasn't internalized the standard — which means either the standard needs to be communicated differently, or it needs to be codified (in a linter, a style guide, a project template, a checklist).

Code review feedback that has to be given repeatedly isn't a code review problem — it's a documentation or automation problem. The right response is to move that knowledge somewhere structural, not to keep catching it manually.
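As a sketch of what codifying can look like — assuming a Python codebase and the ruff linter, which are my choices for illustration — a review comment I'd otherwise repeat, like "don't use mutable default arguments," becomes a rule CI enforces:

```toml
# pyproject.toml — hedged example; tool and rule codes assume ruff on a Python codebase.
[tool.ruff.lint]
select = [
    "E",    # pycodestyle errors — style feedback nobody should give by hand
    "B006", # flake8-bugbear: mutable default arguments
    "B008", # flake8-bugbear: function calls in default arguments
]
```

Once a rule like this lands, the feedback arrives before review even starts, and reviewer attention is freed for the judgment calls automation can't make.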

I keep rough notes on patterns I'm seeing in reviews, and every few months I look for things that could be automated or written up. The goal is to make code review progressively lighter over time, not to make myself permanently necessary as a human filter.


The best code review leaves both the reviewer and the author thinking more clearly than when they started.
