Code Review Is Not a Gate. It Is a Conversation.

by Eric Hanson, Backend Developer at Clean Systems Consulting

The Gatekeeping Failure Mode

You've seen this team: PRs take four days to get reviewed. Reviewers leave terse, critical comments. The author defends every decision. Changes go back and forth three times. By the time the PR merges, both parties are exhausted and slightly resentful. The code that merged is marginally better than it would have been, but not proportionate to the time spent.

This is code review as gatekeeping. The reviewer's role is implicitly adversarial — they're guarding the codebase against bad changes. The author's role is to convince the gatekeeper to let them through. It's a negotiation, not a conversation.

The failure is structural. When review is the only mechanism for catching problems, it has to be rigorous to the point of paranoia. When review is one part of a system that includes tests, type checks, linters, and architectural guardrails, it can be collaborative — focused on things that only humans can evaluate.

What Code Review Is Actually Good At

Code review has genuine strengths. It catches logic errors that automated tools miss. It transfers domain knowledge — the reviewer learns about the payment system, the author learns about the reviewer's concerns. It produces better code through design dialogue: "have you considered the case where the session expires mid-request?" is a question that improves the code, not just a gate condition.

Code review is not good at catching style inconsistencies (that's a linter), enforcing security rules (that's static analysis — tools like Semgrep or SonarQube), or verifying test coverage (that's a coverage tool). When review is the primary mechanism for all of these, reviewers become exhausted and authors become frustrated at comments that feel like they could have been automated.

The right setup:

Automated:
  - Style/formatting (Prettier, Black, gofmt)
  - Security patterns (Semgrep, Snyk)
  - Test coverage thresholds (Jacoco, pytest-cov)
  - Type errors (TypeScript, mypy)
  - Obvious bugs (ESLint rules, SpotBugs)

Human review:
  - Logical correctness of complex behavior
  - Architecture and design decisions
  - Edge cases in business logic
  - Readability and intent clarity
  - Knowledge transfer in both directions

When automated tools handle the mechanical concerns, human reviewers can focus on what only humans can evaluate.
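The gating of mechanical concerns can itself be scripted. A minimal sketch of a pre-review gate, assuming a Python stack — the specific tools (black, mypy, ruff) are placeholders for whatever your project actually runs:

```python
# Sketch of a pre-review gate: run the mechanical checks and report
# which ones failed before a human reviewer is asked to look at the PR.
# The tool commands below are illustrative; substitute your own stack.
import subprocess

CHECKS = [
    ("format", ["black", "--check", "."]),
    ("types", ["mypy", "."]),
    ("lint", ["ruff", "check", "."]),
]

def run_checks(checks):
    """Run each (name, command) pair; return the names of the failures."""
    failed = []
    for name, cmd in checks:
        try:
            result = subprocess.run(cmd, capture_output=True)
        except FileNotFoundError:
            # A missing tool counts as a failed check, not a pass.
            failed.append(name)
            continue
        if result.returncode != 0:
            failed.append(name)
    return failed
```

A CI job can call run_checks(CHECKS) and block the merge on a non-empty result, so that by the time a human looks at the PR, every comment they leave is about something from the human-review list above.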

The Language of Collaborative Review

The tone and structure of review comments determine whether review feels like a conversation or an interrogation. Some concrete practices:

Distinguish blocking from non-blocking comments. Not everything needs to be fixed before merge. Differentiate:

nit: This variable name could be more descriptive (not blocking)

issue: This will fail when session_id is None — None check needed (blocking)

question: Why did you choose a HashMap here instead of a sorted structure?
          Just want to understand the tradeoff. (discussion, may or may not block)

GitHub Suggestions let you propose specific changes inline rather than describing what you'd like changed:

return session_id if session_id is not None else generate_anonymous_id()

A suggestion can be accepted with one click. It's faster for the author and less ambiguous than a prose description of the desired change.
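In a GitHub review comment, a suggestion is written as a fenced block tagged `suggestion`; its contents replace the commented-on lines when the author accepts it. For the None check above, the comment body would look like:

````markdown
```suggestion
return session_id if session_id is not None else generate_anonymous_id()
```
````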

Ask questions instead of making demands. "Why is this condition inverted?" invites explanation. It might reveal that you misunderstood the code. "Invert this condition" assumes you're right and the author is wrong.

Explain the why behind a concern. "This will cause a race condition when two requests increment the counter concurrently without locking" is actionable and educational. "This is wrong" is neither.
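That race-condition comment can be made concrete. A sketch with invented names: incrementing a shared counter is a read-modify-write sequence, so two concurrent increments can lose an update unless the critical section is locked.

```python
# Demonstrates the concern from the review comment above: "x += 1" on a
# shared attribute is read-modify-write, not atomic, so two threads can
# read the same old value and one increment is lost. A lock serializes
# the critical section. All names here are illustrative.
import threading

class Counter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment_unsafe(self):
        self.value += 1  # read, add, write back: interleavable

    def increment(self):
        with self._lock:  # only one thread at a time in here
            self.value += 1

def hammer(method, iterations=100_000, workers=2):
    """Call `method` from several threads concurrently."""
    def worker():
        for _ in range(iterations):
            method()
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

safe = Counter()
hammer(safe.increment)
# With the lock held around each increment, the final count is exactly
# workers * iterations; without it, updates can be silently lost.
```

A comment phrased this way hands the author both the diagnosis and a way to verify the fix, which is what makes it educational rather than merely corrective.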

The Author's Responsibility in Code Review

Review is not done to you — you're a participant. The author has responsibilities too.

Respond to every comment, even if the response is "acknowledged, I'll address this in a follow-up ticket" or "I see your point, but I disagree for this reason." Silence is the fastest way to make a reviewer feel ignored, and ignored reviewers produce worse reviews in the future.

Explain decisions preemptively. If you made a non-obvious choice, leave an inline comment explaining it before requesting review. "I used a sorted array here instead of a hash set because we need deterministic iteration order for the retry sequence" pre-answers a question the reviewer would have asked.

Don't take blocking comments personally. A reviewer who catches a race condition or an incorrect edge case is doing you a favor. The code has a problem regardless of whether they noticed it. Thank them and fix it.

Async vs. Synchronous Review

Most teams default to asynchronous code review — open PR, wait for comments, respond asynchronously. This is fine for small changes. For large, architecturally significant changes, synchronous review (a live code review session, pair review, or a recorded walkthrough) often produces better outcomes faster.

A thirty-minute synchronous review of a complex PR frequently resolves questions that would take three asynchronous rounds to address, and produces a richer discussion. The downside is scheduling overhead and the requirement for both parties to be present simultaneously.

The heuristic: if you've been through more than two asynchronous review rounds without convergence, schedule a synchronous session. The back-and-forth is a sign that text is the wrong medium for the conversation.

What to Do When Review Is a Bottleneck

If PRs consistently wait more than two days for first review, the fix is not "require faster reviews." That produces shallower reviews, not faster ones.

The fixes:

  • Reduce PR size so each review requires less uninterrupted time
  • Distribute review load — if one senior developer is the required reviewer on everything, they become the bottleneck
  • Automate the mechanical concerns so human review is focused and faster
  • Set an explicit team norm about review turnaround (twenty-four-hour first response is a common target)

Code review is one of the highest-leverage activities in software development when done well. It's one of the highest sources of friction when done poorly. The difference is treating it as a collaborative design conversation rather than a quality gate to be passed.
