Reviewing Code You Don't Fully Understand Is More Common Than You Think
by Eric Hanson, Backend Developer at Clean Systems Consulting
The Unspoken Review Problem
Here is what actually happens behind many code review approvals: the reviewer opens the PR, understands the high-level intent from the description, reads about forty percent of the diff, gets lost in domain-specific logic or an unfamiliar library, and approves with a comment like "LGTM, looks good overall." The PR merges. Three weeks later there's a production bug in the code they approved.
Nobody talks about this because it's professionally uncomfortable. The implicit norm in code review is that approval means you've validated the code. In practice, "I've read it carefully and it's correct" and "I've skimmed it and I don't see obvious problems" produce the same green checkmark.
This isn't a matter of individual incompetence. It's a predictable outcome of assigning reviews to people who don't have the domain context to evaluate what they're reviewing.
Why It Happens
Rotating review assignments spread load but destroy context. A developer who works in the payments module every day can review a payments PR effectively. The same developer reviewing a change to the machine learning inference service is context-free — they can check formatting and obvious mistakes but can't evaluate the ML-specific logic.
Seniority pressure suppresses questions. A junior developer reviewing a senior's code feels the implicit expectation that they're the one who should learn, not the one asking "what does this algorithm actually do?" Questions can feel like an admission of inadequacy.
Review-as-compliance creates incentives to approve. When the team metric is "PRs merged per week" or when blocked PRs reflect badly on the reviewer's responsiveness, the path of least resistance is to approve and move on.
Insufficient PR context makes understanding the code harder than it should be. A PR with no description, no test examples, and no explanation of the approach requires significant effort to understand even for someone with domain expertise. Without context, the effort exceeds what reviewers are willing to spend.
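A lightweight description template lowers that cost. The headings below are a sketch, not a standard; the section names and their order are just one convention to adapt to your team:
## What changed
One or two sentences at the level a reviewer should hold in their head.
## Why
The problem this solves, with a link to the ticket or incident.
## How to verify
The command or test that demonstrates the new behavior.
## Review notes
Which parts need domain expertise, and which parts any reviewer can cover.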
What to Do When You're Lost
The first thing: ask. This sounds obvious, but in practice it requires overcoming a professional norm that says reviewers should have answers, not questions.
The right framing: questions in code review are not a demonstration of ignorance — they're evidence of careful reading. "I don't understand what this state machine transition is doing when the event queue is empty" is a better review than a silent approval.
Format the question so it's easy to answer:
I'm not familiar with the Raft consensus algorithm used here.
Can you add a comment explaining what the "commit index" variable
represents and what happens when it diverges from "last applied"?
Specifically, I'm trying to understand whether the case on line 78
(commitIndex > lastApplied) could happen during normal operation or
only after a failure.
That question is specific enough that the author can answer it without writing a textbook. It also demonstrates that you read the code carefully — you got lost at a specific point, not globally.
Partial Review Is Still Useful Review
Being unable to review a PR end to end doesn't mean your review has no value. A partial review, explicitly scoped, is still useful.
Reviewing what you do understand. If the PR modifies both the API layer (which you know) and the ML layer (which you don't), review the API layer thoroughly and explicitly scope your review:
I've reviewed the API endpoint changes thoroughly — the routing,
input validation, and error handling look correct. I'm not in a
position to evaluate the model inference logic in the prediction
module — you'll want someone with ML context to review that section.
Reviewing for readability and structure. Even without domain expertise, you can evaluate whether the code is readable, whether the naming is clear, whether the tests cover obvious cases, and whether the approach is documented in the code itself.
Reviewing the test coverage. Tests are often the most accessible part of a PR — they express the expected behavior in concrete terms, usually without requiring deep knowledge of the implementation.
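For instance, consider a hypothetical pytest sketch for the payments example used earlier (the module and function names are invented for illustration):
# tests/payments/test_refunds.py (hypothetical example)
import pytest

from payments.refunds import RefundError, issue_refund  # invented module


def test_full_refund_returns_original_amount():
    charge = {"id": "ch_1", "amount_cents": 5000}
    refund = issue_refund(charge, amount_cents=5000)
    assert refund["amount_cents"] == 5000


def test_refund_cannot_exceed_original_charge():
    # Concrete business rule: an oversized refund is rejected, not capped.
    charge = {"id": "ch_1", "amount_cents": 5000}
    with pytest.raises(RefundError):
        issue_refund(charge, amount_cents=6000)
A reviewer who has never seen the refund implementation can still ask whether rejecting, rather than capping, an oversized refund is the intended rule. That question is exactly the review the implementation needs.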
Fixing the Process Problem
The individual behavior of asking questions is necessary but not sufficient. The systemic fix is routing PRs to reviewers who have context.
CODEOWNERS is the Git-native mechanism for this. Define which teams or individuals own which paths, and PRs that touch those paths automatically request review from the owners:
# .github/CODEOWNERS
/src/payments/ @payments-team
/src/ml/ @ml-team @data-platform-team
/src/auth/ @security-team @backend-leads
/infra/ @platform-team
This doesn't mean only the owners can review — it means they're automatically requested. Anyone else can review too, and their comments are valuable, but once branch protection is set to require code owner review, the approval that actually gates the merge comes from someone with context.
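That enforcement lives in branch protection rather than in CODEOWNERS itself. One way to turn it on, sketched with the gh CLI against GitHub's branch-protection endpoint (acme/shop and main are placeholders; adjust the other fields to your setup):
# one-time setup; requires admin rights on the repository
gh api --method PUT repos/acme/shop/branches/main/protection \
  --input - <<'EOF'
{
  "required_status_checks": null,
  "enforce_admins": false,
  "required_pull_request_reviews": {
    "require_code_owner_reviews": true,
    "required_approving_review_count": 1
  },
  "restrictions": null
}
EOF
The same toggle appears in the repository settings UI as "Require review from Code Owners."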
Explicit review scope in PR descriptions helps when cross-team review is unavoidable:
## Review Notes
The payment processor changes (src/payments/) need review from the payments team.
The logging changes (src/logging/) are straightforward infrastructure — any
backend developer can review those.
The Norm Worth Making Explicit
Teams where developers openly admit "I don't fully understand this section" in reviews have better outcomes than teams where everyone performs confidence. The key shift is treating "I got lost here" as useful signal rather than professional failure.
If you're the author and a reviewer says they got lost, that's information: either the code is genuinely complex and needs better documentation, or the PR needs to be smaller. Both are actionable. The reviewer's confusion is doing its job.
Require approval from people with context. Make it safe to ask questions. Keep PRs small enough that one reviewer can reasonably hold the whole thing in their head. These three things together dramatically reduce the "LGTM" problem.