TDD Sounds Backwards Until You Try It on a Real Feature
by Eric Hanson, Backend Developer at Clean Systems Consulting
Why It Seems Backwards at First
The instinct when you get a new feature is to design the solution, then build it, then confirm it works. Writing a test first interrupts this flow at the worst possible moment — before you know what you are building. How can you test something that does not exist?
This confusion is the correct starting point. In TDD, the test is not a verification tool; it is a specification tool. Writing the test first forces you to specify the behavior precisely — what inputs, what outputs, what errors, what preconditions — before you have made any implementation decisions. That precision often surfaces assumptions you would otherwise have discovered the hard way, mid-implementation.
A Real Feature, Not a Toy Example
Consider implementing rate limiting for an API endpoint. The requirement: users are limited to 100 requests per 15-minute window. Excess requests receive a 429 response.
Before writing any code, the design questions are: where does the state live? How is the window calculated? Is it sliding or fixed? What happens at window boundaries? What happens if the state store is unavailable?
These questions can be answered in a design doc, but they can also be answered by writing the first test:
def test_request_is_allowed_when_under_limit():
    limiter = RateLimiter(limit=100, window_seconds=900)
    user_id = "user_123"
    result = limiter.check_and_record(user_id)
    assert result.allowed is True
    assert result.remaining == 99
    assert result.reset_at is not None
Writing this test forces several decisions immediately: the RateLimiter class needs to exist, it needs limit and window_seconds parameters, check_and_record needs a user identifier, and the return type needs allowed, remaining, and reset_at fields. All of this is API design, done through the test before any implementation exists.
def test_request_is_rejected_when_limit_is_exceeded():
    limiter = RateLimiter(limit=3, window_seconds=900)
    user_id = "user_456"
    limiter.check_and_record(user_id)
    limiter.check_and_record(user_id)
    limiter.check_and_record(user_id)
    result = limiter.check_and_record(user_id)  # 4th request
    assert result.allowed is False
    assert result.remaining == 0
def test_window_resets_after_expiry():
    clock = FakeClock(start=datetime(2024, 1, 1, 12, 0, 0))
    limiter = RateLimiter(limit=3, window_seconds=900, clock=clock)
    user_id = "user_789"
    for _ in range(3):
        limiter.check_and_record(user_id)
    # Advance clock past the window
    clock.advance(seconds=901)
    result = limiter.check_and_record(user_id)
    assert result.allowed is True
    assert result.remaining == 2
The third test reveals that the RateLimiter needs an injectable clock — something you might not have thought to include until you hit a flaky test in CI three weeks later. TDD surfaced the design requirement before it became a maintenance problem.
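The FakeClock itself is never shown in the tests. A minimal sketch of what it might look like, with the method names (`now`, `advance`) inferred from the test above rather than specified anywhere:

```python
from datetime import datetime, timedelta


class FakeClock:
    """Controllable clock for tests: now() returns a fixed time that
    only moves when the test calls advance()."""

    def __init__(self, start: datetime):
        self._now = start

    def now(self) -> datetime:
        return self._now

    def advance(self, seconds: int) -> None:
        self._now += timedelta(seconds=seconds)
```

The production code would use a real clock with the same `now()` method, so the RateLimiter never needs to know which one it was given.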
What the Red State Tells You
When you run a test that calls code that does not exist, it fails immediately. That failing test — the red state — is more informative than it looks. It tells you exactly what interface you need to implement. There is no ambiguity about what to build first: you are building the thing that makes this specific test pass.
This constraint is valuable. Without it, "implementing rate limiting" might start with standing up a Redis instance, designing a data schema, sketching out the full class hierarchy, and getting three hours in before you have written a line of testable logic. TDD keeps you focused on the smallest increment that produces working, tested behavior.
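For this feature, the smallest increment is an in-memory limiter — no Redis, no schema. The sketch below is one way to turn the three tests green; the fixed-window policy, the per-user dict, and the SystemClock fallback are all assumptions made here, not requirements the tests impose:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class RateLimitResult:
    allowed: bool
    remaining: int
    reset_at: datetime


class SystemClock:
    """Default clock; tests inject a fake with the same now() method."""

    def now(self) -> datetime:
        return datetime.now()


class RateLimiter:
    """Fixed-window limiter with per-user counters kept in memory."""

    def __init__(self, limit: int, window_seconds: int, clock=None):
        self.limit = limit
        self.window = timedelta(seconds=window_seconds)
        self.clock = clock or SystemClock()
        self._windows = {}  # user_id -> (window_start, request_count)

    def check_and_record(self, user_id: str) -> RateLimitResult:
        now = self.clock.now()
        start, count = self._windows.get(user_id, (now, 0))
        if now - start >= self.window:
            start, count = now, 0  # window expired: start a fresh one
        reset_at = start + self.window
        if count >= self.limit:
            return RateLimitResult(allowed=False, remaining=0, reset_at=reset_at)
        self._windows[user_id] = (start, count + 1)
        return RateLimitResult(
            allowed=True, remaining=self.limit - (count + 1), reset_at=reset_at
        )
```

A sliding-window or Redis-backed version would replace the dict without changing the interface the tests pinned down.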
The Genuine Difficulty
TDD is harder to apply to code that interacts with infrastructure — HTTP handlers, database repositories, UI components. Getting feedback requires real or simulated infrastructure, which slows the cycle. This is why TDD is most naturally applied to domain logic and is harder (though not impossible) to apply to infrastructure code.
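One common mitigation, not prescribed by this article, is to push the infrastructure behind a small interface so the domain logic stays in the fast loop. The Protocol name and methods below are illustrative assumptions, sketching how a Redis-backed store and an in-memory test fake could satisfy the same contract:

```python
from typing import Protocol


class CounterStore(Protocol):
    """Port for rate-limit state; a Redis adapter and an in-memory
    fake can both satisfy it."""

    def increment(self, key: str) -> int: ...
    def expire_at(self, key: str, when: float) -> None: ...


class InMemoryCounterStore:
    """Test fake; a production adapter would wrap Redis INCR/EXPIREAT."""

    def __init__(self):
        self._counts: dict[str, int] = {}

    def increment(self, key: str) -> int:
        self._counts[key] = self._counts.get(key, 0) + 1
        return self._counts[key]

    def expire_at(self, key: str, when: float) -> None:
        pass  # the fake ignores expiry; tests control time explicitly
```

The domain logic is then test-driven against the fake, and only a thin adapter needs slower integration tests.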
TDD is also harder when the design is genuinely uncertain — when you do not know what the right interface is. In those cases, a short spike (exploratory implementation without tests) followed by TDD on the real implementation can be more productive than forcing TDD before you understand the problem.
Neither of these is an argument against TDD. They are scope limitations worth knowing so you can apply the practice where it has the most leverage.
The first time you use TDD on a real feature with genuine complexity — not a tutorial example — is when the argument for it stops being theoretical. Try it on the next piece of business logic with more than three conditional branches.