The Backend Decisions I've Regretted — and What I Do Differently Now
by Eric Hanson, Backend Developer at Clean Systems Consulting
Every experienced developer carries a graveyard of decisions that looked reasonable at the time and cost real money later. Here are mine, and the habits I built to stop repeating them.
Confident Mistakes Are Still Mistakes
The decisions I regret most weren't made carelessly. They were made with conviction. I knew the codebase. I understood the requirements. I made a call, shipped it, and moved on. Then, six months or a year later, the bill arrived.
That's the thing about backend mistakes — they often don't show up immediately. A performance issue stays hidden until traffic grows. A schema design decision becomes painful when the data model needs to change. A dependency choice becomes a problem when the library stops being maintained or the vendor changes its pricing.
These are the ones I think about most.
The Schema I Designed for Today
Early in a project, I designed a database schema around the current requirements. It was clean, normalized, and well-indexed for the queries we were running. The problem was that I didn't ask the obvious question: what are the next three shapes this data might take?
Six months later, a product requirement changed the fundamental relationship between two core entities. What would have been a medium-sized code change if the schema had anticipated flexibility was instead a multi-week migration project involving downtime risk, backfill scripts, and a lot of late nights.
What I do differently now: Before finalizing any schema, I explicitly ask: "What are the most likely ways this model needs to evolve?" I don't design for all of them, because that's over-engineering, but I design in a way that doesn't close them off. For instance, I avoid adding non-nullable columns without defaults to tables that are likely to change, and I keep foreign key constraints flexible enough to accommodate future relationships.
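As a small illustration of the additive-migration idea, here's a hedged sketch using SQLite. The table and column names are hypothetical; the point is that nullable columns and columns with defaults can be added later without downtime or backfill scripts:

```python
import sqlite3

# Hypothetical "orders" table designed so the most likely evolutions
# (new optional attributes) are additive rather than breaking.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        total_cents INTEGER NOT NULL,
        fulfilled_at TEXT  -- nullable from day one: cheap to populate later
    )
""")
conn.execute("INSERT INTO orders (customer_id, total_cents) VALUES (42, 1999)")

# Evolving the model is an additive migration, not a rewrite: a new
# column with a default needs no downtime and no backfill script.
conn.execute("ALTER TABLE orders ADD COLUMN discount_cents INTEGER DEFAULT 0")

row = conn.execute("SELECT customer_id, discount_cents FROM orders").fetchone()
print(row)  # (42, 0)
```

Contrast this with adding a non-nullable column without a default, which forces you to backfill every existing row before the migration can complete.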
The Service I Extracted Too Early
Microservices are seductive. They feel modern, they feel scalable, they feel like you're doing architecture properly. I extracted a service from a monolith before the bounded context was clear, before the team was large enough to justify the operational overhead, and before the traffic warranted the added complexity.
The result was two systems instead of one, a network boundary where there had been a function call, distributed tracing that we never quite got right, and deployment pipelines that needed to be kept in sync. The team spent more time managing the separation than it would have spent managing a well-organized monolith.
What I do differently now: I don't extract a service until I have a clear answer to "what problem does this boundary solve, and is that problem actually hurting us today?" Not theoretically, not eventually — today. The monolith stays a monolith until there's a specific, demonstrable reason to split it.
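One way to keep that option open without paying the distributed-systems tax is an in-process boundary. This is a hypothetical sketch, not a prescription: the names are invented, and the idea is simply that the boundary lives in the code while calls remain plain function calls:

```python
from typing import Protocol

# Hypothetical: a candidate service kept behind an in-process interface.
# The boundary exists, but there's no network hop, no separate deploy
# pipeline, and no distributed tracing to get right.
class InvoiceRenderer(Protocol):
    def render(self, order_id: int) -> str: ...

class LocalInvoiceRenderer:
    """Lives inside the monolith today."""
    def render(self, order_id: int) -> str:
        return f"invoice for order {order_id}"

def checkout(order_id: int, renderer: InvoiceRenderer) -> str:
    # If extraction ever becomes justified, only the injected
    # implementation changes to an HTTP/RPC client. Callers don't.
    return renderer.render(order_id)

print(checkout(7, LocalInvoiceRenderer()))  # invoice for order 7
```

If the demonstrable reason to split ever arrives, the seam is already there; until then, the function call stays a function call.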
The Logging I Didn't Write
This one is embarrassing because it's so straightforward. I shipped a service with minimal logging — basic request/response logs, top-level exceptions, nothing inside the core processing logic. It seemed like enough.
Then an incident hit. The service was producing incorrect outputs for a subset of requests. We had logs telling us the requests came in and responses went out. We had nothing about what happened between them. The debugging session lasted three days. A handful of well-placed structured log lines would have cut that to three hours.
What I do differently now: I log at the entry and exit of every significant processing step. Not just the happy path — I log the inputs and which branch was taken. I include correlation IDs that let me trace a single request across logs. I think about logging as part of the feature, not as something I'll add if there's time.
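Here's a minimal sketch of that habit using Python's standard logging module. The service and step names are hypothetical; the pattern is one structured JSON line per event, tagged with a correlation ID, recording which branch was taken:

```python
import json
import logging
import uuid

logger = logging.getLogger("payments")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_event(correlation_id: str, step: str, **fields) -> None:
    # One JSON object per line keeps logs machine-searchable.
    line = {"correlation_id": correlation_id, "step": step, **fields}
    logger.info(json.dumps(line))

def process_payment(amount_cents: int) -> str:
    cid = str(uuid.uuid4())  # trace this request across every log line
    log_event(cid, "process_payment.start", amount_cents=amount_cents)
    if amount_cents <= 0:
        # Log the branch taken and why, not just that something happened.
        log_event(cid, "process_payment.rejected", reason="non_positive_amount")
        return "rejected"
    log_event(cid, "process_payment.accepted")
    return "accepted"
```

During an incident, grepping the logs for one correlation ID reconstructs the full path a single request took through the system.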
The Dependency I Didn't Vet
I pulled in a library to solve a problem. It was popular, well-documented, and did exactly what I needed. I didn't look at the maintenance cadence, the open issue count, or whether the organization behind it had any longevity. Two years later, the library was unmaintained, had an open security vulnerability with no patch forthcoming, and had to be replaced under time pressure.
What I do differently now: Before adding any significant dependency, I ask: When was the last commit? How active is the issue tracker? Is there a community or just one maintainer? What would replacing this look like if I had to? A library you can't control is a risk surface you're accepting permanently.
The Assumption I Didn't Document
I made a design decision based on an assumption about how a partner system would behave. It was a reasonable assumption — I'd had a conversation with the team that owned it, they'd confirmed it. I didn't write any of that down.
Eighteen months later, the partner system changed its behavior. The team that changed it wasn't aware of our dependency on the old behavior. The engineer who'd had the original conversation had left. We spent two weeks figuring out what had changed and why, and another week fixing it.
What I do differently now: Assumptions that the system depends on get documented, explicitly, in a place where the next engineer will find them. "This service assumes X will always return Y" as a comment in the code or a note in the architecture doc. It takes five minutes. It can save weeks.
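When the assumption is about data a partner system sends, I go one step further and make it executable. This is a hypothetical sketch (the partner API and field names are invented): the comment documents the assumption where the next engineer will find it, and the check fails loudly the day the assumption breaks instead of eighteen months later:

```python
# ASSUMPTION: the partner "accounts" API always returns amounts as integer
# cents. Confirmed in conversation with the accounts team; if this changes,
# our ledger math breaks silently. Fail fast instead.
def parse_partner_amount(payload: dict) -> int:
    amount = payload["amount"]
    if not isinstance(amount, int):
        raise ValueError(
            "accounts API assumption violated: expected integer cents, "
            f"got {type(amount).__name__}"
        )
    return amount

print(parse_partner_amount({"amount": 1250}))  # 1250
```

The comment alone would have saved the two weeks of archaeology; the runtime check turns a slow, confusing incident into an immediate, self-explaining error.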
The decisions that cost the most aren't the ones you agonized over — they're the ones you made confidently and never wrote down.