The Monolith Is Not Your Enemy. Bad Architecture Is.
by Eric Hanson, Backend Developer at Clean Systems Consulting
The villain that wasn't
Your engineers are blaming the monolith for slow deploys, tangled code, and cross-team friction. They're right that these problems exist. They're wrong about the cause. The deploy is slow because nobody enforced a clean build pipeline, not because everything ships as a single artifact. The code is tangled because bounded contexts were never defined, not because everything runs in one process. The cross-team friction exists because ownership was never assigned, not because the repository is shared.
Switching to microservices will not fix any of those things. It will add distributed systems complexity on top of them.
This is not a defense of every monolith ever written. Big balls of mud — no module structure, global mutable state, business logic in database triggers — are genuinely hard to work with. But the problem there is the architecture inside the monolith, not the deployment model. You can write a big ball of mud as microservices too. Plenty of teams have.
What a well-structured monolith actually looks like
A modular monolith has enforced boundaries between business domains. The word "enforced" is doing work here. Comments and conventions don't count. You need tooling that fails the build when the boundaries are violated.
In Java, ArchUnit makes this straightforward:
import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

@AnalyzeClasses(packages = "com.example")
public class ArchitectureTest {

    // Orders may only talk to the payments module through its public API.
    @ArchTest
    static final ArchRule orders_must_not_access_payments_internals =
        noClasses()
            .that().resideInAPackage("..orders..")
            .should().accessClassesThat()
            .resideInAPackage("..payments.internal..");

    // Domain code stays free of infrastructure concerns (persistence, HTTP, messaging).
    @ArchTest
    static final ArchRule domain_must_not_depend_on_infrastructure =
        noClasses()
            .that().resideInAPackage("..domain..")
            .should().dependOnClassesThat()
            .resideInAPackage("..infrastructure..");
}
These tests run in CI. A developer in the orders module who accidentally imports from payments internals gets a build failure before the PR merges. The discipline is real without the operational overhead of a network call.
In Go, unexported identifiers enforce this at the language level. In Python, you can use something like import-linter with contract rules. The specific tool matters less than whether it runs automatically and breaks builds on violations.
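Java's own visibility rules can carry some of the same weight when a module keeps its internals in a single package behind a small public facade. A sketch, with hypothetical package and class names:

// com/example/payments/PaymentService.java -- the module's public surface
package com.example.payments;

public class PaymentService {
    public Receipt charge(String orderId, long amountCents) {
        // Delegates to a package-private collaborator; code outside
        // com.example.payments cannot reference LedgerWriter at all.
        return new LedgerWriter().record(orderId, amountCents);
    }
}

// com/example/payments/Receipt.java -- also public, part of the contract
package com.example.payments;

public record Receipt(String orderId, long amountCents) {}

// com/example/payments/LedgerWriter.java -- package-private internal detail
package com.example.payments;

class LedgerWriter {
    Receipt record(String orderId, long amountCents) {
        // persistence details live here
        return new Receipt(orderId, amountCents);
    }
}
The compiler enforces that boundary for free, but only within a single package. Once a module spans several packages, you are back to needing a tool like ArchUnit or import-linter to keep the boundary honest.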
The performance argument that cuts both ways
People cite microservices for "independent scaling" — but in most systems under 100 million requests per day, scaling a monolith is simpler and cheaper. You run more instances. Kubernetes handles the load distribution. There's no inter-service network overhead, no serialization cost on every request, no additional latency introduced by service hops.
The monolith at high load eventually becomes a problem when different components have dramatically different resource profiles — CPU-intensive ML inference running next to I/O-bound API handling, for example. At that point, the scaling argument for separation is legitimate. Before that point, horizontal scaling of a single stateless process is entirely adequate.
The data point most architecture discussions skip: a properly tuned single-process API on a 32-core host can handle on the order of 50,000–100,000 requests per second (for I/O-bound workloads, depending heavily on DB latency). Most applications never come close to needing more. If yours does, you'll know — you'll have traffic data proving it.
Deployment independence is achievable in a monolith too
The argument that microservices enable team deployment independence is real — but the same outcome is achievable at smaller scale through good CI/CD discipline in a monolith. Feature flags (LaunchDarkly, Unleash, or a simple DB-backed flag service) let different teams ship code independently without shipping behavior independently. Teams merge to main on their own schedule. Flags control activation.
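As a sketch of what the simple DB-backed option might look like (the store interface, class names, and flag names here are hypothetical, not any particular library's API):

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical flag storage; in production this would read one small DB table.
interface FeatureFlagStore {
    Optional<Boolean> find(String flagName);

    void set(String flagName, boolean enabled);
}

// In-memory stand-in so the sketch is self-contained.
class InMemoryFlagStore implements FeatureFlagStore {
    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    @Override
    public Optional<Boolean> find(String flagName) {
        return Optional.ofNullable(flags.get(flagName));
    }

    @Override
    public void set(String flagName, boolean enabled) {
        flags.put(flagName, enabled);
    }
}

class FeatureFlags {
    private final FeatureFlagStore store;

    FeatureFlags(FeatureFlagStore store) {
        this.store = store;
    }

    // Unknown flags fall back to the default, so merged-but-inactive code stays dark.
    boolean isEnabled(String flagName, boolean defaultValue) {
        return store.find(flagName).orElse(defaultValue);
    }
}

class CheckoutPricing {
    private final FeatureFlags flags;

    CheckoutPricing(FeatureFlags flags) {
        this.flags = flags;
    }

    String pricingPath() {
        // The new engine is merged and deployed, but only runs once the flag flips.
        if (flags.isEnabled("checkout.new-pricing-engine", false)) {
            return "new pricing engine";
        }
        return "legacy pricing engine";
    }
}
Activation then becomes a data change rather than a deploy: two teams can merge to main on different days and flip their flags whenever their features are actually ready.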
Canary deployments and blue-green deploys work on monoliths. If your monolith takes forty-five minutes to deploy, that's a build pipeline problem, not a monolith problem. Incremental compilation, proper caching in Docker layers, parallel test execution — these reduce monolith deploy times to under ten minutes in most codebases.
When the monolith genuinely becomes a problem
There are real scenarios where a monolith creates problems that a service split genuinely solves:
- A single component has a security or compliance requirement mandating data isolation (PCI-DSS cardholder data is the canonical example)
- A component needs a different runtime or language for legitimate technical reasons (ML inference in Python alongside a Java API)
- You have grown to fifteen or more teams and release coordination through a single pipeline is a measurable bottleneck
None of those conditions apply to most teams making the microservices argument. The monolith is not the reason you're moving slowly. Address the actual reasons: unclear ownership, missing CI/CD investment, absent module boundaries. Then re-evaluate whether a split still makes sense. Often it won't.