The Monolith Is Not Your Enemy. Bad Architecture Is.

by Eric Hanson, Backend Developer at Clean Systems Consulting

The villain that wasn't

Your engineers are blaming the monolith for slow deploys, tangled code, and cross-team friction. They're right that these problems exist. They're wrong about the cause. The deploy is slow because nobody enforced a clean build pipeline, not because you ship a single artifact. The code is tangled because bounded contexts were never defined, not because everything runs in one process. The cross-team friction exists because ownership was never assigned, not because the repository is shared.

Switching to microservices will not fix any of those things. It will add distributed systems complexity on top of them.

This is not a defense of every monolith ever written. Big balls of mud — no module structure, global mutable state, business logic in database triggers — are genuinely hard to work with. But the problem there is the architecture inside the monolith, not the deployment model. You can write a big ball of mud as microservices too. Plenty of teams have.

What a well-structured monolith actually looks like

A modular monolith has enforced boundaries between business domains. The word "enforced" is doing work here. Comments and conventions don't count. You need tooling that fails the build when the boundaries are violated.

In Java, ArchUnit makes this straightforward:

import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

@AnalyzeClasses(packages = "com.example")
public class ArchitectureTest {

    // The orders module may use the payments public API, but never its internals.
    @ArchTest
    static final ArchRule orders_must_not_access_payments_internals =
        noClasses()
            .that().resideInAPackage("..orders..")
            .should().accessClassesThat()
            .resideInAPackage("..payments.internal..");

    // Dependencies point inward: domain code stays free of infrastructure concerns.
    @ArchTest
    static final ArchRule domain_must_not_depend_on_infrastructure =
        noClasses()
            .that().resideInAPackage("..domain..")
            .should().dependOnClassesThat()
            .resideInAPackage("..infrastructure..");
}

These tests run in CI. A developer in the orders module who accidentally imports from payments internals gets a build failure before the PR merges. The discipline is real without the operational overhead of a network call.

In Go, unexported identifiers enforce this at the language level. In Python, you can use something like import-linter with contract rules. The specific tool matters less than whether it runs automatically and breaks builds on violations.
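In the Python case, an import-linter contract is a short config file checked in CI. A minimal sketch, assuming a package layout mirroring the Java example above (the `myapp` package names are illustrative, not from any real project):

```ini
; .importlinter — illustrative forbidden-import contract
[importlinter]
root_package = myapp

[importlinter:contract:orders-payments]
name = Orders must not import payments internals
type = forbidden
source_modules =
    myapp.orders
forbidden_modules =
    myapp.payments.internal
```

Running `lint-imports` in the pipeline then fails the build on any violation, just as the ArchUnit rules do.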

The performance argument that cuts both ways

People cite microservices for "independent scaling" — but in most systems under 100 million requests per day, scaling a monolith is simpler and cheaper. You run more instances. Kubernetes handles the load distribution. There's no inter-service network overhead, no serialization cost on every request, no additional latency introduced by service hops.
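Scaling out the single artifact really is a one-line change in most orchestrators. A minimal sketch of the idea, assuming a containerized monolith (the names and image are hypothetical):

```yaml
# Illustrative Deployment: scale the monolith by running more identical instances.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith
spec:
  replicas: 6          # horizontal scaling is this field, nothing more
  selector:
    matchLabels:
      app: monolith
  template:
    metadata:
      labels:
        app: monolith
    spec:
      containers:
        - name: app
          image: registry.example.com/monolith:1.42.0  # hypothetical image
          ports:
            - containerPort: 8080
```

A Service in front of these pods spreads traffic across replicas with no serialization or network-hop cost inside the application itself.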

The monolith at high load eventually becomes a problem when different components have dramatically different resource profiles — CPU-intensive ML inference running next to I/O-bound API handling, for example. At that point, the scaling argument for separation is legitimate. Before that point, horizontal scaling of a single stateless process is entirely adequate.

The data point most architecture discussions skip: a properly tuned single-process API on a 32-core host can handle on the order of 50,000–100,000 requests per second (for I/O-bound workloads, depending heavily on DB latency). For perspective, 100 million requests per day averages roughly 1,200 requests per second. Most applications never come close to needing more. If yours does, you'll know — you'll have traffic data proving it.

Deployment independence is achievable in a monolith too

The argument that microservices enable team deployment independence is real — but the same outcome is achievable at smaller scale through good CI/CD discipline in a monolith. Feature flags (LaunchDarkly, Unleash, or a simple DB-backed flag service) let different teams ship code independently without shipping behavior independently. Teams merge to main on their own schedule. Flags control activation.
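A DB-backed flag service can be as small as a keyed lookup with a safe default. A minimal in-memory sketch — the class and method names are illustrative, and a production version would read from a table or a flag provider like the ones named above:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal feature-flag service: flags default to off, so merged but
// unfinished code paths stay dark until someone explicitly enables them.
class FlagService {
    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    // In a DB-backed version these would be an UPDATE plus a cache refresh.
    void enable(String flag)  { flags.put(flag, true); }
    void disable(String flag) { flags.put(flag, false); }

    // Unknown flags read as disabled: shipping code is not shipping behavior.
    boolean isEnabled(String flag) {
        return flags.getOrDefault(flag, false);
    }
}
```

The orders team merges its new checkout path behind `isEnabled("new-checkout")` whenever it is ready; activation is a flag flip, fully decoupled from the deploy schedule.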

Canary deployments and blue-green deploys work on monoliths. If your monolith takes forty-five minutes to deploy, that's a build pipeline problem, not a monolith problem. Incremental compilation, proper caching in Docker layers, parallel test execution — these reduce monolith deploy times to under ten minutes in most codebases.
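Docker layer caching alone often recovers most of that time: copy the dependency manifest before the source, so dependencies are re-fetched only when the manifest changes. A sketch for a Maven build — paths, tags, and flags are illustrative, not prescriptive:

```dockerfile
# Stage 1: build. The dependency layer stays cached until pom.xml changes.
FROM maven:3.9-eclipse-temurin-21 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline   # cached: re-runs only when pom.xml changes
COPY src ./src
RUN mvn -q package                 # only source changes invalidate this layer

# Stage 2: slim runtime image carrying just the artifact.
FROM eclipse-temurin:21-jre
COPY --from=build /app/target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

On a typical day-to-day change, only the `COPY src` layer onward rebuilds; the dependency download never reruns.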

When the monolith genuinely becomes a problem

There are real scenarios where a monolith creates problems a service split is the right answer to:

  • A single component has a security or compliance requirement mandating data isolation (PCI-DSS cardholder data is the canonical example)
  • A component needs a different runtime or language for legitimate technical reasons (ML inference in Python alongside a Java API)
  • You have grown to fifteen or more teams and release coordination through a single pipeline is a measurable bottleneck

None of those conditions apply to most teams making the microservices argument. The monolith is not the reason you're moving slowly. Address the actual reason: unclear ownership, missing CI/CD investment, absent module boundaries. Then re-evaluate whether a split still makes sense. Often it won't.
