Splitting Your App Into Services Won't Fix Bad Code
by Eric Hanson, Backend Developer at Clean Systems Consulting
The thing teams believe that isn't true
The pitch sounds reasonable: "Our monolith is a mess. If we break it into services, each service will be small enough to keep clean." Six months into the migration, the services are live and the code is still a mess — now spread across twelve repositories instead of one, with the added complexity of HTTP calls between the tangled pieces. The migration did not improve the code. It moved it.
This happens consistently enough that it should be treated as a rule: the quality of code that enters a service extraction equals the quality of code that exits. If your order processing logic has unclear responsibilities, poor error handling, and implicit coupling to your user model — extracting it into an "Order Service" produces an Order Service with unclear responsibilities, poor error handling, and now an HTTP dependency on a User Service.
Why splitting doesn't clean things up
Service extraction is a structural change, not a design change. The design — the domain model, the separation of concerns, the handling of edge cases — has to happen before, during, or after the split, but the split itself does not cause it to happen.
The reason teams believe otherwise is a confusion about what makes monoliths hard to work with. Monoliths feel messy because everything is reachable from everywhere. Any class can instantiate any other class. Any function can call any database. The surface area of coupling is effectively unbounded. Service boundaries feel like they solve this because they enforce a hard interface — to call another service, you have to go through an HTTP or gRPC API. That interface imposes discipline.
But the discipline is at the boundary, not inside the service. Inside the Order Service, your code can still be a tangle of God objects, implicit assumptions, and business logic embedded in HTTP handlers. The service boundary does not reach inside.
// Before extraction: tangled in monolith
public class OrderController {
    public Response createOrder(Request req) {
        User user = db.query("SELECT * FROM users WHERE id = ?", req.userId);
        if (user.creditLimit - user.currentBalance < req.total) {
            // credit logic mixed with order logic
            emailService.send(user.email, "Credit limit exceeded");
            return Response.error(402, "Insufficient credit");
        }
        // 200 more lines of mixed concerns...
    }
}

// After extraction: same tangle, now in a service
public class OrderController { // inside the Order Service
    public Response createOrder(Request req) {
        UserDto user = userClient.getUser(req.userId); // HTTP call instead of a DB query
        if (user.creditLimit - user.currentBalance < req.total) {
            // still mixed concerns, now with network failure modes too
            notificationClient.sendCreditAlert(user.email);
            return Response.error(402, "Insufficient credit");
        }
        // still 200 lines of mixed concerns
    }
}
The coupling survived. Now it has latency and failure modes.
What actually fixes bad code
The work that needs to happen — before, during, or after a service split — is domain modeling. Specifically:
Identify what each piece of code is actually responsible for. Credit checking is not an order concern. It belongs in a credit or financial service with a clear API: "can this user spend this amount?" The order service asks that question and gets a yes or no. It does not receive user financial data and make credit decisions itself.
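A minimal sketch of what that question-shaped boundary looks like in Java. The names here (CreditCheck, canSpend, OrderService) are illustrative, not from the original code; the point is that the order side receives only a yes-or-no answer, never the financial data behind it:

import java.math.BigDecimal;
import java.util.Map;

// The only question the order code is allowed to ask the credit domain.
interface CreditCheck {
    boolean canSpend(long userId, BigDecimal amount);
}

// In the monolith this is an in-process implementation; after extraction it
// becomes an HTTP client that implements the same interface.
class InMemoryCreditCheck implements CreditCheck {
    private final Map<Long, BigDecimal> availableCredit;

    InMemoryCreditCheck(Map<Long, BigDecimal> availableCredit) {
        this.availableCredit = availableCredit;
    }

    @Override
    public boolean canSpend(long userId, BigDecimal amount) {
        BigDecimal available = availableCredit.getOrDefault(userId, BigDecimal.ZERO);
        return available.compareTo(amount) >= 0;
    }
}

class OrderService {
    private final CreditCheck credit;

    OrderService(CreditCheck credit) {
        this.credit = credit;
    }

    String createOrder(long userId, BigDecimal total) {
        // No creditLimit or currentBalance crosses this boundary -- just the answer.
        if (!credit.canSpend(userId, total)) {
            return "rejected: insufficient credit";
        }
        return "accepted";
    }
}

class CreditCheckDemo {
    public static void main(String[] args) {
        OrderService orders = new OrderService(
            new InMemoryCreditCheck(Map.of(1L, new BigDecimal("100"))));
        System.out.println(orders.createOrder(1L, new BigDecimal("50")));
    }
}

Because the order side depends only on the interface, swapping the in-process implementation for a remote client at extraction time changes one constructor argument, not the order logic.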
Enforce boundaries in your codebase first. Before splitting into services, enforce the boundaries as internal module boundaries. In Java, this means package-private classes and ArchUnit rules in CI. In Go, this means unexported identifiers and internal packages, which the compiler enforces. If the code cannot respect an internal boundary, it will not respect a service boundary either — it will just express the same coupling through API calls.
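As one way to put such a rule in CI, here is a sketch using ArchUnit. The package names (com.example, ..orders.., ..credit.internal..) are hypothetical; the rule fails the build if any order code reaches into the credit domain's internals, which is the in-process equivalent of the future service boundary:

import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class BoundaryRules {
    // Run as a unit test; it throws if the boundary is violated anywhere.
    void ordersMustNotReachIntoCreditInternals() {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example");
        ArchRule rule = noClasses()
            .that().resideInAPackage("..orders..")
            .should().dependOnClassesThat().resideInAPackage("..credit.internal..");
        rule.check(classes);
    }
}

If this rule cannot be made to pass in the monolith, the extraction would only trade the compile-time violations for runtime API calls.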
Write tests that define behavior, not implementation. A service extracted from untested code inherits not just the logic but the untestability. If you cannot test a unit of behavior in isolation in the monolith, extracting it into a service does not make it testable. You need the test suite before the extraction so the tests can validate that the extracted service behaves identically to the original code.
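A behavior-defining test is one that pins inputs to observable outputs and says nothing about internals. A minimal sketch, with a hypothetical pricing function standing in for the logic being extracted — the same assertions run unchanged against the monolith code and against the extracted service:

import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical unit of behavior we want to pin down before extraction.
class OrderPricing {
    static BigDecimal totalWithTax(BigDecimal subtotal, BigDecimal taxRate) {
        return subtotal.add(subtotal.multiply(taxRate))
                       .setScale(2, RoundingMode.HALF_UP);
    }
}

class CharacterizationTest {
    public static void main(String[] args) {
        // These assertions define the behavior the extracted service must
        // reproduce. They mention inputs and outputs only -- no mocks of
        // internal collaborators, so the implementation is free to move.
        check(OrderPricing.totalWithTax(new BigDecimal("100.00"), new BigDecimal("0.08")), "108.00");
        check(OrderPricing.totalWithTax(new BigDecimal("0.00"), new BigDecimal("0.08")), "0.00");
        System.out.println("behavior pinned");
    }

    static void check(BigDecimal actual, String expected) {
        if (!actual.toPlainString().equals(expected)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}

Tests that mock internal collaborators would break the moment the code moves; tests like these survive the move and tell you whether behavior did too.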
When the mess is in the data model
The hardest case is when the bad code reflects a bad data model — tables that conflate multiple concerns, foreign keys that create implicit coupling, schema designs that make simple queries require five joins.
This cannot be fixed by service extraction. If your orders table has thirty columns that belong to three different domains — shipping, billing, inventory reservation — splitting by service does not fix the schema. It forces you to either keep the bad schema inside the service or do the schema cleanup as part of the extraction, which doubles the scope and risk of the project.
The right sequence: fix the data model first. Separate the concerns in the schema. Run both old and new schemas in parallel with a migration layer if needed. Validate the new model in production. Then extract the service around the clean model.
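The migration-layer step can be sketched as a dual-write store. This is an in-memory stand-in, not a real data access layer — the maps represent the old wide table and the new split tables, and all names (DualWriteOrderStore, LegacyOrderRow, and so on) are hypothetical:

import java.util.HashMap;
import java.util.Map;

// Stand-ins for the old wide table and the new, separated tables.
record LegacyOrderRow(long id, String shippingAddress, String billingAccount) {}
record ShippingRow(long orderId, String address) {}
record BillingRow(long orderId, String account) {}

// Migration layer: every write goes to both schemas; reads stay on the old
// schema until the new one is validated in production, then the flag flips.
class DualWriteOrderStore {
    final Map<Long, LegacyOrderRow> legacy = new HashMap<>();
    final Map<Long, ShippingRow> shipping = new HashMap<>();
    final Map<Long, BillingRow> billing = new HashMap<>();
    boolean readFromNewSchema = false;

    void save(long id, String address, String account) {
        legacy.put(id, new LegacyOrderRow(id, address, account)); // old schema
        shipping.put(id, new ShippingRow(id, address));           // new schema
        billing.put(id, new BillingRow(id, account));
    }

    String shippingAddress(long id) {
        return readFromNewSchema
            ? shipping.get(id).address()
            : legacy.get(id).shippingAddress();
    }

    // Validation step: confirm the schemas agree before cutting reads over.
    boolean schemasAgree(long id) {
        LegacyOrderRow old = legacy.get(id);
        return old != null
            && shipping.containsKey(id)
            && billing.containsKey(id)
            && old.shippingAddress().equals(shipping.get(id).address())
            && old.billingAccount().equals(billing.get(id).account());
    }
}

Only once the agreement check holds in production does the read path flip to the new schema, and only then does it make sense to extract a service around it.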
The actual value of service extraction
Done correctly, service extraction provides genuine benefits: deployment independence, isolated failure domains, independent scaling, and clear ownership. But those benefits flow from doing the design work, not from the act of splitting. Teams that extract services from clean, well-modeled code get the benefits. Teams that extract services from tangled code get distributed tangled code, which is strictly worse.
If your monolith is a mess, spend one quarter cleaning the internal structure — better module boundaries, cleaner domain models, test coverage on core paths — before committing to an extraction. You may find the monolith is now manageable. If you still want to extract, you're starting from a foundation that will produce services worth operating.