Securing Microservices Is Harder Than Securing a Monolith
by Eric Hanson, Backend Developer at Clean Systems Consulting
The security model you inherited from the monolith no longer applies
In a monolith, security is enforced at the edge. A request passes through authentication middleware once, the user identity is established, and every internal function call happens within the same process — no re-verification needed. This is straightforward, auditable, and correct for a single-process application.
When you split into microservices, this model breaks. Service A authenticates the user at the edge. It then calls Service B to fetch data, and Service C to process it. Neither B nor C independently verified who the user is. They're trusting that A did it correctly, and they're trusting that the caller is actually A and not something else on the internal network.
If your internal network is flat — if any service can call any other service without authentication — a compromised service has access to every other service's data and operations. This is not a theoretical risk. It's the most common attack pattern in microservices breaches: compromise a low-privilege service through a vulnerability, then use it to pivot to high-value internal services that trust internal traffic implicitly.
Service-to-service authentication is not optional
Every service-to-service call should be authenticated. The two standard approaches:
Mutual TLS (mTLS): both the client and server present certificates. Each service has a certificate issued by an internal CA. The receiving service verifies the caller's certificate before processing the request. This is the strongest model — cryptographic proof of identity at the transport layer.
mTLS is operationally complex to manage manually: certificate issuance, rotation, revocation, and distribution. A service mesh (Istio, Linkerd) handles this transparently. Services don't change their code; the mesh sidecar (Envoy proxy) handles mutual TLS for all inter-service traffic.
# Istio PeerAuthentication: enforce mTLS for all services in namespace
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT  # reject any non-mTLS traffic
JWT-based service tokens: services authenticate using short-lived JWTs issued by an internal token service or OAuth 2.0 authorization server (Keycloak, Auth0 internal, Vault's token service). The receiving service validates the JWT signature and checks that the sub claim identifies a known service identity.
This is easier to implement without a service mesh but requires managing token issuance, expiry, and distribution across all services.
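As a minimal sketch of the receiving side, here is a validator that checks an HS256 service token's signature and verifies the sub claim against an allow-list of known service identities. HS256 with a shared secret keeps the example self-contained; production setups more often use RS256 with keys fetched from a JWKS endpoint, and the class and identity names here are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.Set;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: validate a service-to-service JWT (HS256 for self-containment).
public class ServiceTokenValidator {
    private final byte[] secret;             // shared signing secret (assumption)
    private final Set<String> knownServices; // allow-list of service identities

    public ServiceTokenValidator(byte[] secret, Set<String> knownServices) {
        this.secret = secret;
        this.knownServices = knownServices;
    }

    public boolean isValid(String jwt) throws Exception {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) return false;

        // 1. Recompute the HMAC over header.payload and compare it
        //    to the token's signature in constant time.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] expected = mac.doFinal(
            (parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII));
        byte[] actual = Base64.getUrlDecoder().decode(parts[2]);
        if (!MessageDigest.isEqual(expected, actual)) return false;

        // 2. Check that the sub claim names a known service identity
        //    (naive string scan instead of a JSON parser, for brevity).
        String payload = new String(
            Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        for (String svc : knownServices) {
            if (payload.contains("\"sub\":\"" + svc + "\"")) return true;
        }
        return false;
    }
}
```

Note that expiry (exp) and audience checks, which a real deployment also needs, are omitted here for brevity.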
The principle of least privilege applied to services
Every service should be able to access only what it needs. This means:
- Network policies that restrict which services can call which other services (Kubernetes NetworkPolicy or service mesh authorization policies)
- Database credentials scoped to the minimum necessary permissions (SELECT-only for read-only services, specific table access rather than full schema access)
- Secret access scoped to the specific secrets each service needs (Vault policies, AWS IAM roles per service)
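With a service mesh, the first bullet can be expressed as a mesh-level authorization policy rather than a network policy. As an illustrative sketch, an Istio AuthorizationPolicy that allows only the order-service workload identity to call payment-service might look like this (service names, namespace, and service account are assumptions):

# Istio AuthorizationPolicy: only the order-service identity may call payment-service
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payment-service-authz
  namespace: production
spec:
  selector:
    matchLabels:
      app: payment-service
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/production/sa/order-service"]

Because the principal is a cryptographic mTLS identity rather than an IP address, this holds even if pod IPs change or an attacker spoofs network-level identifiers.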
# Kubernetes NetworkPolicy: only Order Service can call Payment Service
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-service-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payment-service
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: order-service
Without explicit network policies, Kubernetes allows all pod-to-pod communication within a cluster by default. A compromised service can reach any other service. Network policies limit the blast radius of a compromise.
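A common baseline is a default-deny ingress policy for the whole namespace, so that explicit allow rules like the payment-service policy become the only permitted paths (namespace name assumed):

# Default-deny: block all ingress to pods in the namespace unless another policy allows it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}  # selects every pod in the namespace
  policyTypes:
  - Ingress

With this in place, forgetting to write a policy for a new service fails closed (the service is unreachable) rather than open.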
User identity propagation across services
When a user-initiated request flows through multiple services, each service may need to know who the user is for authorization decisions. The standard approach is to propagate the user's JWT in the Authorization header through the entire service chain.
Each service validates the JWT independently — it does not rely on an upstream service to have validated it:
@Component
public class JwtAuthenticationFilter extends OncePerRequestFilter {

    private final JwtValidator jwtValidator;

    public JwtAuthenticationFilter(JwtValidator jwtValidator) {
        this.jwtValidator = jwtValidator;
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws IOException, ServletException {
        String token = extractBearerToken(request);
        if (token != null) {
            // Each service validates the token independently
            Claims claims = jwtValidator.validate(token);
            SecurityContextHolder.getContext()
                .setAuthentication(new JwtAuthentication(claims));
        }
        chain.doFilter(request, response);
    }

    private String extractBearerToken(HttpServletRequest request) {
        String header = request.getHeader("Authorization");
        return (header != null && header.startsWith("Bearer "))
            ? header.substring(7) : null;
    }
}
The JWT is validated at every service boundary using the same public key (or JWKS endpoint from the auth server). Intermediate services forward the original user token downstream — they don't re-issue tokens on the user's behalf.
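The forwarding step can be as simple as copying the incoming Authorization header onto the outbound request unchanged. A minimal sketch with java.net.http (the class name and downstream URL are illustrative):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: propagate the caller's Authorization header onto a downstream
// request so the next service can validate the same user JWT itself.
public class TokenRelay {
    public static HttpRequest withUserToken(String downstreamUrl,
                                            String incomingAuthHeader) {
        HttpRequest.Builder builder =
            HttpRequest.newBuilder(URI.create(downstreamUrl)).GET();
        if (incomingAuthHeader != null && incomingAuthHeader.startsWith("Bearer ")) {
            // Forward the original user token; never re-issue on the user's behalf
            builder.header("Authorization", incomingAuthHeader);
        }
        return builder.build();
    }
}
```

In a Spring codebase the same logic usually lives in a ClientHttpRequestInterceptor or WebClient filter so every outbound call picks it up automatically.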
Secrets management
Secrets (database passwords, API keys, encryption keys) should never be stored in plaintext in environment variables or in source code. In a microservices environment, the number of secrets multiplies with the number of services, making manual secrets management genuinely dangerous.
HashiCorp Vault with Kubernetes auth is the most robust approach: services authenticate to Vault using their Kubernetes service account, and Vault issues short-lived secrets that rotate automatically. The Vault Agent sidecar or the Vault CSI provider can inject secrets into pods without the application code needing to know about Vault at all.
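The agent-sidecar pattern is driven by pod annotations. As a sketch, the pod template of a Deployment might carry annotations like these (the Vault role and secret path are assumptions for illustration):

# Pod template annotations for the Vault Agent injector
template:
  metadata:
    annotations:
      vault.hashicorp.com/agent-inject: "true"
      vault.hashicorp.com/role: "order-service"
      vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/order-service-ro"

The injector adds a sidecar that authenticates with the pod's service account, fetches the secret, and writes it to a shared volume; the application just reads a file.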
At minimum: use Kubernetes Secrets with encryption at rest enabled (via KMS provider), restrict secret access with RBAC, and audit secret access. Plaintext secrets in ConfigMaps or environment variables in Deployment manifests are not acceptable in a production security posture.
The operational discipline that makes this work
Security in microservices requires ongoing discipline, not one-time configuration:
- Rotate service credentials and certificates regularly (Vault and cert-manager handle this automatically)
- Audit service-to-service call patterns for anomalies (unexpected callers are a signal of compromise or misconfiguration)
- Run security scanning on container images in your CI pipeline (Trivy, Grype) to catch known CVEs before deployment
- Practice the principle of immutable infrastructure: containers should not be SSH-accessible in production; debugging happens through logs and traces, not shells
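For the rotation bullet, cert-manager expresses renewal declaratively. A sketch of a Certificate resource that keeps a service certificate fresh (issuer and DNS names are assumptions):

# cert-manager Certificate: automatically renewed service certificate
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: payment-service-cert
  namespace: production
spec:
  secretName: payment-service-tls
  duration: 720h     # certificate lifetime: 30 days
  renewBefore: 240h  # renew 10 days before expiry
  issuerRef:
    name: internal-ca-issuer
    kind: ClusterIssuer
  dnsNames:
  - payment-service.production.svc.cluster.local

Short lifetimes plus automatic renewal mean a leaked certificate has a bounded useful life, without anyone remembering to rotate it by hand.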
The attack surface of a microservices system is larger than a monolith's. That's a fact of the architecture. The response is layered security — network isolation, mutual authentication, least-privilege access, and secrets management — applied consistently across all services, not just the ones that handle user-facing traffic.