Docker vs Bare Metal — When Containerizing Is Worth the Overhead

by Eric Hanson, Backend Developer at Clean Systems Consulting

The "works on my machine" problem and its actual solution

Containers solved a real problem: environment inconsistency between development, staging, and production. But the scope of that solution is often overstated. The "works on my machine" problem has two components: dependency version mismatches and configuration differences. Containers reliably solve dependency mismatches. Configuration differences — wrong environment variables, different secrets, different connected services — exist inside containers just as much as outside them.
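
To make the distinction concrete, here is a sketch (the image name and variable values are hypothetical): the same immutable image behaves differently under different runtime configuration, so the image fixes dependencies but not config drift.

```shell
# Identical image, divergent behaviour: configuration is injected at run
# time, so a wrong value travels into the container unchanged.
# "myapp:1.4.2" and the URLs below are hypothetical.
docker run -e DATABASE_URL=postgres://prod-db:5432/app  myapp:1.4.2
docker run -e DATABASE_URL=postgres://stage-db:5432/app myapp:1.4.2
# Dependency versions inside both containers are identical; everything
# passed via -e is configuration the image cannot lock down.
```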

The question is whether the full containerization and orchestration stack is justified for your specific operational problem, or whether you are adding complexity because containers have become the default assumption in infrastructure conversations.

What containers genuinely give you

Reproducibility: A Docker image is a versioned, immutable artifact. The same image runs in CI, staging, and production. Language runtime, system libraries, application dependencies — all locked. Rolling back a deployment means deploying the previous image tag. This is objectively better than the "golden AMI" pattern or Ansible-managed bare metal for teams without dedicated infrastructure engineering.
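
Rollback under this model is a pointer move, not a rebuild. A sketch with kubectl (the deployment, registry, and tags are hypothetical):

```shell
# Roll forward by pointing the Deployment at a new immutable image tag.
kubectl set image deployment/api api=registry.example.com/api:v1.8.3

# If v1.8.3 misbehaves, redeploy the previous artifact the same way...
kubectl set image deployment/api api=registry.example.com/api:v1.8.2

# ...or let Kubernetes replay the previous ReplicaSet directly.
kubectl rollout undo deployment/api
```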

Density: Running multiple services on the same host without dependency conflicts. If you have a Python service, a Go service, and a JVM service that all need different library versions or system dependencies, containers give you process isolation without the overhead of separate VMs. Kubernetes takes this further — bin-packing workloads onto nodes based on resource requests and limits.
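
Bin-packing is driven by the requests and limits each container declares. A minimal pod-spec fragment (names and numbers are illustrative):

```yaml
# The scheduler places pods onto nodes by summing "requests";
# "limits" caps what the container may actually consume at runtime.
containers:
  - name: api
    image: registry.example.com/api:v1.8.3
    resources:
      requests:
        cpu: "250m"      # guaranteed share used for scheduling decisions
        memory: "256Mi"
      limits:
        cpu: "1"         # throttled above this
        memory: "512Mi"  # OOM-killed above this
```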

# A production-ready multi-stage build — the runtime image stays minimal.
# The build stage uses a Maven base image; a plain JDK image has no mvn binary.
FROM maven:3.9-eclipse-temurin-21 AS builder
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline -B
COPY src ./src
RUN mvn package -DskipTests -B

FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder /app/target/app.jar app.jar
USER appuser

ENV JAVA_OPTS="-Xms256m -Xmx512m -XX:+UseG1GC"
# exec replaces the shell, so the JVM runs as PID 1 and receives SIGTERM directly
ENTRYPOINT ["sh", "-c", "exec java $JAVA_OPTS -jar app.jar"]

Deployment velocity: With a container registry and a Kubernetes deployment, rolling out a new version is a single manifest update. Kubernetes handles the rollout strategy — rolling updates, canary deployments, blue-green — without downtime when configured correctly.
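
"Configured correctly" mostly means an explicit rollout strategy plus a readiness probe, so traffic only shifts to pods that are actually healthy. A sketch of a zero-downtime rolling update (names, replica count, and probe path are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # bring up one extra pod at a time
  selector:
    matchLabels: { app: api }
  template:
    metadata:
      labels: { app: api }
    spec:
      containers:
        - name: api
          image: registry.example.com/api:v1.8.3
          readinessProbe:   # traffic shifts only to pods that pass this
            httpGet: { path: /healthz, port: 8080 }
```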

What containers cost you

Performance overhead on I/O-intensive workloads: The container networking stack adds latency. Docker's default bridge networking adds 10-30% overhead on high-frequency inter-container calls compared to direct socket communication on bare metal. For a database-heavy service making thousands of connections per second, this is measurable. On Kubernetes with overlay networking (Flannel, Calico, Cilium), the overhead is lower than Docker bridge but still non-zero.
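
When that latency matters on a single host, the bridge can be bypassed entirely; the container then shares the host's network namespace, at the cost of port isolation (image name hypothetical):

```shell
# Skip the docker0 bridge and NAT: the process binds directly on the
# host's interfaces, eliminating the veth/bridge hop per packet.
docker run --network host myapp:1.4.2
```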

For databases themselves — PostgreSQL, MySQL, Redis — running on bare metal or dedicated VMs consistently outperforms running in containers on I/O performance benchmarks. The container I/O stack adds latency between the process and the actual storage device. Teams that containerize everything, including their primary database, are trading 5-15% I/O performance for deployment uniformity. Whether that trade is worth it depends on your workload.

Kubernetes operational complexity: A properly configured Kubernetes cluster — with RBAC, network policies, pod security standards, resource quotas, autoscaling, certificate management, and a working ingress controller — requires significant engineering investment. Running Kubernetes well is a specialty. Teams without platform engineering capacity will spend more time managing Kubernetes than it saves in deployment automation.

EKS, GKE, and AKS reduce the control plane burden but not the workload configuration burden. A team of 5 engineers running 3 services on Kubernetes is almost certainly over-engineering. The same services on a single VM with systemd, Nginx, and a deployment script run by Capistrano or a CI pipeline are simpler to operate and fail in more predictable ways.
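
On that single-VM setup, each service needs little more than a unit file. A systemd sketch (paths, user, and JVM flags are hypothetical):

```ini
# /etc/systemd/system/api.service — supervise the service the boring way
[Unit]
Description=API service
After=network-online.target

[Service]
User=app
WorkingDirectory=/opt/api
ExecStart=/usr/bin/java -Xms256m -Xmx512m -jar /opt/api/app.jar
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```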

When bare metal or plain VMs make sense

Direct VM deployment (EC2, GCE, Hetzner) without containers makes sense when:

  • You are running a small number of services (fewer than 5-8) with well-understood dependencies
  • Your team does not have container orchestration expertise and cannot afford the ramp-up
  • The services are stable and do not need frequent updates — the deployment velocity benefit of containers is smaller if you deploy monthly
  • Performance is a primary concern and you are running I/O-intensive workloads where the container networking and storage layers create measurable overhead

The cost of bare metal or plain VM deployment is discipline: you need documented dependency management (Ansible, Chef), clear server configuration state, and a coherent deployment process. These are solvable operational problems that many teams have solved without containers.
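
"Documented dependency management" can be as small as a playbook that states the server's desired state explicitly. An Ansible sketch (host group, package list, and paths are illustrative):

```yaml
# site.yml — the server's dependencies, written down and repeatable
- hosts: app_servers
  become: true
  tasks:
    - name: Install runtime dependencies
      apt:
        name: [openjdk-21-jre-headless, nginx]
        state: present
    - name: Deploy the application unit file
      copy:
        src: files/api.service
        dest: /etc/systemd/system/api.service
      notify: restart api
  handlers:
    - name: restart api
      systemd:
        name: api
        state: restarted
        daemon_reload: true
```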

The pattern that fits most mid-size teams

Containers for the application tier: stateless API services, background workers, scheduled jobs. These benefit from the density, reproducibility, and deployment velocity that containers provide, and their I/O patterns are not container-bottlenecked.

Managed services for the data tier: RDS for PostgreSQL, ElastiCache for Redis, MSK for Kafka. Managed services give you the operational benefits of running on dedicated infrastructure without managing it yourself. You do not need to containerize your database.

This pattern sidesteps the hardest parts of running containers in production (stateful storage, persistent volumes, PVC management in Kubernetes) while capturing the genuine benefits for the stateless application layer.

Deploy Kubernetes when you have more than 15 services, a dedicated platform engineer who can own it, or a workload that genuinely benefits from its autoscaling and scheduling capabilities. Otherwise, ECS on Fargate (serverless containers) or a well-configured fleet of VMs with container deployments via Docker Compose or plain Docker runs are simpler and sufficient.
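
The Compose variant of that middle ground keeps the container benefits without an orchestrator. A sketch (image names and ports are hypothetical):

```yaml
# docker-compose.yml — containers without orchestration
services:
  api:
    image: registry.example.com/api:v1.8.3
    ports: ["8080:8080"]
    restart: unless-stopped
    env_file: .env          # configuration still lives outside the image
  worker:
    image: registry.example.com/worker:v2.1.0
    restart: unless-stopped
    env_file: .env
```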
