Docker Compose Is Not Just for Local Development
by Eric Hanson, Backend Developer at Clean Systems Consulting
The tool you stop using once you leave localhost
Your team's Docker Compose workflow works well: one command to start the app, database, Redis, and a reverse proxy locally. Then you deploy to a Kubernetes cluster and the Compose file is completely irrelevant. Staging uses a separate Helm chart. Production has its own manifests. Three different configuration systems for the same application, none of them sharing anything.
This is the standard pattern, and it has real costs: drift between environments, duplicate configuration, and onboarding friction. Docker Compose has evolved past "local dev tool," and understanding where it's actually appropriate — and where it isn't — helps you decide whether to extend it or abandon it.
What Compose actually provides
Docker Compose is a multi-container orchestrator for a single host. It manages:
- Container lifecycle (start, stop, restart, health checks)
- Network creation (containers on the same Compose project communicate via service name DNS)
- Volume management (named volumes, bind mounts)
- Dependency ordering (`depends_on` with `condition`)
- Environment variable injection
- Port publishing
What it doesn't provide (and what Kubernetes/ECS/Nomad does): multi-host scheduling, horizontal scaling, rolling deployments, service discovery across hosts, and resource quota enforcement across a cluster.
The implication: Compose is genuinely useful for single-host deployments and for environments where Kubernetes is overkill.
Single-host production with Compose
Small internal services, staging environments, and low-traffic applications that don't require horizontal scaling are reasonable candidates for production Compose deployments.
Docker Compose with `restart: unless-stopped` and Docker's built-in health checking gives you container restarts on failure. Combined with a host-level process supervisor (a systemd unit running `docker compose up`) and a reverse proxy (Nginx, Traefik, or Caddy), this covers the requirements for many small-to-medium applications:
```yaml
services:
  app:
    image: your-registry/your-app:${APP_VERSION}
    restart: unless-stopped
    environment:
      - DATABASE_URL=${DATABASE_URL}
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 60s

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pg_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  proxy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data

volumes:
  pg_data:
  caddy_data:
```
A systemd unit runs this:
```ini
[Unit]
Description=Application Stack
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/opt/app
ExecStart=/usr/bin/docker compose up
ExecStop=/usr/bin/docker compose down
Restart=always

[Install]
WantedBy=multi-user.target
```
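Activating the unit is standard systemd workflow. A minimal sketch, assuming the unit file is saved as `/etc/systemd/system/app-stack.service` (the unit name is illustrative, not from the original setup):

```shell
# Make systemd pick up the new unit file
sudo systemctl daemon-reload

# Start the stack now and on every boot
sudo systemctl enable --now app-stack.service

# Follow the stack's combined output through the journal
journalctl -u app-stack.service -f
```

Because `ExecStart` runs `docker compose up` in the foreground, systemd owns the process: `Restart=always` restarts the whole stack if it dies, and the containers' logs flow into the journal.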
This is a production deployment for many teams. Not for a high-traffic public API, but for an internal tooling service or a staging environment serving dozens of users, this is entirely reasonable and significantly simpler to operate than Kubernetes.
Environment-specific overrides with Compose files
Compose supports layering configuration with multiple files:
```shell
# Base configuration
docker compose -f docker-compose.yml up

# Override with environment-specific config
docker compose -f docker-compose.yml -f docker-compose.prod.yml up
```
The override file (docker-compose.prod.yml) merges with the base, letting you share most configuration across environments while differing on the things that vary:
```yaml
# docker-compose.prod.yml
services:
  app:
    image: your-registry/your-app:${APP_VERSION}
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 512M
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```
```yaml
# docker-compose.dev.yml
services:
  app:
    build: .
    volumes:
      - ./src:/app/src  # live reload in development
    environment:
      - DEBUG=true
```
This pattern works for local vs. staging differences without maintaining two entirely separate configuration systems.
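When debugging a layered setup, it helps to see what Compose actually merged before starting anything. `docker compose config` renders the final, variable-substituted configuration for a given file stack:

```shell
# Render the fully merged configuration for the prod stack
docker compose -f docker-compose.yml -f docker-compose.prod.yml config

# List just the service names in the merged project
docker compose -f docker-compose.yml -f docker-compose.prod.yml config --services
```

Diffing the rendered output of the dev and prod stacks is a quick way to confirm that the overrides differ only where you intend them to.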
Compose for integration testing in CI
This is the use case where Compose punches above its weight. Your integration tests need a real database, a real Redis, maybe a mock HTTP service. Compose sets this up in seconds and tears it down after:
```yaml
# docker-compose.test.yml
services:
  app:
    build:
      context: .
      target: test  # build to the test stage
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    command: npm test

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test"]
      interval: 5s
      retries: 10

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      retries: 10
```
In CI:
```shell
docker compose -f docker-compose.test.yml up --exit-code-from app
```
`--exit-code-from app` makes the Compose command exit with the exit code of the `app` service (it also implies `--abort-on-container-exit`, so the whole stack stops as soon as the tests finish). If tests fail, CI fails. All dependencies are started, health-checked, and torn down automatically. No test database management, no port conflicts between parallel CI jobs (use `--project-name` to namespace them).
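A sketch of the project-name namespacing in a CI step, where `$CI_JOB_ID` stands in for whatever unique identifier your CI system provides:

```shell
# Namespace containers, networks, and volumes per CI job so
# parallel jobs on the same runner don't collide
docker compose --project-name "test-${CI_JOB_ID}" \
  -f docker-compose.test.yml up --exit-code-from app
STATUS=$?

# Always clean up, including volumes, regardless of test outcome
docker compose --project-name "test-${CI_JOB_ID}" \
  -f docker-compose.test.yml down --volumes

exit $STATUS
```

Capturing the exit code before the `down` matters: the cleanup command's own exit status would otherwise mask a test failure.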
Where Compose genuinely falls short
Be clear about the limits:
- No horizontal scaling: `docker compose up --scale app=3` runs three containers, but load balancing requires additional configuration. Kubernetes Services handle this natively.
- Single host: all services run on one machine. No failover if the host goes down.
- No rolling deployments: updating a service means stopping and restarting it. Kubernetes' `Deployment` resource handles rolling updates with zero downtime.
- No cluster-aware service discovery: services find each other by name within a single Compose project. Cross-project or cross-host discovery requires additional infrastructure.
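For completeness: the single-host load-balancing gap can be partially closed with a proxy that watches the Docker API. A hedged sketch using Traefik's Docker provider, where the router rule, hostname, and app port are illustrative assumptions rather than part of the original setup:

```yaml
# Sketch: Traefik round-robins across replicas of the same Compose
# service, e.g. after `docker compose up --scale app=3`
services:
  proxy:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  app:
    image: your-registry/your-app:${APP_VERSION}
    labels:
      - traefik.enable=true
      - traefik.http.routers.app.rule=Host(`app.internal`)  # hypothetical hostname
      - traefik.http.services.app.loadbalancer.server.port=8080
```

This balances traffic across replicas on one machine; it does nothing for the single-host failure mode, which is the harder limit.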
If you need any of these, Compose isn't the right tool for that use case. But if you're reaching for Kubernetes because it's the "right" choice rather than because you need multi-host scheduling or rolling deployments, you may be trading simplicity for complexity you don't need yet.
The migration path
For teams currently running Compose in production who are growing toward Kubernetes: Compose can coexist. Run Compose for staging and simpler services. Run Kubernetes for high-traffic or HA-required services. The skills transfer — understanding Compose networking and volume semantics translates directly to Kubernetes concepts. You don't have to migrate everything at once.