Why Your Docker Build Is Slow and How to Fix It
by Eric Hanson, Backend Developer at Clean Systems Consulting
The build that takes four minutes when it should take thirty seconds
Your team has a habit: push a one-line fix, wait four minutes for CI to build the Docker image, then wait another two minutes for it to push to the registry. Repeat for every iteration. By sprint's end, developers have wasted hours watching progress bars. The Dockerfile "works," so nobody investigates.
Almost always, the root cause is layer cache invalidation — either layers are ordered in a way that forces unnecessary rebuilds, or CI starts from scratch every time because it doesn't have access to cached layers. Both problems have known solutions.
How Docker's layer cache actually works
Docker builds images layer by layer. Each RUN, COPY, and ADD instruction creates a new layer. Before executing a layer, Docker checks its cache: if the instruction and its inputs haven't changed since the last build, Docker reuses the cached result instead of re-executing.
The critical rule: when a layer's cache is invalidated, every subsequent layer is also invalidated, regardless of whether those layers changed.
This is why instruction order is not cosmetic. It's a caching strategy.
# Bad ordering — copies source before installing dependencies
FROM node:20-alpine
WORKDIR /app
# invalidated on every source change
COPY . .
# always runs, even if package.json didn't change
RUN npm ci
# Good ordering — install dependencies first, copy source after
FROM node:20-alpine
WORKDIR /app
# only invalidated if deps change
COPY package.json package-lock.json ./
# cached unless deps change
RUN npm ci
# invalidated on source change
COPY src/ ./src/
In the second version, a typical source-file change replays only the final COPY — npm ci is served from cache. For a project with 500 dependencies, that can be the difference between roughly 3 minutes and 5 seconds.
The dependency manifest pattern
The general pattern: copy whatever controls your dependency installation first, install dependencies, then copy application source.
Maven/Gradle:
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline -q # cache the dep download
COPY src ./src
RUN mvn package -DskipTests -q
Python:
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
Go:
FROM golang:1.22-alpine AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o app ./cmd/server
The pattern holds across ecosystems. The key is that dependency manifests change far less frequently than source files. Separating them into their own layer means you pay the dependency installation cost once and reuse the cached layer until the manifest actually changes.
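One caveat with broad COPY . . steps (as in the Python and Go examples): any file in the build context invalidates that layer, including files the image never needs. A .dockerignore file keeps them out of the context entirely. The entries below are illustrative — adjust to your project:

```
.git
node_modules
target
__pycache__
*.log
.env
```

Beyond cache hygiene, this also shrinks the context Docker uploads to the daemon at the start of every build.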
Multi-stage builds don't slow things down — they're often faster
A common assumption is that multi-stage builds are slower because "there are more steps." In practice, they're usually faster because they produce smaller images (less to push) and they naturally enforce better layer separation.
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline -q
COPY src ./src
RUN mvn package -DskipTests -q
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
COPY --from=build /app/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
The build stage can be cached independently from the runtime stage. If nothing changes in the build stage, both stages are served from cache and the image is assembled in seconds.
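A side benefit of named stages: you can build the builder stage on its own, for example to run tests in CI without producing the runtime image. A sketch — the image name is a placeholder:

```shell
# Build only the "build" stage; the runtime stage is skipped entirely
docker build --target build -t myapp-build .
```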
BuildKit: turn it on if you haven't
Docker BuildKit (the default builder since Docker Engine 23.0; on older versions, set DOCKER_BUILDKIT=1) improves on the classic builder in a few important ways:
- Parallel execution of independent build stages
- Improved cache backend options (inline, registry, S3, GitHub Actions cache)
- --mount=type=cache for persistent per-layer caches between builds
The --mount=type=cache option is particularly useful for package managers:
RUN --mount=type=cache,target=/root/.m2 \
mvn dependency:go-offline -q
This mounts a persistent cache directory for Maven's local repository. Between builds on the same machine, Maven finds its downloaded JARs already present and skips re-downloading them. On a large project this alone can shave 60–90 seconds off every non-cached build.
Same for pip:
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r requirements.txt
And npm:
RUN --mount=type=cache,target=/root/.npm \
npm ci
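Cache mounts are a BuildKit-only feature. If your Docker version predates the newer Dockerfile frontend, add the syntax directive as the very first line of the Dockerfile so the --mount flag is understood:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim
# ... RUN --mount=type=cache steps below now parse correctly
```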
Note: --mount=type=cache only helps on persistent build machines. In ephemeral CI runners that start fresh every run, you need registry-based caching instead.
Why CI builds are always slow
Local builds benefit from the layer cache stored on your machine. CI runners are typically ephemeral — they start fresh, pull the image, build from scratch, and are destroyed. The layer cache from the previous run is gone.
The fix is exporting the cache to a registry between runs. With BuildKit:
# Build and push cache to registry
docker buildx build \
--cache-from type=registry,ref=your-registry/your-image:cache \
--cache-to type=registry,ref=your-registry/your-image:cache,mode=max \
-t your-registry/your-image:latest \
--push .
mode=max exports all intermediate layer caches, not just the final stage. The next CI run pulls this cache before building, and the cache hit rate typically jumps to 70–90% for source-only changes.
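For builds that shouldn't publish anything — pull-request validation, for example — you can consume the cache without updating it by passing only --cache-from. A sketch, with the same placeholder registry names as above:

```shell
docker buildx build \
  --cache-from type=registry,ref=your-registry/your-image:cache \
  -t your-registry/your-image:pr-check \
  --load .
```

Omitting --cache-to means a broken PR branch can never poison the shared cache.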
GitHub Actions has its own cache backend that integrates with the Actions cache service:
- uses: docker/setup-buildx-action@v3   # required for the gha cache backend
- uses: docker/build-push-action@v5
  with:
    context: .                          # assumes the Dockerfile sits at the repo root
    push: true
    tags: your-registry/your-image:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
This avoids registry costs for cache storage but has a 10GB limit per repository.
The build that's slow for a different reason
If you've done all of the above and the build is still slow, check whether a RUN instruction is doing something inherently expensive — fetching a large artifact, compiling a native extension, running apt-get update against a slow mirror. Profile with:
docker build --progress=plain . 2>&1 | grep -E "^#[0-9]+ (DONE|CACHED)"
This shows per-step timing. The expensive step is obvious from the output.
What to fix first
Check your Dockerfile right now: is COPY . . or an equivalent happening before your dependency installation step? If yes, reorder it. That's an hour of work with immediate, measurable results. Next, if your CI builds start from scratch every run, add registry cache export to your pipeline. In most setups these two changes reduce build time by 60–80%.