Your CI Pipeline Is Rebuilding the Same Image Over and Over

by Eric Hanson, Backend Developer at Clean Systems Consulting

The image that gets rebuilt when nothing changed

Your monorepo has eight services. You change a README file in the root. CI triggers, builds all eight Docker images, and pushes them to the registry. Nothing in any service changed — the resulting images are byte-for-byte identical to the ones already in the registry. You just spent twelve minutes and the compute budget of eight full builds on nothing.

This happens because most pipelines trigger on any push to the repository and rebuild everything, regardless of what changed. The fix requires detecting what actually changed and building only what needs to be rebuilt — or skipping the build entirely when the output would be identical.
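At its simplest, change detection is a git diff scoped to the paths a service cares about. A minimal sketch, assuming the services/api and shared/ layout used in the examples below:

```shell
# List files changed by the most recent commit, limited to the paths
# this service depends on; empty output means its image is already current
git diff --name-only HEAD~1 HEAD -- services/api/ shared/
```

The approaches below build on this idea with varying degrees of automation.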

Three approaches to avoiding redundant builds

Approach 1: path-based filtering

Only trigger the build when files relevant to that service change. In GitHub Actions:

on:
  push:
    branches: [main]
    paths:
      - 'services/api/**'
      - 'shared/common/**'    # shared library this service depends on
      - 'Dockerfile.api'
      - '.github/workflows/build-api.yml'

Separate workflow files per service, each with its own paths filter. When only services/worker/** changes, only the worker workflow triggers.

The limitation: path filters work at the workflow level. If your build logic is in a single workflow that builds all services, you need job-level filtering:

jobs:
  check-changes:
    runs-on: ubuntu-latest
    outputs:
      api-changed: ${{ steps.changes.outputs.api }}
      worker-changed: ${{ steps.changes.outputs.worker }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: changes
        with:
          filters: |
            api:
              - 'services/api/**'
              - 'shared/**'
            worker:
              - 'services/worker/**'
              - 'shared/**'

  build-api:
    needs: check-changes
    if: needs.check-changes.outputs.api-changed == 'true'
    runs-on: ubuntu-latest
    steps:
      # build api image

  build-worker:
    needs: check-changes
    if: needs.check-changes.outputs.worker-changed == 'true'
    runs-on: ubuntu-latest
    steps:
      # build worker image

The dorny/paths-filter action diffs the changed files — against the base branch for pull requests, or the previous commit for pushes — and outputs per-filter true/false values. Downstream jobs use if: conditions to skip builds for unchanged services.

Approach 2: image digest comparison

Even with path filtering, some scenarios cause unnecessary rebuilds: a dependency update affects multiple services, or CI triggers for a non-service-related change. A more precise approach: build the image locally in CI, compute its digest, and compare to the registry.

If the digest matches what's already in the registry, skip the push:

#!/bin/bash
set -euo pipefail

# Build locally, then read the image ID -- the sha256 of the image config,
# which is the same value a registry manifest reports as config.digest.
# (The original RepoDigests lookup fails for a freshly built, unpushed image.)
docker buildx build --load -t temp-image:check .
LOCAL_DIGEST=$(docker inspect temp-image:check --format '{{.Id}}')

REGISTRY_DIGEST=$(docker manifest inspect your-registry/your-app:main 2>/dev/null \
  | jq -r '.config.digest' || echo "not-found")

if [ "$LOCAL_DIGEST" = "$REGISTRY_DIGEST" ]; then
  echo "Image unchanged, skipping push"
  exit 0
fi

docker tag temp-image:check "your-registry/your-app:$COMMIT_SHA"
docker push "your-registry/your-app:$COMMIT_SHA"

This approach has two costs. First, you build the image even when you might skip the push; for large images, building just to check the digest might take as long as building and pushing, so it pays off mainly when the build is fast but the push is slow (large image, slow registry connection). Second, it assumes the rebuild is byte-reproducible: if a layer is actually rebuilt on a cache miss and embeds timestamps, two builds of identical inputs can produce different digests. The comparison is only reliable when layers come from cache or the build is made reproducible (for example via BuildKit's SOURCE_DATE_EPOCH support).

Approach 3: content-addressed cache tags

A more elegant approach: generate a cache key from the inputs to the build (Dockerfile, source files, dependency manifests) and use it as a tag. If that tag already exists in the registry, skip the build.

#!/bin/bash
# Hash the inputs to the build; use find + sort for a stable, recursive
# file list (bash only expands src/**/*.ts recursively with globstar enabled)
CACHE_KEY=$( { cat Dockerfile package.json package-lock.json; \
    find src -type f -name '*.ts' | sort | xargs cat; } \
  | sha256sum | cut -c1-12)

# Check if this exact build already exists
if docker manifest inspect your-registry/your-app:$CACHE_KEY >/dev/null 2>&1; then
  echo "Build with key $CACHE_KEY already exists, tagging without rebuilding"
  # Retag existing image with commit SHA
  docker buildx imagetools create \
    -t your-registry/your-app:$GITHUB_SHA \
    your-registry/your-app:$CACHE_KEY
  exit 0
fi

# Build and push with both the cache key and commit SHA
docker buildx build \
  --push \
  -t your-registry/your-app:$CACHE_KEY \
  -t your-registry/your-app:$GITHUB_SHA \
  .

The docker buildx imagetools create command creates a new tag pointing at an existing manifest without rebuilding or re-uploading anything. If the cache-keyed image exists, retagging it is a near-instant operation.

The hash must cover all inputs that would cause the build output to differ: the Dockerfile, all COPY'd files, and any build args. Missing an input means you'll get false cache hits (thinking the image is unchanged when it isn't).
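For example, build args can be folded into the hash alongside the files. A sketch: BUILD_ARGS and the file list here are assumptions about your build's inputs, so adjust them to match what your Dockerfile actually consumes.

```shell
# Anything that changes the output image must feed the hash: the same
# files as before plus the build args passed to docker buildx build
BUILD_ARGS="NODE_ENV=production API_BASE=https://api.example.com"
CACHE_KEY=$( { cat Dockerfile package.json package-lock.json; \
    find src -type f -name '*.ts' | sort | xargs cat; \
    echo "$BUILD_ARGS"; } \
  | sha256sum | cut -c1-12)
```

Changing a build arg now produces a different cache key, so the image is rebuilt instead of falsely reused.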

Fixing the monorepo trigger problem in practice

For monorepos where CI triggers on any push, the path-based filtering approach is the most maintainable. Here's a practical setup:

# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      services: ${{ steps.detect.outputs.services }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2    # need previous commit for diff

      - id: detect
        run: |
          CHANGED=$(git diff --name-only HEAD~1 HEAD)
          SERVICES=()
          for service in api worker scheduler; do
            if echo "$CHANGED" | grep -qE "^(services/$service|shared)/"; then
              SERVICES+=("$service")
            fi
          done
          # Emit a JSON array ("[]" when nothing changed) for fromJson below
          echo "services=$(jq -cn '$ARGS.positional' --args "${SERVICES[@]}")" >> "$GITHUB_OUTPUT"

  build:
    needs: detect-changes
    if: needs.detect-changes.outputs.services != '[]'
    strategy:
      matrix:
        service: ${{ fromJson(needs.detect-changes.outputs.services) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/build-push-action@v5
        with:
          context: ./services/${{ matrix.service }}
          push: ${{ github.ref == 'refs/heads/main' }}
          tags: your-registry/${{ matrix.service }}:${{ github.sha }}
          cache-from: type=gha,scope=${{ matrix.service }}
          cache-to: type=gha,scope=${{ matrix.service }},mode=max

Note the scope parameter on the GHA cache — it namespaces the cache per service so different services don't share or overwrite each other's caches.
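If your layers outgrow the GHA cache's storage limits, the same per-service scoping works with a registry-backed cache. A sketch, assuming a buildcache tag per service (the tag name is an arbitrary convention, not something buildx requires):

```yaml
          cache-from: type=registry,ref=your-registry/${{ matrix.service }}:buildcache
          cache-to: type=registry,ref=your-registry/${{ matrix.service }}:buildcache,mode=max
```

The trade-off is that cache pulls and pushes now go over your registry connection instead of the runner-local GHA cache.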

The impact

For a monorepo with ten services averaging six pushes per day, where a typical push touches only three of them, skipping the seven unchanged builds cuts CI image build time by roughly 70% across the project. At scale — larger teams, more services, more frequent commits — the savings compound.
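Back-of-the-envelope, with assumed numbers: ten services, six pushes a day, ninety seconds per image build, and three services actually touched per push.

```shell
# Compare rebuilding everything vs. building only the changed services
naive=$((10 * 6 * 90))       # seconds/day: every push rebuilds all ten images
filtered=$((3 * 6 * 90))     # seconds/day: each push builds only three images
saved_pct=$(( (naive - filtered) * 100 / naive ))
echo "daily build seconds: $naive -> $filtered (${saved_pct}% saved)"
# prints: daily build seconds: 5400 -> 1620 (70% saved)
```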

Start with path-based filtering. It's the least complex approach and handles the majority of cases. Layer in content-addressed caching or digest comparison if you still see unnecessary builds after path filtering.
