Parallelizing Your Pipeline Is Easier Than You Think
by Eric Hanson, Backend Developer at Clean Systems Consulting
Why Your Pipeline Is Sequential When It Doesn't Have to Be
Most CI pipelines look like this: checkout → compile → lint → test → build image → deploy. Each step waits for the previous one to complete. This is the default because it's the simplest structure to write — and because nobody explicitly asked "which of these steps actually depend on each other?"
The answer is usually: fewer than you think. Linting doesn't need test results. Security scanning doesn't need to wait for the Docker build to finish. Smoke tests against a staging environment can run while container scanning is still in progress. The sequential pipeline exists because it was never redesigned, not because it's correct.
Mapping Dependencies Before Writing YAML
Before changing any pipeline configuration, draw the actual dependency graph. For each job, ask: what inputs does it need, and where do those inputs come from?
Typical dependency mapping:
checkout → (no dependencies)
compile → checkout
unit-tests → compile
lint → checkout (just needs source, not compiled output)
sast-scan → checkout
integration-tests → compile + test environment
docker-build → unit-tests (want tests to pass before building image)
container-scan → docker-build
staging-deploy → docker-build
smoke-tests → staging-deploy
From this graph, the parallel groups become obvious:
- Group 1 (parallel): lint, sast-scan, compile
- Group 2 (parallel, after compile): unit-tests, integration-tests
- Group 3 (after unit-tests): docker-build
- Group 4 (parallel, after docker-build): container-scan, staging-deploy
- Group 5 (after staging-deploy): smoke-tests
A pipeline that ran sequentially in 35 minutes now has a critical path of roughly 12 minutes, assuming runners are available.
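The dependency mapping above can be sketched as code. This is a minimal illustration, not part of any real pipeline: the per-job durations are hypothetical, chosen only so the totals line up with the 35-minute and 12-minute figures quoted here. The critical path is the longest dependency chain through the graph.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: the pipeline dependency graph, with hypothetical per-job durations.
public class CriticalPath {
    static final Map<String, List<String>> DEPS = Map.of(
        "checkout", List.of(),
        "compile", List.of("checkout"),
        "lint", List.of("checkout"),
        "sast-scan", List.of("checkout"),
        "unit-tests", List.of("compile"),
        "integration-tests", List.of("compile"),
        "docker-build", List.of("unit-tests"),
        "container-scan", List.of("docker-build"),
        "staging-deploy", List.of("docker-build"),
        "smoke-tests", List.of("staging-deploy"));

    // Hypothetical durations in minutes (assumed, not measured).
    static final Map<String, Integer> MINUTES = Map.of(
        "checkout", 1, "compile", 3, "lint", 5, "sast-scan", 8,
        "unit-tests", 3, "integration-tests", 8, "docker-build", 2,
        "container-scan", 2, "staging-deploy", 1, "smoke-tests", 2);

    // Earliest finish of a job = max finish of its dependencies + own duration.
    static int finish(String job, Map<String, Integer> memo) {
        Integer cached = memo.get(job);
        if (cached != null) return cached;
        int start = 0;
        for (String dep : DEPS.get(job)) start = Math.max(start, finish(dep, memo));
        int f = start + MINUTES.get(job);
        memo.put(job, f);
        return f;
    }

    // Wall-clock time with unlimited runners: the longest dependency chain.
    public static int criticalPathMinutes() {
        Map<String, Integer> memo = new HashMap<>();
        int max = 0;
        for (String job : DEPS.keySet()) max = Math.max(max, finish(job, memo));
        return max;
    }

    // Wall-clock time when every job runs one after another.
    public static int sequentialMinutes() {
        return MINUTES.values().stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        System.out.println("Sequential: " + sequentialMinutes() + " min");      // 35
        System.out.println("Critical path: " + criticalPathMinutes() + " min"); // 12
    }
}
```

Note that integration-tests, not the docker-build chain, can end up on the critical path depending on durations — which is why measuring real job times before restructuring is worthwhile.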
GitHub Actions Parallel Jobs
In GitHub Actions, parallelism is the default — jobs run in parallel unless you specify needs. The pattern is to explicitly declare dependencies only where they exist:
jobs:
  compile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          java-version: '21'
          distribution: 'temurin'
          cache: 'gradle'
      - run: ./gradlew classes testClasses
      - uses: actions/upload-artifact@v4
        with:
          name: compiled-classes
          path: build/

  lint:
    runs-on: ubuntu-latest  # Runs in parallel with compile
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew checkstyleMain

  sast:
    runs-on: ubuntu-latest  # Runs in parallel with compile and lint
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: java
      - uses: github/codeql-action/autobuild@v3
      - uses: github/codeql-action/analyze@v3

  unit-tests:
    needs: compile  # Waits for compile only
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: compiled-classes
          path: build/
      - run: ./gradlew test  # Reuses the downloaded compiled output
The artifact upload/download pattern — compiling once and passing the compiled output to downstream jobs — avoids recompiling in every parallel job while still allowing them to run concurrently.
Test Parallelism Within a Single Job
For test suites that are slow even in isolation, parallelism within the test run itself is the next lever. JUnit 5 supports parallel test execution natively:
# src/test/resources/junit-platform.properties
junit.jupiter.execution.parallel.enabled=true
junit.jupiter.execution.parallel.mode.default=concurrent
junit.jupiter.execution.parallel.config.strategy=dynamic
junit.jupiter.execution.parallel.config.dynamic.factor=2
With the dynamic strategy and a factor of 2, JUnit 5 sizes its worker pool at twice the available CPU cores. On a 4-core runner, that's 8 concurrent test threads. The constraint: tests must be thread-safe — no shared mutable static state, no shared test database without transaction isolation.
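To make the thread-safety constraint concrete, here is an illustrative sketch (not from any real test suite): two counters incremented from 8 threads, mirroring 8 concurrent test workers. The plain static int is a data race and can lose updates; the AtomicInteger cannot. The same failure mode hits tests that share mutable static fixtures.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: why shared mutable static state breaks under parallel execution.
public class SharedStateDemo {
    static int unsafeCounter = 0;                        // shared mutable static state
    static final AtomicInteger safeCounter = new AtomicInteger();

    // Returns {unsafe, safe} after n concurrent increments of each counter.
    public static int[] run(int n) {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < n; i++) {
            pool.submit(() -> {
                unsafeCounter++;               // read-modify-write, not atomic
                safeCounter.incrementAndGet(); // atomic
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new int[] { unsafeCounter, safeCounter.get() };
    }

    public static void main(String[] args) {
        int[] counts = run(100_000);
        System.out.println("unsafe: " + counts[0] + " (frequently below 100000)");
        System.out.println("safe:   " + counts[1]); // always 100000
    }
}
```

For tests that genuinely cannot run concurrently, JUnit 5 provides opt-out mechanisms such as the @ResourceLock and @Isolated annotations rather than abandoning parallelism suite-wide.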
For Gradle specifically, you can also run tests in parallel across multiple JVMs:
// build.gradle.kts
tasks.test {
    maxParallelForks = (Runtime.getRuntime().availableProcessors() / 2).coerceAtLeast(1)
    forkEvery = 100 // Fork a new JVM every 100 tests to avoid memory pressure
}
The Cost Side
Parallelism costs money. More parallel jobs means more concurrent runner minutes. For most teams, the math heavily favors parallelism — developer time is far more expensive than runner minutes — but it's worth sizing consciously.
GitHub-hosted runners charge per minute per runner. A pipeline that was 35 minutes on one runner (35 runner-minutes) parallelized to 12 minutes across 4 runners uses 48 runner-minutes. It costs 37% more and saves 23 minutes of developer wait time. On any reasonable accounting of developer cost, that trade is worth making immediately.
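The arithmetic above can be sketched in a few lines, assuming billing is simply wall-clock minutes times concurrent runners (an upper bound — not all four runners are busy for the full 12 minutes):

```java
// Back-of-the-envelope runner-minute math for the 35-min → 12-min example.
public class RunnerCost {
    public static int runnerMinutes(int wallClockMinutes, int runners) {
        return wallClockMinutes * runners;
    }

    public static void main(String[] args) {
        int sequential = runnerMinutes(35, 1); // 35 runner-minutes
        int parallel = runnerMinutes(12, 4);   // 48 runner-minutes
        int extraCostPct = (parallel - sequential) * 100 / sequential; // 37%
        int minutesSaved = 35 - 12;            // 23 minutes of wait time per run
        System.out.println(extraCostPct + "% more runner cost, "
            + minutesSaved + " minutes saved per pipeline run");
    }
}
```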
The check is whether your CI budget can absorb the increased runner usage. If you're on a shared runner pool with limited concurrency, parallelism is capped by available runners — at which point job queuing becomes the new bottleneck, which is a different problem to solve.
Start with the dependency mapping. The pipeline changes will follow naturally.