Stop Skipping Tests in Your Pipeline to Save Time
by Eric Hanson, Backend Developer at Clean Systems Consulting
The 15 Minutes You're Saving and What They're Actually Costing You
The integration tests take 15 minutes. The pipeline takes 22 minutes total with them. Someone — usually under deadline pressure, often with good intentions — adds -DskipTests to the Maven build or comments out the integration test job in the GitHub Actions workflow. The pipeline drops to 7 minutes. Velocity appears to improve. Deployments go faster.
Six weeks later, a defect ships to production that the integration tests would have caught. The incident takes 4 hours to diagnose and remediate. The RCA notes that the tests were skipped. A decision is made to "re-enable them soon." They stay disabled for another month while the team catches up on the backlog the incident created.
This is not a hypothetical. It is a pattern that plays out in organizations repeatedly, and the mechanism is always the same: skipping tests creates a false sense of safety that makes the eventual defect more costly than the sum of the time saved.
Why People Skip Tests (and Why Those Reasons Sound Reasonable)
"The tests are slow." They are. Slow tests are a real problem. The correct response is to fix the tests — parallelize them, optimize the infrastructure, move slow tests to a non-blocking tier. Skipping them is a response that creates a new problem without solving the original one.
"These tests are flaky anyway." Flaky tests are also a real problem. The correct response is to quarantine them from the blocking suite and fix them. Skipping all tests because some are flaky is a disproportionate response that removes reliable signal along with unreliable signal.
"We're under deadline pressure." This one is the most understandable and the most dangerous. Under deadline pressure, the instinct to remove friction is strong. But tests exist specifically for high-velocity, high-pressure moments — when engineers are moving fast, when changes are large, when review is rushed. Removing the safety net exactly when it's most needed is the worst possible time to skip it.
"I know this change is safe." You know your change is safe. You don't know whether your change interacts unexpectedly with a change someone else merged this morning. CI runs against the integrated codebase, not against your isolated branch. "I know my code is safe" is not equivalent to "the integration is safe."
What Skipping Tests Actually Communicates
When a team normalizes skipping tests, it communicates three things that quietly poison the engineering culture:
First, that tests are overhead rather than value. Tests are the team's collective assertion that the software behaves correctly. Skipping them says those assertions are optional — which undermines the entire premise of investing in tests.
Second, that shipping fast is more important than shipping correctly. This is sometimes true (startups moving fast on low-stakes changes), but it should be an explicit, deliberate decision — not a default that creeps in through deadline pressure.
Third, that the test suite is not trusted. If the team trusted the tests to provide real signal, skipping them would feel riskier. The fact that skipping feels acceptable often reveals an underlying lack of confidence in test quality — which should be addressed by improving test quality, not by making skipping the norm.
The Right Alternatives
If tests are too slow to run on every commit, tier them — don't skip them:
# Run fast tests on every push
on:
  push:
    branches: ['**']
jobs:
  fast-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew test -PtestGroups=unit  # 3 minutes
---
# Run full suite on PR only
on:
  pull_request:
    branches: [main]
jobs:
  full-suite:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew test integrationTest  # 18 minutes
If specific tests are reliably flaky, quarantine them explicitly:
@Tag("quarantined") // Does not block CI; tracked in separate report
@Test
void thisTestIsFlaky() { ... }
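Quarantined tests should still run somewhere, or they will quietly rot. One way to keep them visible without letting them block merges is a separate, non-blocking CI job. This is a sketch in the GitHub Actions style of the examples above; the "quarantined" test group is an assumption, mirroring the "unit" group used earlier:

```yaml
# Sketch: quarantined tests run in their own job and report failures,
# but never block the pipeline. Assumes a "quarantined" test group
# is wired up in the Gradle build, like the "unit" group above.
jobs:
  quarantined-tests:
    runs-on: ubuntu-latest
    continue-on-error: true  # failures are reported, not blocking
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew test -PtestGroups=quarantined
```

The separate report gives the team a standing list of flaky tests to burn down, which is what makes "quarantine and fix" different from "skip and forget."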
If you're under deadline pressure and genuinely need to ship now: document that you're shipping with tests skipped, create a ticket to re-enable them, and treat the deployment as higher-risk — which means more monitoring, a staged rollout, and a clear rollback plan. Make the risk explicit rather than hiding it behind a clean pipeline.
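That explicit, documented path can be encoded in the pipeline itself. Here is a sketch, again in the GitHub Actions style of the examples above, where skipping tests is a manual input that is recorded and loudly annotated rather than a commented-out job. The job names, input names, and ./deploy.sh are illustrative placeholders:

```yaml
# Sketch: skipping tests is an explicit, logged decision, not a YAML comment.
# Job names, inputs, and ./deploy.sh are illustrative placeholders.
on:
  workflow_dispatch:
    inputs:
      skip_tests:
        description: 'Ship without tests (higher-risk deployment)'
        type: boolean
        default: false
      ticket:
        description: 'Ticket tracking test re-enablement'
        required: false
jobs:
  tests:
    if: ${{ !inputs.skip_tests }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew test integrationTest
  deploy:
    needs: [tests]
    # Runs after passing tests, or after an explicit, recorded skip.
    if: ${{ !cancelled() && (needs.tests.result == 'success' || inputs.skip_tests) }}
    runs-on: ubuntu-latest
    steps:
      - if: ${{ inputs.skip_tests }}
        run: echo "::warning::Deploying with tests skipped. Ticket: ${{ inputs.ticket }}"
      - uses: actions/checkout@v4
      - run: ./deploy.sh
```

The warning annotation and the required ticket make the risk visible in the deployment record itself, so whoever reviews the run sees the decision, not a clean green pipeline.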
The Line You Cannot Cross
There is one version of test skipping that is categorically unacceptable: skipping tests before a production deployment and not telling anyone. That is not a speed optimization. It is a misrepresentation of code quality that other people will rely on when making deployment decisions.
If you're going to skip tests, be explicit about it, document it, and escalate the risk to whoever makes the deployment decision. Own the choice rather than hiding it inside a YAML comment.
The 15 minutes you save is never worth the incident you enable.