Why Your Unit Tests Are Slow and What to Do About It
by Eric Hanson, Backend Developer at Clean Systems Consulting
The Suite Nobody Runs Locally
The feedback loop of TDD — red, green, refactor in under a minute — only works if the tests run in under a minute. When the suite takes 8 minutes, developers stop running it locally. They push and wait for CI. By the time CI reports a failure, they have moved on to the next thing, and context-switching back to fix the failure costs more than the test was worth.
Slow test suites are abandoned test suites. The tests still exist, they still run in CI, but they no longer do the most valuable thing tests can do: give immediate feedback during development.
The goal for a unit test suite is under 10 seconds for the full run on a single developer machine. For most codebases, this is achievable. The things slowing it down are usually identifiable and fixable.
The Three Causes of Slow Unit Tests
1. Tests that are not actually unit tests.
The most common cause of a slow "unit test" suite is that the suite contains integration tests labeled as unit tests. Any test that starts a database, makes an HTTP call, reads from disk, or initializes a dependency injection container is not a unit test. It might be a valuable test — but it should not be mixed into the unit suite that runs on every save.
Separate your test suites. Tests that require I/O belong in an integration suite that runs less frequently. The unit suite should contain only tests that run entirely in memory with no external dependencies.
<!-- Maven Surefire / Failsafe separation -->
<!-- Unit tests run with mvn test -->
<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/*IntegrationTest.java</exclude>
      <exclude>**/*IT.java</exclude>
    </excludes>
  </configuration>
</plugin>
<!-- Integration tests run with mvn verify -->
<plugin>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <includes>
      <include>**/*IntegrationTest.java</include>
      <include>**/*IT.java</include>
    </includes>
  </configuration>
  <!-- Failsafe must be bound explicitly or its goals never execute -->
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
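The same split works in pytest with a custom marker. The `integration` marker name below is a common convention, not a pytest built-in; you register it yourself:

```python
# Tag I/O-bound tests with a custom marker so the fast suite can skip them
import pytest

@pytest.mark.integration
def test_order_survives_database_roundtrip():
    ...  # talks to a real database; excluded from the fast unit run

# Register the marker in pytest.ini so --strict-markers accepts it:
#   [pytest]
#   markers =
#       integration: tests that require external services
#
# Fast unit run:   pytest -m "not integration"
# Full run in CI:  pytest
```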
2. Expensive setup and teardown.
Tests that construct large object graphs for every test case accumulate significant overhead. Spring's @SpringBootTest annotation is one of the most common causes of a Java test suite going from 30 seconds to 8 minutes: Spring does cache the application context between test classes, but anything that changes the context configuration — @MockBean, different property sets, @DirtiesContext — forces a fresh bootstrap. Testcontainers, which starts real Docker containers unless they are explicitly shared, is another.
For unit tests specifically, the fix is to not use any of this. If the application context needs to start, the test is not a unit test. Use slices (@WebMvcTest, @DataJpaTest) for integration tests that need a partial context, and use Testcontainers only for integration suites that specifically need a real database or message broker.
For pure unit tests, construction should be trivial — a few new calls or factory methods with no I/O.
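As a sketch of what "trivial construction" looks like — the `OrderService` and `InMemoryRepo` names here are invented for illustration, not taken from any real codebase:

```python
class InMemoryRepo:
    """Test double: stores orders in a dict, no I/O of any kind."""
    def __init__(self):
        self.orders = {}

    def save(self, order_id, order):
        self.orders[order_id] = order

class OrderService:
    def __init__(self, repo):
        self.repo = repo

    def place(self, order_id, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.repo.save(order_id, {"amount": amount})
        return order_id

def test_place_order_saves_to_repo():
    repo = InMemoryRepo()        # constructed in microseconds, not seconds
    service = OrderService(repo)
    assert service.place("o1", 10) == "o1"
    assert repo.orders["o1"] == {"amount": 10}
```

Everything runs in memory; there is nothing here that could take longer than a few microseconds per test.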
3. Synchronous waits and sleep calls.
# Hard-coded sleeps make every test that exercises the retry path
# take multiple real seconds
import time

def send_with_retry(message, max_attempts=3):
    for attempt in range(max_attempts):
        try:
            return send(message)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; don't swallow the failure
            time.sleep(1)  # Hard-coded sleep

# Inject the delay (and the send function) so tests can use zero delay
def send_with_retry(message, max_attempts=3, delay_seconds=1, _send=send):
    for attempt in range(max_attempts):
        try:
            return _send(message)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(delay_seconds)

# Test with zero delay
from unittest.mock import Mock

def test_send_retries_on_connection_error():
    mock_send = Mock(side_effect=[ConnectionError, ConnectionError, "ok"])
    result = send_with_retry(message="hello", delay_seconds=0, _send=mock_send)
    assert result == "ok"
Any sleep, Thread.sleep, or asyncio.sleep in the code under test should be injectable so tests can set it to zero. Hard-coded delays in retry logic, polling logic, or rate limiters will directly transfer into test execution time.
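The same injection works for polling loops: pass the sleep function and the clock themselves as parameters, so a test can substitute fakes and finish instantly. The `poll_until` helper below is an illustrative sketch, not a standard library function:

```python
import time

def poll_until(condition, timeout=5.0, interval=0.5,
               _sleep=time.sleep, _now=time.monotonic):
    """Call condition() until it returns a truthy value or timeout elapses."""
    deadline = _now() + timeout
    while _now() < deadline:
        result = condition()
        if result:
            return result
        _sleep(interval)
    raise TimeoutError("condition not met within timeout")

def test_poll_until_returns_without_real_waiting():
    results = iter([False, False, True])
    fake_time = [0.0]

    def fake_sleep(seconds):        # advances the fake clock, never blocks
        fake_time[0] += seconds

    def fake_now():
        return fake_time[0]

    assert poll_until(lambda: next(results),
                      _sleep=fake_sleep, _now=fake_now) is True
```

The production default is real `time.sleep`; the test drives the loop through two "waits" in microseconds of wall-clock time.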
Profiling Before Optimizing
Before optimizing, measure. Most test frameworks can report per-test timing.
# pytest: show slowest 10 tests
pytest --durations=10
# Jest: --verbose prints per-test timing; sort to surface the slowest
jest --verbose 2>&1 | grep -E "✓|✕" | sort -t "(" -k2 -rn | head -10
# Go: verbose output includes per-test timing
go test -v ./...
Identify the 20% of tests that are consuming 80% of the time. Usually it is a handful of tests that are genuinely doing I/O. Those are your integration tests in disguise, and moving them to the integration suite will recover most of the time.
A unit suite that runs in 8 seconds gets run dozens of times a day. A suite that runs in 8 minutes gets run once before a push. The difference in feedback quality over a week of development is enormous. Speed is not a nice-to-have; it is what makes the tests useful at all.