CI/CD Is Not a Tool. It Is a Practice.
by Eric Hanson, Backend Developer at Clean Systems Consulting
The Tool Is Running. So Why Is Deployment Still Stressful?
Your team has GitHub Actions, or Jenkins, or CircleCI. Builds run on every pull request. There's a green checkmark before anything merges. By most definitions, you have CI/CD. And yet, releases still require a coordinating Slack thread, a manual approval from someone who "knows the system," and a tense 20-minute window where everyone watches the dashboards.
That gap — between having the tool and having the practice — is where most engineering teams live permanently.
CI/CD as a practice means something specific: code integrates into a shared branch continuously (multiple times a day, not once a sprint), every integration is verified automatically, and the resulting artifact is deployable to production at any moment. Not "deployable after a manual smoke test." Not "deployable once QA signs off next Thursday." Deployable now, by anyone, with confidence.
What "Continuous" Actually Means
The word continuous is not marketing language. It has a precise implication: feedback loops are short enough to change behavior.
If your CI run takes 45 minutes, developers stop waiting for it. They context-switch, stack PRs, and by the time the build fails, the fix requires reconstructing mental state from two hours ago. The integration is technically continuous, but the feedback is not — and that distinction matters more than which YAML file you're maintaining.
Same with delivery. If your pipeline deploys to staging automatically but production requires four manual approvals and a change advisory board ticket, you don't have continuous delivery. You have continuous staging delivery, which is a different thing with a different risk profile.
Practices That Make CI/CD Real
Trunk-based development is the foundation most teams skip. Long-lived feature branches are the enemy of continuous integration: by definition, code on a branch is not integrated. Short-lived branches (under a day), merged directly to main and protected by feature flags when needed, are what CI actually requires.
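The flag mechanism doesn't need to be elaborate. A minimal sketch, assuming an in-memory flag set (the class names and the "new-checkout-flow" flag are illustrative; real flags usually come from config or a flag service), shows the pattern that lets unfinished work merge to main safely:

```java
import java.util.Set;

// Minimal in-memory feature flag check (names are illustrative).
// Unfinished code merges to main but stays dark until the flag flips.
class FeatureFlags {
    private final Set<String> enabled;

    FeatureFlags(Set<String> enabled) { this.enabled = enabled; }

    boolean isEnabled(String flag) { return enabled.contains(flag); }
}

class CheckoutService {
    private final FeatureFlags flags;

    CheckoutService(FeatureFlags flags) { this.flags = flags; }

    String checkout() {
        // The half-finished flow lives on main, guarded by the flag,
        // instead of rotting on a long-lived branch.
        if (flags.isEnabled("new-checkout-flow")) {
            return "new";
        }
        return "legacy";
    }
}
```

The point is that integration and release become separate decisions: merging is continuous, while exposing the new path is a flag flip.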
Every build is a release candidate. If you can't answer "could we ship this build to production right now?" with yes for every green build, your pipeline is not doing its job. This means environment parity matters, database migrations are backward-compatible, and configuration is externalized properly.
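Externalized configuration is the easiest of those three to sketch. Assuming an environment-variable lookup (the DATABASE_URL name is illustrative, and the lookup function is injected only so the sketch is testable), the same artifact can run unchanged in staging and production:

```java
import java.util.Optional;
import java.util.function.Function;

// Configuration resolved from the environment at deploy time,
// not baked into the artifact. One build, many environments.
class AppConfig {
    private final Function<String, String> env;

    // Inject the lookup (e.g. System::getenv in production).
    AppConfig(Function<String, String> env) { this.env = env; }

    String databaseUrl() {
        // Fail loudly at startup if the environment is misconfigured,
        // rather than silently pointing a prod build at a default.
        return Optional.ofNullable(env.apply("DATABASE_URL"))
                .orElseThrow(() -> new IllegalStateException(
                        "DATABASE_URL must be set per environment"));
    }
}
```

If a green build needs a rebuild to change an endpoint or credential, it was never really a release candidate.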
Fast feedback is non-negotiable. The target for a CI pipeline is under 10 minutes for the critical path. Not because 10 minutes is a magic number, but because it's the outer boundary of a reasonable context switch. Beyond that, developers don't wait — and the integration stops being continuous.
# GitHub Actions: parallelize to keep the critical path short
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew test --parallel
  static-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew checkstyleMain spotbugsMain
  build:
    needs: [unit-tests, static-analysis]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew bootJar
Running checks in parallel isn't a pipeline optimization. It's a practice decision: you're saying that fast feedback is valuable enough to spend parallel runner minutes on it.
The Organizational Side Nobody Talks About
Tools don't require trust. Practices do.
Continuous delivery means trusting that your automated checks are rigorous enough to catch regressions — which means investing seriously in test quality, not just test quantity. It means trusting that developers can deploy without a gatekeeper, which requires runbooks, observability, and documented rollback procedures so that anyone can respond to a bad deploy at 2pm on a Tuesday.
Teams that treat CI/CD as a tool purchase never build that trust infrastructure. They get the pipeline YAML without the shared confidence that makes autonomous deployment feel safe.
The practice also requires feedback on the practice itself. Measure lead time for changes (time from commit to production). Measure deployment frequency. Measure mean time to recovery. If those numbers aren't improving quarter over quarter, the pipeline exists but the practice isn't taking hold.
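Computing lead time is mostly a data-plumbing problem once you have commit and deploy timestamps. A minimal sketch, assuming a record per shipped change (the record shape is hypothetical; the timestamps would come from your VCS and deploy logs):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Lead time for changes: elapsed time from commit to production deploy.
// The Change record is illustrative; populate it from git history
// and deployment events.
record Change(Instant committedAt, Instant deployedAt) {}

class DoraMetrics {
    // Median lead time across shipped changes. Median rather than mean,
    // so one stuck change doesn't mask an otherwise healthy trend.
    static Duration medianLeadTime(List<Change> changes) {
        List<Duration> sorted = changes.stream()
                .map(c -> Duration.between(c.committedAt(), c.deployedAt()))
                .sorted()
                .toList();
        return sorted.get(sorted.size() / 2);
    }
}
```

Tracked quarter over quarter, a number like this is the feedback loop on the practice itself.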
Where to Start If You're Stuck
If your pipeline takes longer than 10 minutes, profile it. Most teams find a single bottleneck (usually integration tests or a slow Docker build) consuming the bulk of the run time. Fix that first.
If deployment still requires human coordination, document exactly what those humans are doing and ask whether each step could be automated or eliminated. Usually it's a mix of "we don't trust the automated checks" and "there's manual state management happening." Both are solvable, neither is solved by buying a better tool.
The pipeline is just the mechanism. The practice is what gives it meaning. Treat them as separate things, and you'll stop confusing "we have CI/CD" with "we actually do CI/CD."