How Git Fits Into a CI/CD Pipeline Without Getting in the Way

by Eric Hanson, Backend Developer at Clean Systems Consulting

Git as the Event Source

Your CI/CD system does not run continuously. It runs in response to events — a push to a branch, a pull request opened, a tag created, a merge to main. Every one of those events originates in Git. The quality of your Git workflow directly determines the quality of your CI/CD feedback loop.

When the connection between Git events and pipeline behavior is thoughtfully designed, CI is fast, informative, and unobtrusive. When it's ad hoc, you get pipelines that run everything on every push, including typo fixes; pipelines that can't distinguish a feature branch from a release; and deployment workflows that require manual interpretation of the branch name.

Mapping Git Events to Pipeline Stages

The standard model for a web service:

# GitHub Actions trigger structure
on:
  push:
    branches: ['main']           # triggers deploy pipeline
    tags: ['v*']                 # triggers release pipeline
  pull_request:
    branches: ['main']           # triggers validation pipeline

Each event type should trigger a different set of work:

On PR against main (every proposed change):

  • Run full test suite
  • Run linting and static analysis
  • Build the artifact
  • Run integration tests
  • Report coverage
  • Do NOT deploy anywhere

On merge to main:

  • Run full test suite (should be fast — you already validated in PR)
  • Build and publish the artifact
  • Deploy to staging automatically
  • Run smoke tests against staging

On tag push (v2.3.1):

  • Run full test suite
  • Build the production artifact
  • Deploy to production

This separation means: the PR pipeline is about validation, the main pipeline is about staging delivery, and tag pushes are the production deployment trigger. Each stage has a clear purpose and a clear Git event as its trigger.
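The event-to-pipeline mapping above can be sketched as a small dispatcher. This is purely illustrative: the pipeline names and the function itself are made up for this article, not part of any CI system's API, though the ref formats match what GitHub Actions passes to workflows.

```python
from typing import Optional

def pipeline_for(event: str, ref: str) -> Optional[str]:
    """Map a Git event to the pipeline it should trigger (illustrative names)."""
    if event == "pull_request" and ref == "main":
        return "validation"          # test, lint, build; never deploy
    if event == "push" and ref == "refs/heads/main":
        return "staging-delivery"    # build, publish, deploy to staging
    if event == "push" and ref.startswith("refs/tags/v"):
        return "production-release"  # deploy to production
    return None                      # e.g. pushes to feature branches

print(pipeline_for("push", "refs/tags/v2.3.1"))  # production-release
```

If you can't write your own setup down this cleanly, the trigger design probably needs work.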

Branch-Based Pipeline Behavior

For projects with multiple deployment environments, branch names map to environments:

# GitLab CI example
deploy:staging:
  stage: deploy
  environment: staging
  script: ./deploy.sh staging
  only:
    - main

deploy:production:
  stage: deploy
  environment: production
  script: ./deploy.sh production
  only:
    - /^v\d+\.\d+\.\d+$/   # matches version tags like v2.3.1

This makes the pipeline behavior predictable from branch and tag naming conventions alone. No manual configuration per deploy. No special CI job parameters. Push to main → staging. Tag a release → production.
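The same convention can be checked in code. The tag regex below is the one from the GitLab job above; the helper function and its name are hypothetical, a sketch of the ref-to-environment rule rather than anything GitLab provides.

```python
import re
from typing import Optional

# The version-tag pattern from the GitLab example: v<major>.<minor>.<patch>
RELEASE_TAG = re.compile(r"^v\d+\.\d+\.\d+$")

def environment_for(ref: str) -> Optional[str]:
    if ref == "main":
        return "staging"
    if RELEASE_TAG.match(ref):
        return "production"
    return None                       # feature branches, pre-release tags

print(environment_for("v2.3.1"))      # production
print(environment_for("v2.3.1-rc1"))  # None: pre-releases deploy nowhere
```

Note that the anchored regex deliberately rejects pre-release tags like v2.3.1-rc1, so release candidates never reach production by accident.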

Commit Messages as Pipeline Instructions

Conventional Commits become machine-readable instructions when combined with tooling. The commit type determines what happens after merge:

  • feat: commits bump the minor version
  • fix: commits bump the patch version
  • BREAKING CHANGE: in the footer bumps the major version

Tools like semantic-release and release-please read the commit log since the last tag, determine the next version number, generate a changelog, create a GitHub Release, and publish the artifact — all automatically.
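The bump rules can be modeled in a few lines. This is a simplified sketch of the logic, not how commit-analyzer is actually implemented; the real tool handles many more cases (reverts, scoped configuration, pre-releases).

```python
import re

def bump_for(message: str) -> str:
    """Classify one Conventional Commit message (simplified)."""
    if "BREAKING CHANGE:" in message or re.match(r"^\w+(\(.+\))?!:", message):
        return "major"
    if re.match(r"^feat(\(.+\))?:", message):
        return "minor"
    if re.match(r"^fix(\(.+\))?:", message):
        return "patch"
    return "none"

def next_version(current: str, messages: list) -> str:
    order = ["none", "patch", "minor", "major"]
    # The largest bump among all commits since the last tag wins
    bump = max((bump_for(m) for m in messages), key=order.index, default="none")
    major, minor, patch = map(int, current.split("."))
    if bump == "major":
        return f"{major + 1}.0.0"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    if bump == "patch":
        return f"{major}.{minor}.{patch + 1}"
    return current

print(next_version("2.3.0", ["feat: add export", "fix: null check"]))  # 2.4.0
```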

# .releaserc.json for semantic-release
{
  "branches": ["main"],
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/changelog",
    "@semantic-release/github"
  ]
}

With this setup, the entire release process is driven by commit messages. No manual version bumping. No manual changelog writing. The discipline of using Conventional Commits pays forward into automated release management.

Path-Based CI for Monorepos

In a monorepo with multiple services, running the full test suite for every service on every commit is wasteful. Builds that take forty-five minutes on a change to a documentation file kill developer velocity.

Path-based filtering triggers only the affected pipelines:

# GitHub Actions: one workflow file per service, triggered only when
# files under that service's directory change
# .github/workflows/payment.yml
on:
  pull_request:
    paths: ['services/payment/**']

jobs:
  test-payment-service:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cd services/payment && ./gradlew test

# .github/workflows/auth.yml is the same, filtered on 'services/auth/**'

For more sophisticated monorepo CI, tools like Nx (JavaScript/TypeScript), Bazel (language-agnostic), or Turborepo use dependency graphs to determine which packages are affected by a change and only test those packages plus their dependents.
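The affected-package computation those tools perform reduces to a graph walk. The dependency graph below is a toy, hand-written example; real tools derive it from build files and lockfiles.

```python
# Hypothetical monorepo graph: package -> packages that depend on it
DEPENDENTS = {
    "libs/billing":     ["services/payment"],
    "services/payment": [],
    "services/auth":    [],
}

def affected(changed_files):
    """Packages to retest: those with changed files, plus their dependents."""
    stack = [pkg for pkg in DEPENDENTS
             if any(f.startswith(pkg + "/") for f in changed_files)]
    seen = set()
    while stack:
        pkg = stack.pop()
        if pkg not in seen:
            seen.add(pkg)
            stack.extend(DEPENDENTS[pkg])   # transitively affected
    return seen

print(sorted(affected(["libs/billing/invoice.py"])))
# ['libs/billing', 'services/payment']
print(sorted(affected(["docs/README.md"])))   # []
```

A change to the shared billing library retests the payment service that depends on it, while a documentation change triggers nothing.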

Protecting Main Without Blocking Delivery

Branch protection on main ensures that only validated code merges:

Required status checks before merging:
  ✓ test / unit-tests
  ✓ test / integration-tests
  ✓ lint / code-quality
  ✓ security / secret-scan

Require branches to be up to date before merging: ✓
Require pull request reviews before merging: ✓ (1 reviewer)

The "require branches to be up to date" rule ensures that the code tested in CI is actually what's being merged, not the code before any commits that landed on main since the PR was opened. Without this, two PRs can both pass CI independently and then break main when merged together.

For high-volume teams, this creates a queue problem — every PR needs to rebase after every merge. GitHub's merge queue (and GitLab's merge trains) solve this by serializing merges and running CI against the combined result automatically.

The Metric That Reveals Pipeline Health

Lead time from merge to production — the time between a commit merging to main and that commit being live in production. With automated CI/CD and no manual steps, this should be under thirty minutes for most services. Manual deployment steps, slow tests, or blocked queues show up as increases in this metric.

If your lead time is measured in hours or days, the bottleneck is almost never Git — it's usually slow tests, manual approval gates, or deployment coordination overhead. But you can't measure it without the Git metadata (commit timestamps, merge timestamps, deployment timestamps), so the Git discipline of clean branching and tagging is also the foundation of your delivery metrics.
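The computation itself is trivial once the timestamps exist. The timestamps below are hypothetical; in practice the merge time comes from the merge commit in `git log` and the deploy time from your deployment tooling's records.

```python
from datetime import datetime, timezone

# Hypothetical data points for one change
merged_at   = datetime(2024, 5, 14, 10, 2, tzinfo=timezone.utc)
deployed_at = datetime(2024, 5, 14, 10, 27, tzinfo=timezone.utc)

lead_time_minutes = (deployed_at - merged_at).total_seconds() / 60
print(lead_time_minutes)   # 25.0: under the thirty-minute bar
```

Track this per merge rather than as an occasional spot check, and the trend will surface pipeline regressions before developers start complaining about them.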
