Your CI/CD Pipeline Has Access to Everything. That Is a Problem.

by Eric Hanson, Backend Developer at Clean Systems Consulting

The Service Account With Everything

When your CI/CD system was set up, someone created a service account and gave it the permissions needed to deploy. Over time, as the system evolved, more permissions were added: read access to S3 for pulling config files, write access to ECR, permissions to describe ECS clusters, access to Parameter Store, permissions to update Route 53 records. Each addition was legitimate at the time. Nobody removed anything when requirements changed.

The result: your CI/CD service account has IAM permissions that, if compromised, give an attacker substantial control over your production environment. And the CI/CD system is a high-value target — it executes arbitrary code from your repository on every commit.

Why Pipelines Accumulate Permissions

The primary cause is convenience. When a new pipeline step requires a permission, the fastest path is adding the permission to the existing service account rather than creating a new, scoped role. Nobody is being reckless — they're being pragmatic under deadline pressure.

The secondary cause is that permissions are rarely audited. AWS IAM Access Analyzer can generate a policy from the actions a role actually exercised over the last 90 days of CloudTrail history; most teams never run it. Permissions that were needed once, years ago, remain indefinitely.

The tertiary cause is that blast radius is invisible. An administrator adding s3:* to a CI role doesn't feel dangerous in the moment because they're just adding a permission, not handing a key to an attacker. The danger only becomes visible after a compromise.

What "Least Privilege for Pipelines" Actually Means

Least privilege means the CI/CD system has exactly the permissions it needs for its current jobs — no more, no account-level wildcards, no "just in case" additions.

In practice, this requires separating permissions by pipeline stage. The unit test job needs to pull the base Docker image — it does not need to push to ECR or deploy to ECS. The build job needs ECR push access — it does not need access to production secrets. The deployment job needs ECS update and ECR pull — it does not need broad IAM permissions or access to the test database.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ECRAuthToken",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Sid": "ECRPushOnly",
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:ap-southeast-1:123456789012:repository/myapp"
    },
    {
      "Sid": "ECSDeployOnly",
      "Effect": "Allow",
      "Action": [
        "ecs:UpdateService",
        "ecs:DescribeServices"
      ],
      "Resource": "arn:aws:ecs:ap-southeast-1:123456789012:service/production/myapp"
    }
  ]
}

This policy lets the CI system obtain an ECR auth token, push images to one repository, and update one specific ECS service — nothing else. (The ecr:GetAuthorizationToken action is the one deliberate wildcard: AWS issues auth tokens at the account level, so it cannot be scoped to a repository.) Not all ECR repositories, not all ECS services, not S3, not IAM, nothing that is not explicitly listed.
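To keep a scoped policy scoped over time, it helps to lint the JSON for wildcards before attaching it. A minimal Python sketch — find_wildcards is a hypothetical helper, not an AWS API, and the policy dicts are abbreviated examples:

```python
def find_wildcards(policy: dict) -> list[str]:
    """Return human-readable findings for any wildcard Action or Resource."""
    findings = []
    for stmt in policy.get("Statement", []):
        sid = stmt.get("Sid", "<no Sid>")
        # IAM allows both a single string and a list for Action/Resource.
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        for action in actions:
            if "*" in action:
                findings.append(f"{sid}: wildcard action {action!r}")
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        for resource in resources:
            if resource == "*":
                findings.append(f"{sid}: wildcard resource")
    return findings

scoped = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ECRPushOnly",
        "Effect": "Allow",
        "Action": ["ecr:PutImage"],
        "Resource": "arn:aws:ecr:ap-southeast-1:123456789012:repository/myapp",
    }],
}
broad = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "TooBroad", "Effect": "Allow", "Action": "s3:*", "Resource": "*",
    }],
}

print(find_wildcards(scoped))  # []
print(find_wildcards(broad))   # two findings: wildcard action, wildcard resource
```

A check like this runs well as a CI step on the policy files themselves, failing the build before an overly broad grant ever reaches IAM. (Deliberate wildcards, such as ecr:GetAuthorizationToken needing a * resource, would go on an explicit allow-list.)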

OIDC-Based Authentication: No Stored Credentials

The most significant architectural improvement for pipeline security is eliminating stored credentials entirely using OpenID Connect (OIDC). Instead of a long-lived access key stored in GitHub Actions secrets, the pipeline presents its OIDC token (a short-lived JWT issued by GitHub) to assume an IAM role. The role's trust policy restricts which workflows can assume it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:myorg/myrepo:ref:refs/heads/main"
        }
      }
    }
  ]
}

The sub condition restricts this role to the main branch of a specific repository. A PR branch, a fork, or a different repository cannot assume this role. The credential exists only for the duration of the pipeline run. There is no access key to rotate, leak, or compromise.
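The evaluation STS performs on that StringEquals block can be modeled in a few lines. This is an illustrative sketch only — not the actual AWS evaluation engine — and the claim values for the non-matching cases are hypothetical:

```python
def can_assume(claims: dict, conditions: dict) -> bool:
    """StringEquals semantics: every condition key must exactly match the token claim."""
    return all(claims.get(key) == expected for key, expected in conditions.items())

trust_conditions = {
    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
    "token.actions.githubusercontent.com:sub": "repo:myorg/myrepo:ref:refs/heads/main",
}

# Token minted for a push to main in the trusted repository.
main_branch = {
    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
    "token.actions.githubusercontent.com:sub": "repo:myorg/myrepo:ref:refs/heads/main",
}
# Token minted for a pull_request workflow run — different sub claim.
pr_run = dict(main_branch,
    **{"token.actions.githubusercontent.com:sub": "repo:myorg/myrepo:pull_request"})
# Token minted for a fork with the same repo name — different owner in sub.
fork = dict(main_branch,
    **{"token.actions.githubusercontent.com:sub": "repo:attacker/myrepo:ref:refs/heads/main"})

print(can_assume(main_branch, trust_conditions))  # True
print(can_assume(pr_run, trust_conditions))       # False
print(can_assume(fork, trust_conditions))         # False
```

The exact-match semantics are the point: any difference in the sub claim — branch, event type, or repository owner — denies the AssumeRoleWithWebIdentity call outright.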

Auditing What Your Pipeline Actually Uses

Before restricting permissions, audit what's actually being used. AWS IAM Access Analyzer's "Generate Policy" feature analyzes CloudTrail logs and generates a policy containing only the permissions your service account actually exercised in the audit window:

aws accessanalyzer start-policy-generation \
  --policy-generation-details '{
    "principalArn": "arn:aws:iam::123456789012:role/CIPipelineRole"
  }' \
  --cloud-trail-details '{
    "accessRole": "arn:aws:iam::123456789012:role/AccessAnalyzerRole",
    "trails": [
      {
        "cloudTrailArn": "arn:aws:cloudtrail:ap-southeast-1:123456789012:trail/my-trail",
        "allRegions": true
      }
    ],
    "startTime": "2026-01-25T00:00:00Z",
    "endTime": "2026-04-25T00:00:00Z"
  }'

The generated policy shows what the pipeline actually used in the last 90 days. Anything in the current policy but not in the generated policy is unused — and can likely be removed. Run this audit quarterly. Treat unused permissions as technical debt with a security cost.
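The comparison itself is just set arithmetic over action names. A sketch — the action sets below are made-up examples standing in for actions parsed from your current policy document and from the generated-policy output:

```python
def unused_actions(current: set[str], generated: set[str]) -> set[str]:
    """Actions granted today that CloudTrail never saw the role exercise."""
    return current - generated

# Hypothetical example: what the role is granted vs. what the audit observed.
current_policy_actions = {
    "ecr:PutImage",
    "ecs:UpdateService",
    "s3:GetObject",
    "s3:PutObject",
    "route53:ChangeResourceRecordSets",
}
generated_policy_actions = {"ecr:PutImage", "ecs:UpdateService", "s3:GetObject"}

for action in sorted(unused_actions(current_policy_actions, generated_policy_actions)):
    print(f"unused, candidate for removal: {action}")
```

In this example the Route 53 and S3 write permissions were never exercised during the window — exactly the "needed once, years ago" grants that should be first on the removal list.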

The Principle Applied to Job-Level Permissions

In GitHub Actions, every job can declare its own minimal permissions:

jobs:
  test:
    runs-on: ubuntu-latest
    permissions:
      contents: read      # Read the repo
      # Nothing else

  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write     # Push to GitHub Container Registry
      id-token: write     # OIDC for AWS role assumption

  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write     # OIDC only
      # No contents write, no packages write

Scoping permissions at the job level means a compromised test job has read-only access. A compromised build job can push images but not deploy. The blast radius of each compromise is bounded to that job's scope.
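That blast-radius bounding can itself be made checkable. A toy sketch, assuming you mirror each job's intended write scopes in an allow-list — every name here is illustrative, not a GitHub API:

```python
# Hypothetical allow-list mirroring the workflow above: which scopes each
# job is permitted to request at the "write" level.
ALLOWED_WRITE = {
    "test": set(),
    "build-and-push": {"packages", "id-token"},
    "deploy": {"id-token"},
}

def excess_writes(job: str, requested: dict[str, str]) -> set[str]:
    """Scopes a job requests as 'write' beyond its allow-list."""
    writes = {scope for scope, level in requested.items() if level == "write"}
    return writes - ALLOWED_WRITE.get(job, set())

print(excess_writes("test", {"contents": "read"}))
# set() — the test job asks for nothing beyond read access

print(excess_writes("deploy", {"id-token": "write", "contents": "write"}))
# {'contents'} — flagged: deploy should never write back to the repository
```

Run against the parsed workflow file in a pre-merge check, a guard like this turns "a compromised test job has read-only access" from a convention into an enforced invariant.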

The pipeline that can do everything is a liability. Build the pipeline that can do exactly what it needs to — nothing more.
