Scanning Your Docker Image for Vulnerabilities Is Not Optional

by Eric Hanson, Backend Developer at Clean Systems Consulting

The CVE you've been shipping for six months

Your Node.js application image is based on node:18, and you haven't pulled a fresh base image or rebuilt since the project started. In that time, OpenSSL shipped four security patches, the Debian base shipped two glibc updates addressing privilege escalation vulnerabilities, and libexpat got a critical CVE fix. Your image has none of them. A scanner, if you ran one, would report critical findings.

This isn't a failure of your application code. It's the compounding cost of not updating the base image and not scanning. Every week you don't scan is a week you don't know what you're shipping.

How vulnerability scanning works

Image scanners analyze the software bill of materials (SBOM) of your image — the OS packages, language runtimes, and libraries present in each layer — and cross-reference them against vulnerability databases (NVD, GitHub Security Advisories, OS-vendor advisories). When a package version matches a known CVE, the scanner reports it with severity (Critical, High, Medium, Low), fix availability, and the CVE identifier.

The quality of results depends on the scanner's database coverage and how well it identifies packages in non-standard locations (e.g., JARs bundled inside a fat JAR, npm packages in node_modules).
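The version check at the heart of this matching can be sketched in a few lines of shell. The package and versions below are hypothetical, and real scanners use ecosystem-specific version ordering rather than plain sort -V, but the logic is the same:

```shell
# Is the installed version older than the first fixed version?
installed="1.1.1k"   # hypothetical OpenSSL build found in the image
fixed="1.1.1n"       # first fixed version listed in the advisory

# sort -V orders version strings; if the installed version sorts
# first and differs from the fix, the package is affected.
lowest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
  echo "AFFECTED: upgrade to $fixed"
fi
```

A scanner runs this comparison for every package in the SBOM against every advisory that names that package, which is why database coverage matters so much.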

Trivy: the tool to start with

Trivy (from Aqua Security) is the most widely adopted open-source container scanner. It handles OS packages, language ecosystem packages (npm, pip, Maven, Go modules), and misconfigurations:

# Install trivy
brew install trivy        # macOS
apt-get install trivy     # Debian/Ubuntu via Aqua repo

# Scan an image
trivy image your-image:tag

# Show only fixable vulnerabilities, medium and above
trivy image --ignore-unfixed --severity MEDIUM,HIGH,CRITICAL your-image:tag

# Output as SARIF for integration with GitHub Code Scanning
trivy image --format sarif --output trivy-results.sarif your-image:tag

# Scan the filesystem (useful in CI before building an image)
# (newer Trivy releases renamed --security-checks to --scanners,
# and the "config" value to "misconfig")
trivy fs --scanners vuln,misconfig .

Trivy integrates with all major CI systems and is free for the core functionality.
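In CI systems without a dedicated Trivy integration, the same gate works as a plain shell step, because Trivy's --exit-code flag makes it return non-zero on findings. A sketch, with the registry path and variable name as placeholders:

```shell
# A non-zero exit code on findings makes this step fail the job
trivy image \
  --exit-code 1 \
  --ignore-unfixed \
  --severity CRITICAL \
  "your-registry/your-image:${IMAGE_TAG:-latest}"
```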

Docker Scout: the built-in option

Docker Scout (included with Docker Desktop and available via CLI) provides similar scanning with tighter Docker Hub integration. If your images are on Docker Hub, Scout continuously monitors them and notifies you when new CVEs affect your published images:

# List CVEs in the image
docker scout cves your-image:tag

# Quick summary
docker scout quickview your-image:tag

# Compare two images
docker scout compare --to your-image:previous-tag your-image:tag

Docker Scout is particularly useful for seeing what base image updates would fix — it provides recommendations, not just findings.
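Scout exposes those recommendations as their own subcommand (the image tag here is illustrative):

```shell
# Ask Scout which base image updates would remove findings
docker scout recommendations your-image:tag
```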

Grype: for CI pipelines and SBOM workflows

Grype (from Anchore) pairs well with Syft (their SBOM generator) for pipelines that need to produce and consume SBOMs:

# Generate SBOM
syft your-image:tag -o spdx-json > sbom.json

# Scan the SBOM for vulnerabilities
grype sbom:./sbom.json

# Or scan the image directly
grype your-image:tag --only-fixed --fail-on high

--fail-on high makes Grype exit with a non-zero code if any High or Critical vulnerability is found — which makes it useful as a CI gate. --only-fixed filters to only vulnerabilities that have an available fix, reducing alert fatigue from unfixable findings.
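In a pipeline script, that exit code can be turned into an explicit gate with a readable failure message. A sketch, assuming grype is on the PATH and the image name is yours:

```shell
# Fail the job loudly when the vulnerability gate trips
if ! grype your-image:tag --only-fixed --fail-on high; then
  echo "vulnerability gate failed: fixable High/Critical findings" >&2
  exit 1
fi
```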

Integrating scanning into CI

A basic GitHub Actions step that fails the pipeline on critical vulnerabilities:

- name: Scan image
  uses: aquasecurity/trivy-action@master   # pin to a tagged release in real pipelines
  with:
    image-ref: 'your-registry/your-image:${{ github.sha }}'
    format: 'table'
    exit-code: '1'
    ignore-unfixed: true
    severity: 'CRITICAL,HIGH'

For teams using GitLab CI:

container-scanning:
  image: registry.gitlab.com/security-products/container-scanning:7
  variables:
    CS_IMAGE: your-registry/your-image:$CI_COMMIT_SHA
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report.json

GitLab's built-in container scanning produces a report that integrates with the GitLab security dashboard — findings appear in merge request pipelines alongside SAST and DAST results.

Managing the noise

A first scan on an old image will produce hundreds of findings. Most will be informational or low severity. The practical approach:

Triage by severity and fixability first. Critical vulnerabilities with fixes available are the only ones that should block a deploy. High vulnerabilities with fixes should be addressed within days. Low and Medium, especially unfixable ones, go in a backlog.

Unfixable vulnerabilities: some CVEs exist in packages that are present in the base image and have no fixed version available yet. These can't be patched — you can only wait for the upstream fix or switch to a different base image. Use --ignore-unfixed or equivalent to suppress these from your blocking checks.

Accept specific CVEs with an ignore file. Trivy supports a .trivyignore file:

# .trivyignore
# Accepted: CVE-2023-12345 — no fix available, not exploitable in our context
CVE-2023-12345
# Will fix by: 2025-06-01
CVE-2024-67890

Document why each CVE is accepted and when you'll revisit it. An undocumented ignore entry becomes invisible technical debt.
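That discipline is easy to check mechanically. A sketch of a lint step that flags any .trivyignore entry without a comment on the line above it (the file name and one-CVE-per-line format follow Trivy's documented behavior; the check itself is a hypothetical helper):

```shell
# Flag .trivyignore entries that lack a preceding explanatory comment
awk '
  /^#/    { doc = 1; next }                              # a comment documents the next CVE
  /^CVE-/ { if (!doc) print "undocumented: " $1; doc = 0 }
  /^$/    { doc = 0 }
' .trivyignore
```

Run it in CI and fail the job if it prints anything, and undocumented ignores can never accumulate silently.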

Keeping the base image current

Scanning tells you what's vulnerable. Fixing it usually means updating the base image.

Pin base image versions, but update the pin regularly:

FROM node:20.12.2-alpine3.19

Use Renovate or Dependabot to automate base image updates:

// renovate.json
{
  "extends": ["config:recommended"],
  "docker": {
    "enabled": true
  }
}

Renovate opens a pull request when a new version of node:20 is available, runs your CI (including the vulnerability scan), and gives you a one-click update path. Without automation, base image updates are done manually when someone remembers — which is rarely, and often after an incident.
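For teams already on Dependabot, the equivalent configuration is a short YAML file. A sketch using the documented docker ecosystem fields; adjust the directory to wherever your Dockerfile lives:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/"          # location of the Dockerfile
    schedule:
      interval: "weekly"
```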

The baseline you need today

  1. Run Trivy against your most critical production image right now:
    trivy image --severity CRITICAL your-image:latest
    
  2. Look at what Critical findings exist and how many have fixes available
  3. Update the base image if findings include OS-level packages with fixes
  4. Add a scan step to your CI pipeline that fails on Critical with available fixes

The scan takes under a minute. The output tells you exactly where you stand. What you do with that information is a risk decision — but not scanning means making that decision blind.
