The average time from commit to production has shortened from weeks to hours over the past three years. But security review still takes days — and that’s a problem. This guide shows how to integrate five layers of security scanning directly into the CI/CD pipeline so that the build stops before a vulnerability reaches production. No theory — concrete configurations for GitHub Actions, GitLab CI, and Jenkins.
Why Manual Security Review Is Not Enough in 2026¶
Imagine a development team that deploys 15 times a day. Each deploy goes through code review, unit tests, integration tests, linting. But security review? That is handled by one security engineer who can’t keep up, so “critical” pull requests get merged with a note “security review later.” Later never comes.
This is not an edge case. According to the Snyk 2025 State of Application Security report, 76% of organizations admit that security testing slows their release cycle. Result? Either security is bypassed, or deployments are slower than necessary. Both are wrong.
A DevSecOps pipeline solves this trade-off radically: security checks run automatically, in parallel with other tests, on every commit. Developers get feedback in minutes, not days. The security team shifts from the role of gatekeeper to the role of architect — defining rules, not checking every pull request.
At CORE SYSTEMS, we implement DevSecOps pipelines for clients from fintech to public sector. From practice, we know that a properly configured pipeline reduces the number of security incidents in production by 60–80% while simultaneously accelerating the release cycle, because manual waiting for security approval is eliminated.
Five Layers of Security Scanning¶
A DevSecOps pipeline is not a single tool — it is an orchestration of five complementary scanning layers. Each layer catches a different type of vulnerability at a different stage of the code lifecycle. If you skip one, you have a blind spot.
1. SAST — Static Application Security Testing¶
SAST analyzes source code without running it. It looks for patterns that lead to vulnerabilities: SQL injection, XSS, hardcoded secrets, insecure deserialization, path traversal. SAST sees code the way a developer reads it — it can point to the exact line and explain why it is problematic.
In 2026, Semgrep is the de facto standard for SAST in modern teams. Unlike legacy tools (Fortify, Checkmarx), it is open-source, fast, and its rules are written in YAML, not a proprietary language. Semgrep Pro adds cross-file and cross-function analysis (taint tracking), which dramatically reduces false positives.
Specific example: Semgrep rule for SQL injection detection in Python:
```yaml
# .semgrep/sql-injection.yml
rules:
  - id: python-sql-injection
    patterns:
      - pattern: |
          cursor.execute(f"...{$VAR}...")
      - pattern-not: |
          cursor.execute(f"...{int($VAR)}...")
    message: |
      SQL injection: user input $VAR interpolated
      directly into the SQL query. Use parameterized queries.
    severity: ERROR
    languages: [python]
    metadata:
      cwe: CWE-89
      owasp: A03:2021
```
In practice, we combine Semgrep with custom rules specific to the client’s tech stack. One of our fintech clients had an internal ORM wrapper that bypassed standard parameterized queries — generic SAST never caught it. A custom Semgrep rule did.
2. SCA — Software Composition Analysis¶
Modern applications consist of 70–90% open-source dependencies. SCA scans these dependencies against known vulnerability databases (NVD, GitHub Advisory Database, OSV) and identifies libraries with CVEs.
Snyk Open Source goes further than simple CVE matching. It analyzes reachability — whether your code actually calls the vulnerable function in a dependency. Log4j (CVE-2021-44228) is in the dependency tree? Snyk verifies whether your application even uses JNDI lookup. If not, the priority changes dramatically.
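To make the reachability idea concrete, here is a deliberately simplified sketch. This is not Snyk's algorithm: real tools build cross-file call graphs and derive their watchlists from vulnerability databases, while this toy version only spots direct `module.function(...)` call sites in a single file, and the watchlist entries are illustrative.

```python
import ast

# Hypothetical watchlist of known-vulnerable (module, function) pairs --
# in a real SCA tool this comes from CVE advisory data.
VULNERABLE_CALLS = {("pickle", "loads"), ("yaml", "load")}

def reachable_vulnerable_calls(source: str) -> set:
    """Return the watchlisted calls the code actually makes.

    A toy stand-in for reachability analysis: only direct
    module.function(...) call sites are detected.
    """
    hits = set()
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and isinstance(node.func.value, ast.Name)
        ):
            pair = (node.func.value.id, node.func.attr)
            if pair in VULNERABLE_CALLS:
                hits.add(pair)
    return hits

app_code = "import pickle\nobj = pickle.loads(blob)\n"
print(reachable_vulnerable_calls(app_code))            # {('pickle', 'loads')}
print(reachable_vulnerable_calls("import pickle\n"))   # set()
```

The second call illustrates the point from the text: the vulnerable package is present in the tree, but since nothing invokes the dangerous function, the finding can be deprioritized.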
Key SCA metrics we track:
- Dependencies with known CVE: how many dependencies have active CVEs? Target: 0 critical, <5 high.
- Dependency freshness: how old are your dependencies? A dependency 3+ years without an update is a red flag.
- License compliance: are you using a GPL library in a proprietary product? SCA will catch it before a lawyer does.
- Transitive dependencies: your direct dependency is OK, but its dependency has a critical CVE. SCA scans the entire tree.
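The four metrics above can be computed mechanically from a dependency inventory. The following sketch uses an invented toy schema (`Dependency`, the `"CVE-...:severity"` string convention); a real pipeline would pull this data from Snyk or Trivy output instead.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Dependency:
    name: str
    last_release: date
    license: str
    cves: list = field(default_factory=list)  # e.g. ["CVE-2021-44228:critical"]

def sca_metrics(deps, today):
    """Aggregate the four SCA metrics described above from a toy inventory."""
    critical = [d.name for d in deps for c in d.cves if c.endswith(":critical")]
    high = [d.name for d in deps for c in d.cves if c.endswith(":high")]
    stale = [d.name for d in deps if (today - d.last_release).days > 3 * 365]
    copyleft = [d.name for d in deps if d.license.startswith("GPL")]
    return {
        "critical_cves": critical,  # target: empty
        "high_cves": high,          # target: fewer than 5
        "stale_3y": stale,          # 3+ years without a release is a red flag
        "copyleft": copyleft,       # license-compliance check
    }

deps = [
    Dependency("log4j-core", date(2021, 12, 6), "Apache-2.0",
               ["CVE-2021-44228:critical"]),
    Dependency("readline-gpl", date(2019, 1, 1), "GPL-3.0"),
]
report = sca_metrics(deps, today=date(2026, 1, 1))
print(report["critical_cves"])  # ['log4j-core']
print(report["copyleft"])       # ['readline-gpl']
```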
3. DAST — Dynamic Application Security Testing¶
DAST tests a running application from outside — like an attacker. It sends malformed requests, tests authentication, looks for exposed endpoints, and verifies HTTP security headers. DAST finds things that SAST cannot see: misconfigured servers, missing CORS policy, session management issues.
In a CI/CD pipeline, DAST typically runs against the staging environment after a successful deploy. OWASP ZAP (Zed Attack Proxy) is the open-source choice; for enterprise deployments, we recommend Burp Suite Enterprise or Snyk DAST (formerly Probely).
Example of ZAP integration into GitHub Actions:
```yaml
# .github/workflows/dast.yml
dast-scan:
  runs-on: ubuntu-latest
  needs: deploy-staging
  steps:
    - name: ZAP Full Scan
      uses: zaproxy/action-full-scan@v0.10.0
      with:
        target: 'https://staging.example.com'
        rules_file_name: '.zap/rules.tsv'
        cmd_options: '-a -j'
    - name: Upload SARIF
      uses: github/codeql-action/upload-sarif@v3
      with:
        sarif_file: 'report.sarif'
```
4. Container Scanning¶
If you deploy containers (and in 2026 — who doesn’t?), you must scan both the base image and application layer. A vulnerable Alpine base image with a CVE in OpenSSL or libcurl is an entry point that SAST will never see.
Trivy from Aqua Security is the most widely used open-source container scanner. It scans OS packages, language-specific dependencies, IaC misconfigurations, and secrets — all in one tool. At CORE SYSTEMS, we implement Trivy as a mandatory step in the CI pipeline and as an admission controller in Kubernetes (via Trivy Operator).
```yaml
# Trivy in CI — GitHub Actions
container-scan:
  runs-on: ubuntu-latest
  steps:
    - name: Build image
      run: docker build -t myapp:${{ github.sha }} .
    - name: Trivy vulnerability scan
      uses: aquasecurity/trivy-action@0.28.0
      with:
        image-ref: 'myapp:${{ github.sha }}'
        format: 'sarif'
        output: 'trivy-results.sarif'
        severity: 'CRITICAL,HIGH'
        exit-code: '1'  # fail build on findings
    - name: Upload to GitHub Security
      uses: github/codeql-action/upload-sarif@v3
      with:
        sarif_file: 'trivy-results.sarif'
```
Key detail: scan the final image, not just the Dockerfile. A multi-stage build may contain different packages in the final stage than in the builder stage. And scan on every build — new CVEs appear daily.
5. IaC Scanning — Infrastructure as Code¶
Terraform, Helm charts, Kubernetes manifests, CloudFormation, Pulumi — your infrastructure is code and needs security review like any other code. IaC scanning reveals misconfigurations before they get to production: S3 buckets without encryption, security groups with 0.0.0.0/0, Kubernetes pods with privileged: true.
Checkov from Bridgecrew (Palo Alto) is the most comprehensive open-source IaC scanner. It supports 1,000+ built-in rules for Terraform, CloudFormation, Kubernetes, Helm, ARM templates, and Docker. Custom policies are written in Python or YAML.
```yaml
# Checkov in GitLab CI
iac-scan:
  stage: test
  image: bridgecrew/checkov:latest
  script:
    - checkov -d ./terraform/
        --framework terraform
        --output sarif
        --soft-fail-on LOW
        --hard-fail-on CRITICAL,HIGH
        --skip-check CKV_AWS_18  # intentional public bucket
    - checkov -d ./k8s/
        --framework kubernetes
        --hard-fail-on CRITICAL
```
At CORE SYSTEMS, we implement Checkov with a custom policy pack for each client. A banking client has strict rules on encryption at rest for all storage services. An e-commerce client needs specific network isolation rules for PCI DSS compliance. Generic rules are not enough.
Complete Pipeline — How to Put It All Together¶
Five tools are five moving parts. The key is in orchestration — when each runs, how results are aggregated, and when the build stops.
Here is a reference pipeline architecture that we use as a foundation for most client deployments:
Phase 1: Pre-commit & PR (seconds)¶
Secret detection: git-secrets or gitleaks as a pre-commit hook. Stops commits with AWS keys, API tokens, private keys.
SAST (fast mode): Semgrep in diff-aware mode — scans only the files changed in the commit. Feedback in 30 seconds.
Why here: Developer gets feedback immediately, before push. Cheapest place to fix.
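To illustrate what gitleaks-style secret detection does at this stage, here is a minimal sketch. The patterns are illustrative only: gitleaks ships a far larger, entropy-aware ruleset, and a real hook scans the staged diff rather than a string.

```python
import re

# Illustrative patterns only -- real secret scanners combine many more
# rules with entropy checks to cut false positives.
SECRET_PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private-key-block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic-api-key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def scan_for_secrets(text):
    """Return (rule_id, line_number) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule_id, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((rule_id, lineno))
    return findings

staged = 'region = "eu-west-1"\naws_key = "AKIAIOSFODNN7EXAMPLE"\n'
print(scan_for_secrets(staged))  # [('aws-access-key-id', 2)]
```

A pre-commit hook would run this over `git diff --cached` output and exit non-zero on any finding, blocking the commit.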
Phase 2: CI Build (minutes)¶
SAST (full mode): Semgrep cross-file analysis of the entire repository. Taint tracking, data flow analysis.
SCA: Snyk test or Trivy fs scan — full dependency tree, reachability analysis, license check.
IaC scan: Checkov on Terraform, Helm, K8s manifests. In parallel with SAST and SCA.
Container scan: Trivy image scan after docker build. Severity threshold: CRITICAL = fail, HIGH = warning.
Why here: These scans run in parallel (1–3 minutes total). Build failure stops merge to main.
Phase 3: Post-deploy Staging (minutes–hour)¶
DAST: ZAP or Burp Suite against the staging environment. Baseline scan (5 min) on every deploy, full scan (30–60 min) as a nightly cron job.
API security testing: OpenAPI spec validation, fuzz testing of API endpoints, authentication and authorization edge cases.
Why here: DAST needs a running application. Staging is a safe environment for destructive tests.
Phase 4: Production Runtime (continuous)¶
Container runtime scanning: Trivy Operator in Kubernetes — continuous scanning of running images against newly published CVEs.
SBOM monitoring: CycloneDX or SPDX SBOM is generated at build time and continuously monitored. New CVE in a dependency? Alert.
Why here: A new CVE appears after deploy. Runtime scanning ensures you learn about it within hours, not weeks.
Complete GitHub Actions Pipeline — Copy-paste Ready¶
Here is a real configuration that we use as a foundation. Adjust severity thresholds and skip-check lists according to your risk appetite:
```yaml
# .github/workflows/devsecops.yml
name: DevSecOps Pipeline

on:
  pull_request:
    branches: [main, develop]
  push:
    branches: [main]

jobs:
  # --- SAST ---
  sast:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4
      - name: Semgrep SAST
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/default
            p/owasp-top-ten
            p/cwe-top-25
            .semgrep/
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_TOKEN }}

  # --- SCA ---
  sca:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Snyk dependency scan
        uses: snyk/actions/node@master
        with:
          args: --severity-threshold=high
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

  # --- IaC Scan ---
  iac:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Checkov IaC scan
        uses: bridgecrewio/checkov-action@v12
        with:
          directory: ./infrastructure/
          framework: terraform,kubernetes
          soft_fail_on: LOW,MEDIUM
          hard_fail_on: CRITICAL,HIGH

  # --- Container Scan ---
  container:
    runs-on: ubuntu-latest
    needs: [sast, sca]  # build only if code is clean
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t app:${{ github.sha }} .
      - name: Trivy container scan
        uses: aquasecurity/trivy-action@0.28.0
        with:
          image-ref: 'app:${{ github.sha }}'
          format: 'sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
      - name: Generate SBOM
        run: |
          trivy image --format cyclonedx \
            --output sbom.json app:${{ github.sha }}
      - uses: actions/upload-artifact@v4
        with:
          name: sbom
          path: sbom.json
```
Secret Management — First Line of Defense¶
The GitGuardian 2025 State of Secrets Sprawl report revealed 12.8 million new hardcoded secrets on public GitHub in 2024. AWS keys, database credentials, API tokens — all committed to repositories. And those are just public repos. In private ones, the situation is worse because developers have a false sense of security.
A pre-commit hook with gitleaks is the minimum. Configuration:
```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.22.0
    hooks:
      - id: gitleaks
```

Custom patterns for internal secrets go into a `.gitleaks.toml` that extends the default ruleset:

```toml
# .gitleaks.toml
[extend]
useDefault = true

[[rules]]
id = "internal-api-key"
description = "Internal API key pattern"
regex = '''CORE_API_[A-Za-z0-9]{32,}'''
tags = ["api", "internal"]
```
But a pre-commit hook is opt-in — a developer can bypass it with --no-verify. Therefore, secret scanning must also run in CI as a server-side gate. GitHub offers native secret scanning with push protection, GitLab has Secret Detection as part of its CI template, and for self-hosted setups you can run gitleaks or TruffleHog in a CI job.
How Not to Drown Your Team in False Positives¶
The fastest way to kill DevSecOps adoption is to flood developers with hundreds of alerts, 80% of which are false positives. The team starts ignoring security findings and the entire effort is wasted. At CORE SYSTEMS, we solve this with a three-step approach:
1. Severity-based gating: Not all findings are equal. Build fails only on CRITICAL and HIGH. MEDIUM generates a warning in the PR comment. LOW goes to the security dashboard — never blocks the build.
2. Baseline and suppression: When first integrating a tool into an existing project, you’ll get hundreds of findings from legacy code. Solution: create a baseline file with existing findings. CI then reports only new findings. Legacy findings are addressed during a tech debt sprint, not as a blocker for every PR.
```shell
# Semgrep — ignoring existing findings

# On first integration:
semgrep --config auto --sarif --output baseline.sarif .

# In CI — new findings only:
semgrep --config auto --baseline-commit ${{ github.event.pull_request.base.sha }}
```
3. Rule tuning: Generic rulesets contain rules that don’t make sense for your stack. Java serialization rules in a Python project? Disable. PHP-specific XSS in a Go backend? Disable. Review monthly suppressed findings — if a rule generates >50% false positives, modify it or replace it with a custom version.
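The severity-based gating from step 1 can be sketched as a small SARIF post-processing script. One caveat: the snippet assumes each result carries its severity under `properties.severity`, which varies between scanners (some use `level` or `security-severity`), so adapt the lookup to your tool's SARIF dialect.

```python
# Gate policy from the text: CRITICAL/HIGH block the build,
# MEDIUM warns, LOW is only logged to the dashboard.
BLOCKING = {"critical", "high"}
WARNING = {"medium"}

def gate_exit_code(sarif):
    """Compute a CI exit code (0 = pass, 1 = fail) from a SARIF report.

    Assumes result severity lives under properties.severity; real
    scanners differ, so adjust the lookup for your tool.
    """
    blocking = warnings = 0
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            sev = str(result.get("properties", {}).get("severity", "low")).lower()
            if sev in BLOCKING:
                blocking += 1
            elif sev in WARNING:
                warnings += 1
    print(f"{blocking} blocking finding(s), {warnings} warning(s)")
    return 1 if blocking else 0

report = {"runs": [{"results": [
    {"properties": {"severity": "HIGH"}},
    {"properties": {"severity": "medium"}},
    {"properties": {"severity": "low"}},
]}]}
print(gate_exit_code(report))  # 1
```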
SBOM and Supply Chain Security — A Requirement, Not Nice-to-have¶
The EU Cyber Resilience Act (CRA), which entered into force in late 2024 and whose main obligations apply from 2027, requires a Software Bill of Materials (SBOM) for all products with digital elements sold on the EU market. US Executive Order 14028 requires the same for federal suppliers.
An SBOM is not a PDF listing libraries. It is a machine-readable file (CycloneDX or SPDX format) containing:
- Components: all dependencies — direct and transitive, including versions.
- Licenses: what license each component is under.
- Vulnerabilities: known CVEs at the time of generation.
- Provenance: where the component comes from (registry, commit hash).
SBOM generation belongs in the CI pipeline as a build artifact:
```shell
# Generating an SBOM with Trivy
trivy image --format cyclonedx \
  --output sbom-$(date +%Y%m%d).cdx.json \
  myapp:${{ github.sha }}

# Alternative: syft (Anchore)
syft packages myapp:${{ github.sha }} \
  -o cyclonedx-json=sbom.cdx.json

# Validation and signing
cosign attest --predicate sbom.cdx.json \
  --type cyclonedx \
  myapp:${{ github.sha }}
```
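Once generated, a CycloneDX SBOM is plain machine-readable JSON, so downstream tooling can consume it directly. As a sketch, a summarizer might look like the following; note the sample document is hand-made and minimal, while real Trivy or syft output carries many more fields.

```python
import json

def summarize_sbom(cyclonedx_text):
    """Summarize a CycloneDX JSON SBOM: component count, licenses, purls."""
    sbom = json.loads(cyclonedx_text)
    components = sbom.get("components", [])
    licenses = set()
    for comp in components:
        for entry in comp.get("licenses", []):
            license_id = entry.get("license", {}).get("id")
            if license_id:
                licenses.add(license_id)
    return {
        "component_count": len(components),
        "licenses": sorted(licenses),
        "purls": [c["purl"] for c in components if "purl" in c],
    }

sample = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "requests", "version": "2.32.3",
         "purl": "pkg:pypi/requests@2.32.3",
         "licenses": [{"license": {"id": "Apache-2.0"}}]},
    ],
})
summary = summarize_sbom(sample)
print(summary["component_count"], summary["licenses"])  # 1 ['Apache-2.0']
```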
At CORE SYSTEMS, we implement SBOM pipelines with continuous monitoring — the SBOM is generated on every build and stored in a central registry (Dependency-Track). When a new CVE appears, the system automatically identifies all deployed versions containing the affected component and creates a ticket with priority based on severity and reachability.
Success Metrics — What to Measure and What Targets to Set¶
A DevSecOps pipeline without metrics is security theater. You measure to know whether the investment works. Here are the metrics we recommend tracking from day one:
- Mean Time to Remediation (MTTR): average time from vulnerability detection to fix. Critical CVE target: <24 hours. High: <7 days. MTTR under 72 hours for critical is top decile.
- Escape rate: how many vulnerabilities pass through the entire pipeline and reach production. Target: <5% of total findings. This is the ultimate pipeline effectiveness metric.
- False positive rate: how many findings the security team marks as false positive. Above 30% = pipeline needs tuning. Above 50% = developers stop taking alerts seriously.
- Security debt trend: total number of open security findings over time. Should be declining or stable, never growing.
- Pipeline failure rate: how many builds fail on security checks. Above 20% = rules are too strict or generate too many false positives.
- Developer friction: how much time security scans add to the CI pipeline. Target: <5 minutes for SAST+SCA+container scan. Above 10 minutes, developers start bypassing.
- Coverage: how many repositories have an active DevSecOps pipeline. Target: 100% for production workloads.
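The first two metrics above are easy to compute mechanically. The sketch below assumes an invented export schema (`detected`/`fixed` ISO timestamps per finding), standing in for whatever DefectDojo or your tracker actually emits.

```python
from datetime import datetime

def mttr_hours(findings):
    """Mean time to remediation, in hours, over resolved findings.

    Expects dicts like {"detected": iso8601, "fixed": iso8601 or None};
    the schema is a stand-in for your tracker's real export format.
    """
    resolved = [f for f in findings if f.get("fixed")]
    if not resolved:
        return 0.0
    seconds = sum(
        (datetime.fromisoformat(f["fixed"])
         - datetime.fromisoformat(f["detected"])).total_seconds()
        for f in resolved
    )
    return seconds / len(resolved) / 3600

def escape_rate(caught_in_pipeline, found_in_production):
    """Share of all known vulnerabilities that reached production."""
    total = caught_in_pipeline + found_in_production
    return found_in_production / total if total else 0.0

findings = [
    {"detected": "2026-01-01T08:00:00", "fixed": "2026-01-02T08:00:00"},
    {"detected": "2026-01-03T08:00:00", "fixed": None},  # still open
]
print(mttr_hours(findings))  # 24.0
print(escape_rate(95, 5))    # 0.05
```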
We recommend a Grafana dashboard with data from SARIF reports aggregated via DefectDojo or GitHub Security Overview. A weekly security standup (15 minutes) reviewing these metrics keeps the team accountable.
Implementation Roadmap — 8 Weeks from Zero to Production¶
You don’t need a year to build a DevSecOps pipeline. With clear priorities and the right tools, you have a functional pipeline in 8 weeks:
Weeks 1–2: Foundation¶
Secret scanning: gitleaks pre-commit hook + CI job. Immediate value, minimal effort.
SCA: Snyk or Trivy fs scan in CI. One of the simplest extensions — npm audit / pip-audit is not enough.
Output: Every PR has secret scan and dependency scan. Build fails on critical CVE.
Weeks 3–4: SAST & Container¶
SAST: Semgrep with p/default + p/owasp-top-ten rulesets. Baseline for existing code.
Container scanning: Trivy image scan after docker build. Set severity threshold according to context.
Output: PR comments with SAST findings. Container images scanned before push to registry.
Weeks 5–6: IaC & DAST¶
IaC scanning: Checkov on Terraform/Kubernetes. Custom policy pack for your environment.
DAST: ZAP baseline scan against staging after every deploy. Full scan as nightly cron.
Output: Infrastructure misconfigurations caught in PR. Staging scanned automatically.
Weeks 7–8: Observability & Tuning¶
Dashboard: Grafana dashboard with MTTR, escape rate, false positive rate. Data from DefectDojo or native GitHub/GitLab.
Tuning: Review false positives, adjust severity thresholds, custom rules for specific patterns.
SBOM: Automatic generation at build time, upload to Dependency-Track.
Output: Measurable DevSecOps pipeline with metrics and continuous improvement process.
Most Common Mistakes from Practice¶
In two years of deploying DevSecOps pipelines for clients, we’ve seen the same mistakes again and again:
- “We’ll turn everything on across all repositories at once”: Developers get hundreds of alerts overnight. Result: resistance, bypassing, disabling. Start with one pilot project, tune the rules, then roll out.
- Security team defines rules without developers: Rules that don’t make sense for the given stack generate false positives and frustration. Developers must be part of rule definition from the start.
- No ownership over findings: Security scan finds 200 findings, but nobody has them in the backlog. Every finding needs an owner and an SLA for remediation.
- Ignoring base images: The team scans their code but uses a 3-year-old Node.js base image with 47 CVEs. Base image hygiene is fundamental — pin versions, use distroless or Chainguard images.
- Missing feedback loop: Pipeline runs but nobody reads the results. Integration into PR comments, Slack notifications for critical findings, weekly security standup — these are the mechanisms that maintain attention.
- Security as a gate instead of a guardrail: A DevSecOps pipeline should not be a wall that PRs crash against. It should be a guide — soft-fail with clear communication is better than hard-fail without context.
Tools in Context — Decision Matrix 2026¶
Tool selection depends on tech stack, budget, and whether you prefer a best-of-breed or single-vendor approach:
Open-source Stack (EUR 0 license cost)¶
SAST: Semgrep OSS — community rules, single-file analysis, fast.
SCA: Trivy fs — scans lockfiles, supports 20+ package managers.
Container: Trivy image — OS packages + language deps in one scan.
IaC: Checkov — 1,000+ rules, Terraform/K8s/CloudFormation/Helm.
DAST: OWASP ZAP — automated baseline and full scan.
Secrets: gitleaks — pre-commit + CI, custom regex patterns.
Enterprise Stack (consolidated)¶
Snyk platform: SAST (Snyk Code), SCA (Snyk Open Source), container (Snyk Container), IaC (Snyk IaC) — one dashboard, one policy engine, IDE integration.
Alternative: GitHub Advanced Security — CodeQL for SAST, Dependabot for SCA, secret scanning, native SARIF aggregation.
DAST: Burp Suite Enterprise or Snyk DAST — scheduled scans, CI integration, authenticated scanning.
At CORE SYSTEMS, we implement both variants depending on client context. Startups and smaller teams start with the open-source stack — Trivy + Semgrep + Checkov covers 80% of needs at zero license cost. Enterprise clients with 50+ repositories typically benefit from a consolidated platform (Snyk, GitHub Advanced Security), because the reduction in operational burden offsets license costs.
Conclusion: A Pipeline Is a Product, Not a Project¶
A DevSecOps pipeline is not a one-time implementation. It is an internal product that needs an owner, a roadmap, a feedback loop, and continuous improvement. Rules change with new types of vulnerabilities. Tools get updated. Thresholds tighten as the team matures.
Start small — secret scanning and SCA in one repository. Add layers every two weeks. In two months, you will have a pipeline that catches 90% of vulnerabilities before they reach a pull request. And more importantly — developers will treat security as part of their workflow, not as an obstacle.
In 2026, the question is not whether to automate security in CI/CD. The question is how many vulnerabilities you will let through to production before you start. Every day without a DevSecOps pipeline is a day when you rely on luck instead of a system.
Need help with implementation?
Our experts can help with design, implementation, and operations. From architecture to production.
Contact us