Left SDLC, Right Runtime

In today's container-centric infrastructure landscape, DevOps professionals need a clear understanding of every tool and process involved in securing workloads — from the first line of code to a running container in production. This article walks through both halves of the security lifecycle: the left side (build-time SDLC) and the right side (runtime defense).

The mental model is simple: everything to the left happens before deployment — code scanning, image building, vulnerability analysis, IaC validation. Everything to the right happens after deployment — runtime monitoring, behavioral detection, incident response.


The Full Pipeline at a Glance

A secure container workflow touches these stages in order:

  1. Source Code Management — Git repositories hosting application code, Dockerfiles, Helm charts, and Terraform modules
  2. Security Scanning — Static CVE analysis with tools like Trivy, scanning code, dependencies, and IaC files
  3. Containerization — Building container images with Docker, enforcing secure Dockerfile practices
  4. Image Registry — Pushing verified images to a trusted registry after passing all gates
  5. Runtime Deployment — Running containers in Kubernetes or Docker with proper security contexts
  6. Runtime Monitoring — Active behavioral monitoring with eBPF-based agents (Falco, Tracee) detecting anomalies in real time

Stages 1–4 are the left side (shift left). Stages 5–6 are the right side (runtime).


Shift Left SDLC

"Shift left" means catching security issues as early as possible in the development pipeline — before code reaches production. Every stage acts as a gate: if a check fails, the pipeline stops.

What lives in the Git repo

Your Git repository is an ecosystem, not just code storage:

  • Application code — The actual service (Go, Python, Node, etc.)
  • Dockerfile — The container image recipe
  • Jenkinsfile / CI config — The pipeline definition that orchestrates build stages
  • Terraform / Helm / Kubernetes YAMLs — Infrastructure-as-Code that defines how and where things deploy

Every one of these files is a security surface. A misconfigured Terraform module can expose an RDS instance to the internet. A Dockerfile running as root bypasses container isolation. A Helm chart without resource limits enables denial-of-service.

The security gates

Before any image reaches a registry, the pipeline should enforce:

  • Repository scanning — Scan the remote Git repo for known vulnerabilities in dependencies
  • Filesystem scanning — Scan the local codebase for high/critical CVEs, skipping directories that shouldn't be flagged (vendored code, local certificates)
  • Image scanning — After docker build, scan the resulting image for vulnerabilities
  • IaC validation — Template Helm charts or Terraform plans and scan the output for misconfigurations
  • Dockerfile linting — Verify secure practices: non-root user, minimal base image, no secrets in layers

Each gate uses its exit code to halt the pipeline on failure. In the Jenkinsfile below, --exit-code 192 tells Trivy to exit with code 192 when high/critical CVEs are found; the value 192 itself is arbitrary: any non-zero code makes the Jenkins sh step fail the stage, which stops the pipeline.
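The gating mechanic is plain shell semantics, nothing Trivy-specific. A minimal sketch (the run_gate helper is hypothetical, used here only to simulate passing and failing scans):

```shell
#!/bin/sh
# Sketch of the fail-fast mechanic: a gate is just a command whose
# non-zero exit code stops everything after it. run_gate is a
# hypothetical helper, not a Trivy or Jenkins feature.

run_gate() {
    "$@"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "gate failed (exit code $status): pipeline halts here" >&2
    fi
    return "$status"
}

# A clean scan (simulated with `true`) lets the pipeline continue.
if run_gate true; then first=0; else first=$?; fi

# A scan with findings (simulated with exit 192, mirroring --exit-code 192)
# fails the stage.
if run_gate sh -c 'exit 192'; then second=0; else second=$?; fi

echo "first gate exit: $first, second gate exit: $second"
```

Jenkins's sh step applies exactly this rule: any non-zero exit from the wrapped command marks the stage failed and aborts the remaining stages.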


Jenkinsfile

The Jenkinsfile defines the CI/CD pipeline as code. Each stage is a security gate:

pipeline {
    agent any

    environment {
        APP = 'headers'
        VERSION = "0.0.1"
        GIT_HASH = """${sh(
                      returnStdout: true,
                      script: 'git rev-parse --short HEAD'
                      )}"""
        dockerhub = credentials('dockerhub')
    }

    stages {
        stage('Remote Code Repo Scan') {
            steps {
                echo "Running ${env.BUILD_ID} on ${env.JENKINS_URL}"
                sh "trivy repo --exit-code 192 https://github.com/kurtiepie/headers.git"
            }
        }
        stage('Code Base Scan') {
            steps {
                sh "trivy fs --exit-code 192 --severity HIGH,CRITICAL --skip-dirs ssl ."
            }
        }
        stage('Docker Build') {
            steps {
                sh "docker build . -t ${APP}:${VERSION}-${GIT_HASH}"
            }
        }
        stage('Scan Generated Image') {
            steps {
                sh "trivy image --exit-code 192 --severity HIGH,CRITICAL ${APP}:${VERSION}-${GIT_HASH}"
            }
        }
        stage('Push to Registry') {
            steps {
                sh "docker tag ${APP}:${VERSION}-${GIT_HASH} kvad/headers:${VERSION}"
                sh 'echo $dockerhub_PSW | docker login -u $dockerhub_USR --password-stdin'
                sh "docker push kvad/headers:${VERSION}"
            }
        }
        stage('Scan Helm IaC Files') {
            steps {
                sh "helm template headerschart/ > temp.yaml"
                sh "trivy config --severity HIGH,CRITICAL --exit-code 192 ./temp.yaml"
                sh "rm ./temp.yaml"
            }
        }
    }
}

Key patterns:

  • Versioned images — Tags include version + git hash (headers:0.0.1-a3f2b1c) for traceability
  • Fail-fast gates — --exit-code 192 stops the pipeline on any high/critical finding
  • Credential management — Docker Hub credentials injected via Jenkins credential store, never hardcoded
  • IaC scanning — Helm templates are rendered to YAML and scanned as Kubernetes config, catching misconfigurations before they reach the cluster

Dockerfile

The Dockerfile is the blueprint for your container image. Security starts here:

FROM golang:1.16-alpine AS builder

WORKDIR /app

COPY go.mod ./
COPY go.sum ./
RUN go mod download

COPY *.go ./
RUN go build -o ./headers

FROM alpine:3.18
COPY --from=builder /app/headers /bin/headers

# Create non-root user and set ownership
RUN adduser -D headeruser && chown headeruser /bin/headers

USER headeruser
CMD ["/bin/headers"]

Secure practices demonstrated:

  • Multi-stage build — The builder stage has Go tooling; the final image only has the compiled binary. This minimizes the attack surface and image size.
  • Non-root user — The USER headeruser directive ensures the process doesn't run as root. If an attacker compromises the application, they land as an unprivileged user.
  • Minimal base image — alpine:3.18 is only a few megabytes, with far less attack surface than ubuntu or debian.
  • Pinned versions — Base images specify versions (golang:1.16-alpine, alpine:3.18) rather than :latest to avoid supply chain drift.

Common Dockerfile mistakes to catch

  • USER root — Risk: trivial container escape. Fix: run as a dedicated non-root user (USER <nonroot>)
  • FROM ubuntu:latest — Risk: large, unpinned image. Fix: pin a version and prefer a minimal base like alpine
  • COPY . . before dependency install — Risk: cache invalidation and secrets baked into layers. Fix: copy dependency manifests first
  • RUN apt-get install -y curl wget — Risk: unnecessary tools left behind for attackers. Fix: install only what the app needs
  • No .dockerignore — Risk: secrets and git history end up in the image. Fix: add a .dockerignore excluding .git, .env
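The last mistake is cheap to fix. A starting-point .dockerignore (the entries are illustrative; adjust to your repo layout):

```
# .dockerignore — keep secrets and VCS history out of the build context
.git
.gitignore
.env
*.pem
*.key
Jenkinsfile
README.md
```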

Trivy's Dockerfile scanning catches many of these automatically:

trivy config --severity HIGH,CRITICAL ./Dockerfile

Right Side / Runtime

Once a container is deployed, the left-side gates are behind you. Runtime is where the real threats materialize — an attacker who bypasses your build checks, a zero-day in a dependency, or a compromised supply chain artifact.

Runtime security has three pillars:

1. Behavioral Monitoring

Runtime agents observe what containers actually do at the kernel level using eBPF (Extended Berkeley Packet Filter). Every syscall — execve, connect, open, mount — is visible. The agent compares observed behavior against a baseline and fires alerts on deviations.

Key tools:

  • Falco — The CNCF standard for runtime detection. Uses eBPF probes to monitor syscalls and match against rules. See the Gatekeeper & Falco tutorial for a complete lab.
  • Aqua Tracee — eBPF-based runtime security from Aqua Security. Traces syscalls and maps them to MITRE ATT&CK techniques.
  • Tetragon — Cilium's eBPF enforcement engine. Can not only detect but also block syscalls in real time.

What to detect at runtime:

Shell spawned in container       → execve of bash/sh in a production pod
Outbound connection to C2        → connect() to non-RFC1918 addresses
File written to /tmp             → open() with O_WRONLY in writable paths
Privilege escalation attempt     → setuid/setgid syscalls
Container drift                  → Binary executed that wasn't in the original image
Cloud metadata access            → connect() to 169.254.169.254
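The first detection in the list maps directly to a Falco rule. A sketch (spawned_process and container are standard Falco macros; the rule name and the shell list are illustrative and should be tuned for workloads that legitimately spawn shells):

```yaml
- rule: Shell Spawned in Production Container
  desc: Detect an interactive shell starting inside a container
  condition: >
    spawned_process and container and
    proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name
     image=%container.image.repository cmdline=%proc.cmdline)
  priority: CRITICAL
  tags: [container, shell, mitre_execution]
```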

2. Admission Control

Before a container even starts, admission controllers validate the pod spec against security policies. This is the bridge between left and right — it catches misconfigurations that slipped through CI/CD.

OPA Gatekeeper is the standard admission controller for Kubernetes:

# Block privileged containers
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sBlockPrivileged
metadata:
  name: block-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces:
      - kube-system

With this constraint active, any attempt to deploy a pod with privileged: true is rejected at the API server before the container is scheduled.
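A custom kind like K8sBlockPrivileged only exists after a matching ConstraintTemplate is installed. A sketch of what that template could look like (the Rego checks both containers and initContainers; the template name and message text are illustrative):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sblockprivileged
spec:
  crd:
    spec:
      names:
        kind: K8sBlockPrivileged
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sblockprivileged

        violation[{"msg": msg}] {
          c := input_containers[_]
          c.securityContext.privileged
          msg := sprintf("privileged container not allowed: %v", [c.name])
        }

        input_containers[c] {
          c := input.review.object.spec.containers[_]
        }

        input_containers[c] {
          c := input.review.object.spec.initContainers[_]
        }
```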

3. Incident Response

When a runtime alert fires, the response workflow should be automated:

  1. Alert — Falco fires a CRITICAL alert: "Shell spawned in production container"
  2. Enrich — Falcosidekick adds context: pod name, namespace, image, node, user
  3. Notify — Alert routed to Slack, PagerDuty, or SIEM
  4. Contain — Automated response cordons the node or kills the pod
  5. Investigate — Forensic data from eBPF traces shows the full syscall chain

The goal is to shrink the gap between detection and response from hours to seconds.
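The contain step can be as small as a webhook script. A hypothetical sketch, assuming a Falco alert with Kubernetes output fields (the sample alert is inlined so the parsing runs anywhere; the kubectl calls are commented out, and a real handler would receive the JSON from Falcosidekick and parse it with jq rather than sed):

```shell
#!/bin/sh
# Hypothetical containment hook: extract pod and namespace from a Falco
# alert, then (in a real cluster) delete the pod and cordon its node.
alert='{"priority":"Critical","rule":"Terminal shell in container","output_fields":{"k8s.ns.name":"prod","k8s.pod.name":"web-7f9c4"}}'

# Crude field extraction for the sketch; use jq in production.
ns=$(printf '%s' "$alert" | sed -n 's/.*"k8s\.ns\.name":"\([^"]*\)".*/\1/p')
pod=$(printf '%s' "$alert" | sed -n 's/.*"k8s\.pod\.name":"\([^"]*\)".*/\1/p')

echo "containing pod $pod in namespace $ns"
# kubectl -n "$ns" delete pod "$pod"          # contain: kill the pod
# kubectl cordon "$(kubectl -n "$ns" get pod "$pod" \
#   -o jsonpath='{.spec.nodeName}')"          # contain: cordon the node
```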


Putting It All Together

The left and right sides aren't separate — they're a continuous feedback loop:

Code Commit → Scan → Build → Scan → Push → Admission → Deploy → Monitor
     ↑                                                              │
     └──────── Runtime findings feed back into policy updates ──────┘

  • A runtime detection of a shell in a production container should trigger a Gatekeeper policy blocking that image's registry
  • A CVE discovered in a running workload should trigger a pipeline re-scan of the source image
  • Syscall profiles captured by eBPF should feed into seccomp policies for the next build (see the eBPF & Seccomp tutorial)
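For the last point, the end product is a seccomp profile. A minimal sketch of the shape such a generated profile takes (the allowlist here is illustrative and far too short for a real service; the captured syscall baseline supplies the actual names):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "close", "fstat", "mmap",
                "futex", "epoll_wait", "accept4", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Everything not in the allowlist fails with an errno, so a syscall the baseline never observed (say, mount) is denied even if an attacker gains code execution.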

The organizations that close this loop — where runtime intelligence directly hardens the build pipeline — are the ones with a genuinely mature security posture.


Next Steps

  • Build a complete Jenkins pipeline — Set up a local Jenkins instance with Docker and implement the Jenkinsfile above
  • Deploy Falco and Gatekeeper — Follow the K8s Runtime Monitoring tutorial for a hands-on lab
  • Generate seccomp profiles from eBPF traces — The eBPF & Seccomp tutorial walks through custom profile generation
  • Scan your own images — Run trivy image <your-image> against your production images and review the findings
  • Implement container drift detection — Deploy Falco with rules that alert when binaries not in the original image are executed