Open Policy Agent Gatekeeper on minikube

At the admission phase we can apply policies to any aspect of any Kubernetes object. This is the advantage of OPA Gatekeeper over PodSecurityPolicies, which only cover Pods and the workloads that create them (ReplicaSets, StatefulSets and so on).

Gatekeeper adds its own CRDs in which we define policy per namespace. In this example the development namespace is fairly permissive, while the production namespace gets stricter Security Contexts and a whitelist of allowed image repos.

The next step will be to add Falco runtime security to make sure these policies are also enforced at runtime!

I'm going to use the following Makefile to kick-start the dev environment:

# Minikube resource defaults
MINIKUBE_MEM ?=4096
MINIKUBE_CPU ?=4

##
## Start minikube
##

launch-minikube: ## Start minikube
        minikube start --memory=${MINIKUBE_MEM} --cpus=${MINIKUBE_CPU} --driver hyperkit



helm-install: ## install gatekeeper OPA via helm
        helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
        helm install gatekeeper/gatekeeper --name-template=gatekeeper --namespace gatekeeper-system --create-namespace

Start minikube:

make launch-minikube

Let's start by adding the Gatekeeper hosted Helm chart repo and deploying the stack into its own dedicated namespace:

make helm-install

The first example whitelists the Docker repos we are allowed to pull images from. In production these would ideally be internally hosted and scanned.

Constraint Template example:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowedrepos
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRepos
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            repos:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedrepos

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          satisfied := [good | repo = input.parameters.repos[_] ; good = startswith(container.image, repo)]
          not any(satisfied)
          msg := sprintf("container <%v> has an invalid image repo <%v>, allowed repos are %v", [container.name, container.image, input.parameters.repos])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.initContainers[_]
          satisfied := [good | repo = input.parameters.repos[_] ; good = startswith(container.image, repo)]
          not any(satisfied)
          msg := sprintf("container <%v> has an invalid image repo <%v>, allowed repos are %v", [container.name, container.image, input.parameters.repos])
        }
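With the template defined, it needs to be applied before any constraints can reference it. A quick sketch, assuming the YAML above was saved to `k8sallowedrepos-template.yaml` (the filename is my own):

```shell
# Apply the ConstraintTemplate (filename assumed), then confirm Gatekeeper
# has registered it and generated the K8sAllowedRepos CRD.
kubectl apply -f k8sallowedrepos-template.yaml
kubectl get constrainttemplates
```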

Now we apply a constraint based on the template to the target namespaces:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: prod-repo-is-openpolicyagent
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - "production"
      - "development"
  parameters:
    repos:
      - "openpolicyagent"
      - "github"
      - "nginx"

Test by deploying nginx (allowed) and busybox (not allowed) in production:

kubectl create deployment nginx1 --image=nginx -n production
kubectl create deployment busybox --image=busybox -n production

If you describe the deployment the busybox Pod won't come up, but if we check the Gatekeeper logs the denial will show up as an enforcement action.
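One way to surface the denial, assuming the default Gatekeeper Helm chart labels (`control-plane=controller-manager` on the controller Pods):

```shell
# The denied Pod never appears; the denial surfaces in the ReplicaSet's
# events and in the Gatekeeper controller logs.
kubectl get events -n production --sort-by=.lastTimestamp
kubectl logs -n gatekeeper-system -l control-plane=controller-manager --tail=20
```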

In the default namespace, which the constraint doesn't match, we can create one:

kubectl create deployment busy1 --image=busybox -- sleep 100
kubectl get pods -w