Mastering the CKS Kubernetes Exam — 4/6 — Minimize Microservice Vulnerabilities

Rafael Medeiros
8 min read · Sep 2, 2024


As you prepare for the Certified Kubernetes Security Specialist (CKS) exam, it’s crucial to focus on securing your Kubernetes cluster effectively.

This is part 4 of a 6-post series covering all of the exam’s domains and competencies.

Part 1 is here

Part 2 is here

Part 3 is here

Today’s competency is Minimize Microservice Vulnerabilities, which constitutes 20% of the exam:

Table of Contents

1. Setting Up Appropriate OS-Level Security Domains

2. Managing Kubernetes Secrets

3. Using Container Runtime Sandboxes in Multi-Tenant Environments

4. Implementing Pod-to-Pod Encryption Using mTLS

1. Setting Up Appropriate OS-Level Security Domains


Configuring OS-level security domains can help isolate and protect your applications and data.

Kubernetes provides the securityContext field in the pod specification to configure OS-level security features for containers. This is essential for hardening your applications.

Example: Using securityContext to Harden a Pod

apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
  labels:
    app: secure-app
spec:
  containers:
  - name: secure-container
    image: nginx:alpine
    command:
    - tail
    - -f
    - /dev/null
    securityContext:
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
      runAsNonRoot: true
      runAsUser: 1000
      allowPrivilegeEscalation: false
  restartPolicy: Always

Key Points:

  • capabilities: Drops all capabilities and adds back only NET_BIND_SERVICE, reducing attack vectors.
  • runAsNonRoot: Ensures containers do not run as the root user.
  • runAsUser: Specifies a non-root user ID for running the container.
  • allowPrivilegeEscalation: Prevents processes from escalating privileges.
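Once secure-pod is running, a quick sanity check is to confirm which user the container’s process runs as. A minimal sketch (the exact group output depends on the image):

kubectl exec secure-pod -- id

uid=1000 gid=0(root) groups=0(root)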

Speaking of capabilities, how do you know which ones to drop?

Let’s take nginx as an example; on this machine its master process is running with PID 9755 (use pidof nginx to find yours).

If we grep its capability fields from /proc, we get the following:
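Here is a minimal sketch of that check, assuming the PID above (the exact hex values will vary):

grep Cap /proc/9755/status

CapInh: 0000000000000000
CapPrm: 0000003fffffffff
CapEff: 0000003fffffffff
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000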

On most systems this returns five lines:

  • CapInh = Inherited capabilities
  • CapPrm = Permitted capabilities
  • CapEff = Effective capabilities
  • CapBnd = Bounding set
  • CapAmb = Ambient capabilities set

If we now use capsh to decode the CapBnd value, we get every capability in the process’s bounding set, i.e. everything nginx is allowed to use:

capsh --decode=0000003fffffffff

Since that is a lot, a practical approach is trial and error: drop all capabilities and add back only what the application actually needs.

You can then check whether the welcome page is still served locally on port 8080: if you remove a capability the process actually needs, it crashes on startup.

Obviously, NET_BIND_SERVICE is only required when listening on privileged ports such as 80 and 443. Add the other capabilities back one by one as the startup errors point to them, as sketched below.
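A rough sketch of that trial-and-error loop, using Docker and the stock nginx image (you could equally do it with a pod manifest; the exact capabilities the image ends up needing depend on its version and how it starts its processes):

# drop everything and publish the container's port 80 on host port 8080
docker run -d --name cap-test --cap-drop=ALL -p 8080:80 nginx:alpine

# if the container dies on startup, the logs show which capability is missing
docker logs cap-test

# re-create the container, adding back the capability the error points at
# (for example NET_BIND_SERVICE for a "bind() ... permission denied" error),
# and repeat until the welcome page loads
docker rm -f cap-test
docker run -d --name cap-test --cap-drop=ALL --cap-add=NET_BIND_SERVICE -p 8080:80 nginx:alpine
curl -sI localhost:8080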

Expanding Policy-Based Security with OPA Gatekeeper

Beyond securityContext, Kubernetes can be further secured using policy-based security models through Open Policy Agent (OPA) and its Kubernetes extension, Gatekeeper. Gatekeeper allows you to define and enforce policies on Kubernetes resources.

Installing OPA:

export VERSION=v0.38.1
curl -L -o opa https://github.com/open-policy-agent/opa/releases/download/${VERSION}/opa_linux_amd64

chmod 755 ./opa

#test it
./opa run -s &
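The server listens on port 8181 by default; a quick sanity check that it is up (OPA’s health endpoint returns an empty JSON object when healthy):

curl -s localhost:8181/health
{}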

An example of a Rego policy is the following:

example.rego

package httpapi.authz

import input

default allow = false

allow {
    input.path == "home"
    input.user == "user1"
}

Here’s the breakdown:

  • Package Declaration: Defines the policy package httpapi.authz for HTTP API authorization.
  • Import Statement: Imports the input object, which provides context for the policy evaluation.
  • Default Allow Rule: Sets default policy outcome to false, denying access unless explicitly allowed.
  • Allow Rule Definition: Grants access if the request path is "home" and the user is "user1".

You need to know at least how to read a Rego policy so you can fix one if the exam asks you to; the Rego section of the OPA documentation is a good starting point.

To add this policy to the OPA server (which listens on port 8181), we send a PUT request with the policy we created earlier:

curl -X PUT --data-binary @example.rego http://localhost:8181/v1/policies/samplepolicy

To test it, we can send some input data to the server URL:

curl -X POST -H "Content-Type: application/json" --data '{"input": {"path": "home", "user": "user3"}}' localhost:8181/v1/data

If we send the request with the correct user (user1), allow comes back true:
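A sketch of that request, querying the allow rule directly under the package path we loaded above (the response shape is OPA’s standard Data API result):

curl -X POST -H "Content-Type: application/json" \
  --data '{"input": {"path": "home", "user": "user1"}}' \
  localhost:8181/v1/data/httpapi/authz/allow

{"result": true}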

Exam Tip:

For the exam, you will not need to write a rule from scratch or set everything up in Kubernetes, but be prepared to fix issues in existing rules.

2. Managing Kubernetes Secrets

Proper management of Kubernetes secrets is essential for keeping sensitive information secure.

Example: Creating and Using Kubernetes Secrets

  1. Create a Secret:

kubectl create secret generic my-secret --from-literal=password=my-password

  2. Use the Secret in a pod as an environment variable:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    env:
    - name: MY_SECRET
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password
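Keep in mind that Secret values are only base64-encoded, not encrypted. A quick check of what is actually stored (assuming the my-secret created above):

kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 -d
my-password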

Update Pod Definition to Mount the Secret

Modify your pod definition to mount the Secret as a volume. Here’s an example YAML configuration for a pod that mounts the Secret as a file:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret

Apply the Configuration

Save the above YAML to a file, e.g., my-pod.yaml, and apply it using kubectl:

kubectl apply -f my-pod.yaml

Verify the Secret is Mounted

After the pod is running, you can check if the Secret is mounted correctly:

kubectl exec -it my-pod -- ls /etc/secret

This should list the files or directories within /etc/secret, depending on what was included in the Secret.
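For example, assuming the my-secret created earlier (one file per key), you can read the mounted value directly:

kubectl exec -it my-pod -- cat /etc/secret/password
my-password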

3. Using Container Runtime Sandboxes in Multi-Tenant Environments


A container runtime sandbox is a security mechanism that isolates containers from one another and from the host system more effectively than standard container runtimes. This is achieved by creating additional layers of separation between containers and the host, ensuring that the impact is limited even if a container is compromised.

Example: Using gVisor on a Containerd Kubernetes Cluster

Install gVisor on the Cluster Nodes

Install the runsc runtime binary and its containerd shim on every node that should be able to run sandboxed pods:



set -e

ARCH=$(uname -m)

URL=https://storage.googleapis.com/gvisor/releases/release/20230925/${ARCH}

wget ${URL}/runsc ${URL}/runsc.sha512 \
${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512

sha512sum -c runsc.sha512 -c containerd-shim-runsc-v1.sha512

rm -f *.sha512

chmod a+rx runsc containerd-shim-runsc-v1

sudo mv runsc containerd-shim-runsc-v1 /usr/local/bin
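One step the snippet above does not cover is registering the new runtime with containerd. On a typical containerd setup this is a small addition to /etc/containerd/config.toml followed by a restart; the exact plugin section name depends on your containerd version, so treat this as a sketch:

cat <<'EOF' | sudo tee -a /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
EOF

sudo systemctl restart containerd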

Exam tip: You’ll probably not be asked to configure this part during the exam, but it is good to learn how to install it.

Use the RuntimeClass in a Pod Specification

First, we create the RuntimeClass that the pod will reference:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc

Apply the configuration:

kubectl apply -f gvisor-runtimeclass.yaml

Deploy a Pod with gVisor RuntimeClass:

  1. Create a YAML file for your pod, e.g., gvisor-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: gvisor-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: my-container
    image: nginx

  2. Apply the pod configuration:

kubectl apply -f gvisor-pod.yaml

Verify the Pod is Using gVisor

To ensure that the pod is using gVisor, you can check the pod’s runtime class and verify that it’s running under the runsc runtime.

kubectl get pod gvisor-pod -o jsonpath='{.spec.runtimeClassName}'

This should output gvisor, indicating that the pod requested the gVisor RuntimeClass.
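That only confirms which RuntimeClass was requested. A common way to verify the sandbox is actually in use (a sketch; the exact messages vary by gVisor version) is to run dmesg inside the pod, because gVisor exposes its own emulated kernel log rather than the host’s:

kubectl exec gvisor-pod -- dmesg

The first lines of the output should mention gVisor (for example, "Starting gVisor..."), which you will not see from a pod running on the default runc runtime.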

4. Implementing Pod-to-Pod Encryption Using mTLS

Mutual TLS (mTLS) ensures encrypted and authenticated communication between pods.

To enable mutual TLS (mTLS) between your Kubernetes pods, the most direct method is to run applications that handle TLS themselves inside the containers. Store the client and server certificates in TLS-type Secrets, and make sure all inter-pod communication goes over TLS-encrypted ports.
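For reference, a TLS-type Secret is created from an existing certificate and key pair (tls.crt and tls.key here are placeholders for your own files):

kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key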

If your applications don’t support TLS, you can use a proxy or ambassador container alongside your application. This proxy intercepts traffic from other pods, establishes and terminates TLS connections, and then forwards the traffic to your application container. Service meshes like Istio implement this approach, using Envoy as the proxy.

For more information on configuring service meshes like Istio and Linkerd for mTLS between pods, refer to the Kubernetes blog for guides on Linkerd and Istio.

An example of pod-to-pod encryption outside of a service mesh is found in a standard kubeadm-based Kubernetes cluster, specifically between the API Server and etcd.
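On a kubeadm cluster you can see that mTLS in the client-certificate flags the API server uses when talking to etcd (the paths below are the kubeadm defaults; yours may differ):

grep etcd /etc/kubernetes/manifests/kube-apiserver.yaml

- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379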

In addition to certificates, the API Server includes settings for securing TLS communications, such as --tls-cipher-suites for specifying supported cipher suites and --tls-min-version for enforcing the minimum TLS version. Details are available in the Kubernetes documentation.

Exam Tips:

They will probably not ask you about Istio, but you should at least understand how to set the minimum TLS version on the API server, as well as the accepted ciphers on etcd and the API server.

To set the minimum TLS version accepted by the api-server, you can use the following flag on the controlplane:

vim /etc/kubernetes/manifests/kube-apiserver.yaml

spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.30.1.2
    - --allow-privileged=true
    - --tls-min-version=VersionTLS12 # ADD THIS FLAG

To add the accepted ciphers, you can use this flag:

vim /etc/kubernetes/manifests/kube-apiserver.yaml

spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.30.1.2
    - --allow-privileged=true
    - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 # ADD THIS

And then, on the ETCD side, you can also set the accepted cipher:

vim /etc/kubernetes/manifests/etcd.yaml

spec:
  containers:
  - command:
    - etcd
    - --cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 # ADD THIS

To find out more, see the kube-apiserver flag reference and the etcd documentation on configuration flags in the official docs.

This way, the communication is established using a more secure cipher and a stronger TLS version.

Conclusion

Good luck with your CKS exam prep!

Stay tuned for Part 5/6, which will be out soon.

If you have any questions or need a hand with anything, just let me know. Let’s tackle this certification journey together!

Follow me on: LinkedIn

Written by Rafael Medeiros

DevOps Engineer | CNCF Kubestronaut | 3x Azure | Terraform Fanatic | Another IT Professional willing to help the community
