Mastering the CKS Kubernetes Exam — 2/6 — Cluster Hardening

Rafael Medeiros
8 min read · Aug 28, 2024


As you prepare for the Certified Kubernetes Security Specialist (CKS) exam, it’s crucial to focus on securing your Kubernetes cluster effectively.

This is part 2 of a 6-post series covering all of the exam’s domains and competencies.

You can read the first part here.

Today’s competency is Cluster Hardening, which constitutes 15% of the exam:

Table of Contents

  1. Restrict Access to the Kubernetes API
  2. Use Role-Based Access Controls (RBAC) to Minimize Exposure
  3. Exercise Caution with Service Accounts
  4. Regularly Update Kubernetes

1. Restrict Access to the Kubernetes API

The Kubernetes API is the core of your cluster’s operations. Securing access to it ensures that only authorized users and systems can interact with your cluster.

Changing the Kubernetes API Service to ClusterIP:

Let’s first inspect the kube-apiserver settings:

vim /etc/kubernetes/manifests/kube-apiserver.yaml

If you see the following flag:

spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.100.11
    - --kubernetes-service-node-port=31000

Delete the flag or set its value to 0. This changes the kubernetes service from NodePort back to ClusterIP, so the API server is no longer exposed on a high port on every node.
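
Because kube-apiserver runs as a static pod, the kubelet recreates it automatically once the manifest is saved. A quick way to confirm it came back up (the pod name is suffixed with your host name, so yours may differ):

kubectl -n kube-system get pods | grep kube-apiserver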

If you check the kubernetes service now, after the kube-apiserver restarts:

kubectl get svc

The output will look like this:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   30s

Example: Restricting API Access in an Azure Kubernetes Service (AKS) Cluster

In Azure, it is as easy as a single command:

az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --api-server-authorized-ip-ranges 73.140.245.0/24

You can also remove the restriction later by clearing the authorized ranges:

az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --api-server-authorized-ip-ranges ""
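
To confirm what was applied, you can query the cluster’s API server access profile; the query path below is my assumption of the az aks show output shape:

az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query apiServerAccessProfile.authorizedIpRanges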

Example Configuration: AWS EKS

First, you need to identify the endpoint of your EKS cluster, which you’ll restrict access to. This can be done using the AWS CLI or the AWS Management Console.

Get the EKS Cluster Endpoint Using AWS CLI

aws eks describe-cluster --name your-cluster-name --query "cluster.endpoint" --output text

Replace your-cluster-name with the name of your EKS cluster.

Modify the EKS Cluster Security Group

The API server in EKS is protected by a security group. You can update this security group to allow access only from specific IP addresses or IP ranges.

Find the Security Group Associated with Your EKS Cluster

The security group associated with your EKS cluster’s API server can be found in the cluster’s VPC settings. Retrieve the security group ID:

aws eks describe-cluster --name your-cluster-name --query "cluster.resourcesVpcConfig.securityGroupIds" --output text

This command will return the security group ID(s) associated with your cluster.

Update the Security Group Rules

Using the AWS Management Console or AWS CLI, update the inbound rules of the security group to restrict access to the API server. Suppose the security group ID is sg-0123456789abcdef0 and you want to allow access only from the IP range 203.0.113.0/24. Run:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.0/24

This command allows inbound access to port 443 (the default port for the Kubernetes API server) only from the IP range 203.0.113.0/24.
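
Before revoking anything, it can help to review the group’s current inbound rules; a small sketch reusing the example group ID from above:

aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query "SecurityGroups[0].IpPermissions"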

Remove Existing Rules (if necessary):

If you need to remove existing rules that allow broader access, you can use:

aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0

This command removes access to port 443 from all IP addresses.

Verify the Configuration

After updating the security group rules, verify that the restrictions are applied correctly:

Try accessing the Kubernetes API server from an allowed IP address and from a non-allowed IP address to ensure that the restrictions are effective:

kubectl get nodes --server=https://your-cluster-endpoint-you-got-from-first-step \
  --kubeconfig=/path/to/your/kubeconfig
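
As an alternative to security-group rules, EKS can also restrict its public endpoint directly with authorized CIDRs; a sketch using the same placeholder cluster name and IP range as above:

aws eks update-cluster-config \
  --name your-cluster-name \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.0/24",endpointPrivateAccess=true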

2. Use Role-Based Access Controls (RBAC) to Minimize Exposure

Role-Based Access Control (RBAC) is crucial in Kubernetes for defining what users and services can and cannot do within the cluster. To minimize exposure, ensure that you create specific roles with the least privilege principle.

RBAC in Kubernetes is built on four main components:

  1. Role: Defines a set of permissions (rules) for resources within a namespace.
  2. ClusterRole: Similar to a Role, but its permissions apply cluster-wide.
  3. RoleBinding: Associates a Role or ClusterRole with users or service accounts within a namespace.
  4. ClusterRoleBinding: Associates a ClusterRole with users or service accounts across the entire cluster.
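
For reference, here is a minimal declarative sketch of the same Role and RoleBinding that the kubectl commands below will create:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only-pods
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-pods-binding
  namespace: default
subjects:
- kind: User
  name: read-only-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only-pods
  apiGroup: rbac.authorization.k8s.io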

Example Setup

1. Creating a Role and RoleBinding

Let’s start by creating a Role and RoleBinding in the default namespace.

Step 1: Create a Role

To create a Role named read-only-pods with read-only access to pods in the default namespace, use the following command:

kubectl create role read-only-pods \
  --verb=get,list,watch \
  --resource=pods \
  --namespace=default

Step 2: Create a RoleBinding

To bind the read-only-pods Role to a user named read-only-user, use the following command:

kubectl create rolebinding read-only-pods-binding \
  --role=read-only-pods \
  --user=read-only-user \
  --namespace=default

Step 3: Testing the Role

To test whether read-only-user can list pods in the default namespace, use the kubectl auth can-i command to impersonate the user.

Test the Permission

kubectl auth can-i list pods --namespace=default --as read-only-user

Example Output and Interpretation

If the Role is correctly set up and the user has the permissions, you should see:

yes

This indicates that read-only-user is allowed to list pods in the default namespace.

If the Role is not correctly set up or the user does not have the permissions, you will see:

no

This indicates that read-only-user is not allowed to list pods in the default namespace.
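
You can also enumerate everything the user is allowed to do in the namespace:

kubectl auth can-i --list --namespace=default --as read-only-user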

3. Exercise Caution with Service Accounts

Service accounts are used by applications running in pods to interact with the Kubernetes API. Each service account is associated with a set of credentials (tokens) that are automatically mounted into the pods that use it.

It’s crucial to manage these accounts carefully to avoid potential security risks.
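
You can see these auto-mounted credentials from inside any running pod, for example (assuming a pod named example-pod exists in the namespace):

kubectl exec example-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount

The directory typically contains ca.crt, namespace, and token.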

Example: Creating a Restricted Service Account

Create a service account with minimal permissions using kubectl:

kubectl create serviceaccount restricted-sa --namespace=default

Define a Role with limited permissions. For example, allowing read-only access to pods:

kubectl create role read-only-pods \
  --verb=get,list,watch \
  --resource=pods \
  --namespace=default

Create a RoleBinding to Associate the Role with the Service Account:

kubectl create rolebinding restricted-sa-binding \
  --role=read-only-pods \
  --serviceaccount=default:restricted-sa \
  --namespace=default
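
A quick sanity check that the binding works; service accounts are impersonated with the system:serviceaccount:<namespace>:<name> prefix:

kubectl auth can-i list pods --namespace=default --as=system:serviceaccount:default:restricted-sa

This should return yes.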

Avoid Using Default Service Accounts

By default, Kubernetes creates a service account named default in each namespace, and pods that don’t specify a service account use it automatically. Avoid relying on it for your applications unless absolutely necessary: any permission granted to the default account is inherited by every pod that uses it.

Example: Disabling Default Service Account Usage

Although you cannot delete the default service account, you can configure your pods to use specific service accounts with limited permissions.
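
One extra hardening step, described in the Kubernetes documentation, is to turn off automatic token mounting on the default account, so pods that fall back to it receive no API credentials at all:

kubectl patch serviceaccount default \
  --namespace=default \
  -p '{"automountServiceAccountToken": false}'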

Specify a service account in the pod definition. When creating a pod, set the service account to use:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: restricted-sa
  containers:
  - name: example-container
    image: nginx

Save this as pod.yaml and apply it:

kubectl apply -f pod.yaml

The pod is now using a very limited service account, which can only get, list, and watch pods.

We can test it to see if we can get secrets:

kubectl auth can-i list secrets --namespace=default --as=system:serviceaccount:default:restricted-sa
no

Tip: try this lab in a minikube environment!

4. Regularly Update Kubernetes

Keeping your Kubernetes version up to date is essential for maintaining security and stability. Regular updates help patch vulnerabilities and ensure you benefit from the latest features and improvements.

During the exam, you’ll be allowed to consult the Kubernetes documentation, so we will follow the official kubeadm upgrade guide:

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade


Example: Upgrading a Cluster with kubeadm

Upgrading the Control Plane Node

Check the current version:

kubectl get nodes
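
The output will look something like this; the exact versions are illustrative, assuming the cluster starts on v1.29.0:

NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   30d   v1.29.0
node01         Ready    <none>          30d   v1.29.0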

First, we need to drain the control plane node:

kubectl drain controlplane --ignore-daemonsets

All pods are evicted from that node, except DaemonSet-managed pods, which the --ignore-daemonsets flag leaves in place.

If you are not on the control plane node, SSH to it now:

ssh controlplane

Check the kubelet and kubeadm versions:

kubelet --version
kubeadm version

We can check what versions are available for upgrade with the following:

sudo apt update
sudo apt-cache madison kubeadm

Here the newest patch available is 1.30.0-1.1, so this is the version we are going to upgrade to instead of going directly to the latest. Let’s play it safe with our cluster :)

Start by upgrading kubeadm to that version:

sudo apt-mark unhold kubeadm && \
  sudo apt-get update && sudo apt-get install -y kubeadm=1.30.0-1.1 && \
  sudo apt-mark hold kubeadm

Check kubeadm again:

kubeadm version

You can now plan the upgrade:

sudo kubeadm upgrade plan v1.30.0

And Apply:

sudo kubeadm upgrade apply v1.30.0

The control plane components have been upgraded!

After that, we upgrade kubelet and kubectl and restart the kubelet service:

sudo apt-mark unhold kubelet kubectl && \
  sudo apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1 && \
  sudo apt-mark hold kubelet kubectl

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Now the control plane has been upgraded, but the node is still cordoned and cannot schedule workloads. Let’s make it schedulable again:

kubectl uncordon controlplane

Now the node is fully ready!

Upgrading the Worker Node

Similar to the control plane, let’s drain the worker node first and then SSH to it:

kubectl drain node01 --ignore-daemonsets

ssh node01

Upgrade kubeadm:

sudo apt-mark unhold kubeadm && \
  sudo apt-get update && sudo apt-get install -y kubeadm=1.30.0-1.1 && \
  sudo apt-mark hold kubeadm

Run the upgrade now:

sudo kubeadm upgrade node

After that, let’s update kubelet and kubectl:

sudo apt-mark unhold kubelet kubectl && \
  sudo apt-get install -y kubelet=1.30.0-1.1 kubectl=1.30.0-1.1 && \
  sudo apt-mark hold kubelet kubectl

Finally, we reload the daemon and restart the kubelet service:

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Back on the control plane, uncordon the worker so it can schedule workloads again, then check the nodes:

kubectl uncordon node01
kubectl get nodes

Both nodes should now report VERSION v1.30.0. The cluster has been successfully upgraded!

Conclusion

The path to certification is challenging, but each study session, practice exam, and hands-on exercise brings you one step closer to your goal. Trust in your preparation, stay focused, and keep a positive mindset.

Best of luck with your CKS preparation!

Part 3/6 will be here very soon, wait for it!

Feel free to reach out if you have any questions or need further assistance. Good luck securing your clusters!

Follow me on: LinkedIn
