Mastering the CKS Kubernetes Exam — 3/6 — System Hardening

Rafael Medeiros
7 min read · Aug 29, 2024


As you prepare for the Certified Kubernetes Security Specialist (CKS) exam, it’s crucial to focus on securing your Kubernetes cluster effectively.

This is part 3 of a 6-post series covering all of the exam’s domains and competencies.

Part 1 is here

Part 2 is here

Today’s competency is System Hardening, which constitutes 15% of the exam:

Table of Contents

  1. Minimize host OS footprint (reduce attack surface)
  2. Minimize IAM roles
  3. Minimize external access to the network
  4. Appropriately use kernel hardening tools such as AppArmor, seccomp

1. Minimizing Host OS Footprint

Recommended Labs:

Closing Open Ports

Open ports on your system can be potential entry points for attackers. To enhance security, you need to identify and close unnecessary open ports.

Finding Processes Listening on Ports

To identify which processes are listening on which ports, you can use several tools and commands:

On Linux:

  1. List Listening Ports and Associated Processes
sudo netstat -tulnp

  2. Find Process Information

To get more details about the process using a specific port, use:

sudo lsof -i :<port_number>

Replace <port_number> with the port number you are interested in. For example, to find details about processes using port 1234:

sudo lsof -i :1234

Figure 1.1
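On many modern distributions netstat is deprecated and may not be installed; ss (from iproute2) gives the same view. A quick sketch, with port 1234 used as an example:

```shell
# List listening TCP/UDP sockets together with the owning process
# (run as root to see PIDs of processes owned by other users)
ss -tulnp

# Narrow the output to a single listening port, e.g. 1234
ss -tulnp 'sport = :1234'
```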

Let’s find and kill this service using its PID (5210 in the example above):

ls -l /proc/5210/exe

Output:
/usr/bin/app1

Now we kill it:

kill 5210

In the next sub-section, you can use the same commands to stop and remove app1 (the service from Figure 1.1).

Stopping Services Listening on Unwanted Ports

Managing packages involves installing, removing, and updating software to ensure that only necessary and secure packages are present on your system.

Example: Removing App1 on Ubuntu

Stopping and Disabling Services

Stop and disable services associated with these packages to ensure they do not start automatically.

On Linux:

  1. Stop the Service
sudo systemctl stop app1
  2. Disable the Service
sudo systemctl disable app1

If you want to remove App1:

sudo apt-get remove --purge app1

If it is a standalone executable rather than a packaged service, simply delete the file:

rm /usr/bin/app1
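Before deleting a binary by hand, it is worth checking whether a package still owns it; removing a packaged file manually leaves the package manager's database inconsistent. A sketch on a Debian/Ubuntu system (app1 is the hypothetical service from the example above):

```shell
# Ask dpkg which package, if any, owns the binary;
# if no package owns it, the file is safe to delete manually
dpkg -S /usr/bin/app1 \
  || echo "No package owns /usr/bin/app1 - safe to delete the file manually"
```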

2. Minimizing IAM Roles

Limiting the number of IAM roles and permissions reduces the risk of privilege escalation and unintended access.

Why Minimize IAM Roles?

  1. Least Privilege Principle: Reducing the number of roles and permissions ensures that users and services only have access to what they need.
  2. Reduced Attack Surface: Fewer roles with minimal permissions decrease the chance of unauthorized access and misuse.
  3. Simplified Management: Managing fewer roles with specific permissions simplifies auditing and monitoring.

AWS Example: Creating a Minimal IAM Role for Kubernetes Nodes

You need to create a permissions policy that specifies the exact actions your Kubernetes nodes are allowed to perform. In this case, the role will be able to:

  • Describe EC2 instances
  • Describe EC2 tags

This minimal set of permissions is usually sufficient for Kubernetes nodes to interact with EC2 metadata and manage resources.

permissions-policy.json

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
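Before sending the policy to AWS, you can sanity-check the file locally; any JSON parser will do (python3 is assumed to be installed here). The snippet recreates the file so it is self-contained:

```shell
# Recreate the policy file from above
cat > permissions-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances", "ec2:DescribeTags"],
      "Resource": "*"
    }
  ]
}
EOF

# Fails with a parse error and a non-zero exit code if the JSON is malformed
python3 -m json.tool permissions-policy.json
```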

Create the IAM Role

You will create an IAM role and attach the permissions policy defined above. Additionally, the role needs a trust policy that allows the EC2 instances to assume it.

Steps to Create the IAM Role:

  1. Create the Role with Trust Policy: this step involves defining who can assume the role. For Kubernetes nodes, the role should be assumed by EC2 instances.
  2. Example Trust Policy (trust-policy.json):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Command to Create the Role:

aws iam create-role \
--role-name MyK8sNodeRole \
--assume-role-policy-document file://trust-policy.json

Attach the Permissions Policy to the Role

After creating the role, attach the permissions policy to it. This policy defines what actions the role can perform.

Command to Attach the Policy:

aws iam put-role-policy \
--role-name MyK8sNodeRole \
--policy-name MyK8sNodePolicy \
--policy-document file://permissions-policy.json

Attach the Role to Your EC2 Instances

After creating and configuring the IAM role, you need to attach it to your EC2 instances running Kubernetes nodes. This step is done during instance launch or by modifying an existing instance.

To Attach the Role to an Existing Instance:

  • Go to the EC2 Management Console.
  • Select the instance you want to modify.
  • Choose “Actions” > “Security” > “Modify IAM Role.”
  • Select the IAM role (MyK8sNodeRole) and apply the changes.
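The same attachment can also be scripted with the AWS CLI. EC2 instances assume a role through an instance profile, so one has to be created and the role added to it first. A sketch, where the profile name and instance ID are placeholders:

```shell
# An EC2 instance assumes a role via an instance profile, not the role directly
aws iam create-instance-profile \
  --instance-profile-name MyK8sNodeProfile

aws iam add-role-to-instance-profile \
  --instance-profile-name MyK8sNodeProfile \
  --role-name MyK8sNodeRole

# Attach the instance profile to a running instance
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=MyK8sNodeProfile
```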

3. Minimizing External Access to the Network

In a complex network with multiple clients and servers, it’s crucial to implement network security measures to control access. This includes managing permissions for services and ports, such as specifying which servers can use SSH or access certain services on specific ports.

Effective network security ensures proper access control and protects against unauthorized connections. Limiting external access helps protect your nodes and control the flow of data.

AWS Example: Restricting Access with Security Groups

Set up your security groups to limit inbound and outbound traffic. For example, only allow traffic from specific IP addresses or CIDR blocks:

aws ec2 create-security-group \
--group-name MyK8sSecurityGroup \
--description "Kubernetes nodes security group"

Add a range of IPs that can SSH into the Kubernetes cluster:

aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 22 \
--cidr 203.0.113.0/24 # Replace with your trusted IP range

Next, allow access to port 6443, the Kubernetes API server port. Replace the CIDR below with the trusted IP range from which the API server should be reachable:

aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 6443 \
--cidr 203.0.113.0/24 # Allow access to the Kubernetes API server

Then you can add this security group to your EKS cluster by going to:
Clusters > cluster_name > Networking > Manage VPC resources

Removing Unwanted SSH Access

Assume you have a security group with an existing rule allowing SSH access. To remove this access, you can run:

aws ec2 revoke-security-group-ingress \
--group-id <security-group-id> \
--protocol tcp \
--port 22 \
--cidr 203.0.113.0/24 # <- Change this address

These settings ensure that only specified IP ranges can access your nodes, reducing exposure to potential attackers.
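After adding or revoking rules, it is worth confirming what the security group actually allows. A sketch (the group ID is a placeholder) that prints only the inbound rules:

```shell
# Show the inbound rules of the security group as a compact table
aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].IpPermissions' \
  --output table
```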

4. Using Kernel Hardening Tools

Recommended Lab:

Kernel hardening tools help protect your system by enforcing security policies and mitigating risks.

AppArmor (Application Armor) is a Linux kernel security module that provides mandatory access control (MAC) by enforcing security policies on applications. It helps to limit the capabilities of applications, reducing the risk of security breaches.

Here’s a vendor-neutral guide on using AppArmor to harden the security of your systems:

Install AppArmor

AppArmor is included in many Linux distributions. However, you may need to install it manually if it’s not already present.

On Debian/Ubuntu:

sudo apt-get update
sudo apt-get install apparmor apparmor-utils

Check that the AppArmor service is running:

sudo systemctl status apparmor
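Installing the userspace tools is not enough on its own: AppArmor also has to be enabled in the running kernel. On most distributions this can be verified with:

```shell
# Prints "Y" when the AppArmor LSM is enabled in the running kernel;
# the file does not exist on kernels built without AppArmor
cat /sys/module/apparmor/parameters/enabled 2>/dev/null \
  || echo "AppArmor is not enabled in this kernel"
```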

Create and Configure AppArmor Profiles

AppArmor uses profiles to define what an application can and cannot do. Profiles are typically stored in /etc/apparmor.d/.

Example: Creating a Profile for a Web Server

Create the Profile:

Open the profile file in an editor:

vim profile

#include <tunables/global>

# profile <profile_name> flags=(<flags>)
profile docker-nginx flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}

More about profiles here:

To apply this profile, you can run:

sudo apparmor_parser profile # "profile" is the name of the file created above

To see if the profile is enabled on that node, you can run:

aa-status

If you have a lot of other profiles, you can grep for the profile name:

aa-status | grep docker-nginx

Applying the Profile to a Container

You can apply the profile to a container by using the following:

nginx-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  nodeName: controlplane
  containers:
  - name: nginx
    image: nginx:latest
    securityContext:                    #ADD
      appArmorProfile:                  #ADD
        type: Localhost                 #ADD
        localhostProfile: docker-nginx  #ADD
    ports:
    - containerPort: 80

Then run:

kubectl apply -f nginx-pod.yaml
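Note that the securityContext.appArmorProfile field requires Kubernetes v1.30 or newer. On older clusters, the same binding is expressed as a pod annotation instead; the segment after the last slash in the key must match the container name (a sketch):

```yaml
metadata:
  name: nginx-pod
  annotations:
    # Pre-v1.30 way of binding an AppArmor profile to the "nginx" container
    container.apparmor.security.beta.kubernetes.io/nginx: localhost/docker-nginx
```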

You can see that the container is running with that profile by doing the following:

kubectl exec nginx-pod -- cat /proc/1/attr/current

The pod will not even be able to come up, which is expected: the profile denies write access to every path, and nginx needs to write (for example, its PID file) in order to start.

You can see the logs that apparmor produces using the following:

cat /var/log/syslog | grep -i apparmor
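If a profile turns out to be too strict, it can be unloaded again, or switched to complain mode so that violations are only logged rather than blocked. A sketch, using the profile file created earlier:

```shell
# Unload the profile from the kernel entirely
sudo apparmor_parser -R profile

# Or keep it loaded but only log violations instead of enforcing them
# (aa-complain ships with the apparmor-utils package)
sudo aa-complain ./profile
```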

Conclusion

Embarking on the journey to certification is not a small step, but each session of studying, each practice exam, and each hands-on exercise brings you closer to success.

Wishing you the best of luck as you prepare for your CKS exam!

Stay tuned for Part 4/6, which will be available shortly!

If you have any questions or need additional support, don’t hesitate to reach out.

Let’s achieve the certification together!

Follow me on: Linkedin
