
What is EKS Auto Mode? Kubernetes Management Simplified

Mar 18, 2025

Managing worker nodes in Kubernetes can be complex, requiring manual provisioning and scaling. But what if AWS managed it for you? EKS Auto Mode is a feature that lets you run fully managed Kubernetes clusters without manually managing worker nodes.

In this guide, you’ll learn:

  • What EKS Auto Mode is and why it’s useful
  • How to enable Auto Mode in Amazon EKS
  • How to create node pools for different workloads
  • How an app can trigger a new node in a node pool

What Is EKS Auto Mode?

EKS Auto Mode is a fully managed Kubernetes infrastructure where AWS automatically:

  • Provisions and manages nodes — you don’t need to create EC2 instances manually.
  • Handles upgrades, scaling, and patches behind the scenes.
  • Optimizes resources, so you pay only for what you use.

With Auto Mode, you focus on deploying applications rather than managing cluster infrastructure.

About EKS Addons

With the introduction of Auto Mode compute, several commonly used EKS add-ons are no longer required, including:

  • Amazon VPC CNI
  • kube-proxy
  • CoreDNS
  • Amazon EBS CSI Driver
  • EKS Pod Identity Agent

For storage provisioning, your StorageClasses must reference the Auto Mode EBS CSI provisioner:

provisioner: ebs.csi.eks.amazonaws.com
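
For reference, here is a minimal StorageClass using that provisioner, following the pattern from the AWS documentation (the name auto-ebs-sc and the gp3 parameter are illustrative choices, not requirements):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-ebs-sc
  annotations:
    # Optionally make this the cluster's default StorageClass
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.eks.amazonaws.com # Auto Mode's built-in EBS CSI driver
volumeBindingMode: WaitForFirstConsumer # create the volume only once a pod is scheduled
parameters:
  type: gp3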

Enable EKS Auto Mode

You must have the AWS CLI (aws) and eksctl installed. To create an EKS cluster in Auto Mode, use the eksctl CLI:

eksctl create cluster --name=<cluster-name> --enable-auto-mode

This command:

  • Creates an EKS cluster
  • Enables Auto Mode, letting AWS manage worker nodes
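
You can verify that Auto Mode is active by describing the cluster; the computeConfig field reflects the Auto Mode settings and should show enabled: true along with the active node pools:

aws eks describe-cluster --name <cluster-name> --query "cluster.computeConfig"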

Using a YAML configuration

You can also create a cluster using a YAML configuration file:

# cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-cluster
  region: us-east-1

autoModeConfig:
  enabled: true
  nodePools: [general-purpose, system]

This creates the cluster with Auto Mode enabled and two default node pools: system and general-purpose.

Once you’ve defined the configuration, create the cluster with the following command:

eksctl create cluster -f cluster.yaml
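
Once the cluster is up, Auto Mode exposes node pools as Kubernetes objects, so you should be able to list the two defaults directly:

kubectl get nodepools

This should show the system and general-purpose pools mentioned above.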

Create a Node Pool

You can create extra node pools when the defaults aren’t enough or when you need more customization. A node pool defines the compute capacity available to your workloads. You can find more information about all the parameters in the Amazon EKS documentation.

Use this YAML and apply it as a Kubernetes object:

# workloads.yaml
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: workloads
spec:
  template:
    metadata:
      labels:
        Environment: dev
        node-type: workloads
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["c", "m", "r"]
        - key: "eks.amazonaws.com/instance-generation"
          operator: Gt
          values: ["4"]
        - key: "eks.amazonaws.com/instance-cpu"
          operator: In
          values: ["2", "4", "8", "16", "32"]
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["on-demand"]
        - key: "kubernetes.io/arch"
          operator: In
          values: ["amd64", "arm64"] # allow both x86 and Graviton instances
  limits:
    cpu: "1000"
    memory: 1000Gi

This manifest:

  • Defines a node pool named workloads
  • Uses the c, m, and r instance families
  • Scales up to 1000 CPUs and 1000Gi of memory

After that, apply it to the cluster using:

kubectl apply -f workloads.yaml
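
You can then confirm that the new pool was registered alongside the defaults:

kubectl get nodepool workloads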

Deploy an App That Triggers a Node in the Node Pool

Now, let’s deploy an example app that scales up your node pool when needed. First, we need to make sure the pods are scheduled on the proper node pool. We can do so with any one of the following three options, or combine them for more granular control:

Schedule Pods on Workload Nodes Using nodeSelector

A simple way to force pods onto workload nodes is the nodeSelector field.

Use nodeAffinity for More Flexibility

A more advanced method is nodeAffinity, which gives finer control over scheduling by declaring which node labels a pod requires or prefers.

Prevent Pods from Running on System Nodes Using podAntiAffinity

To keep workload pods off nodes that are running system pods, use podAntiAffinity, again matching on labels.

Let’s see all three of these parameters combined in the following YAML.

Create a Kubernetes deployment file (deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: workload-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: workload-app
  template:
    metadata:
      labels:
        app: workload-app
    spec:
      nodeSelector:
        node-type: workloads
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-type
                    operator: In
                    values:
                      - workloads
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: node-type
                    operator: In
                    values:
                      - system
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: workload-app
          image: nginx
          resources:
            requests:
              cpu: 500m # each pod reserves half a CPU, which drives the scale-up below

Apply the deployment to EKS

kubectl apply -f deployment.yaml

Since each pod requests 500m CPU, scaling the deployment up to 5 replicas may exceed the available capacity, triggering EKS Auto Mode to add a new node to the pool.
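
For example, assuming the deployment above is running with its initial 3 replicas, you can scale it up and watch where the new pods land:

kubectl scale deployment workload-app --replicas=5
kubectl get pods -o wide -w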

Verify That EKS Auto Mode Scaled the Nodes

To check node status, run:

kubectl get nodes

You should see new nodes added automatically as needed.
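
To confirm the new capacity came from the workloads pool, you can also display the custom label as a column (-L is kubectl’s --label-columns flag):

kubectl get nodes -L node-type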

Conclusion

EKS Auto Mode removes the complexity of managing worker nodes, allowing you to focus on running applications.

You get a fully managed, scalable, and cost-efficient Kubernetes cluster.

Want more insights on DevOps, security, and automation? Don’t miss out — Follow me!

Connect with me on LinkedIn!
