Getting Started with AKS

Rafael Medeiros
11 min read · Mar 16, 2022

What is AKS?

Azure Kubernetes Service, or simply AKS, is the Azure-managed version of Kubernetes. It provides the same capabilities as a default Kubernetes cluster, but integrated with Azure features: resource views, control-plane telemetry, log aggregation, and container health are accessible directly in the Azure portal.

A nice difference between a default Kubernetes environment and AKS is that you don’t manage the control plane nodes, and therefore you don’t pay for them; you only manage and pay for the worker nodes.

In this story, we are not going to cover every single topic about Kubernetes; instead, we will give an overview of the basics to get started with a simple application.

What you need to know about AKS to get started

A control plane node is the node that manages the whole cluster. It schedules pods, re-creates them in case of failure, and stores the definitions of the objects you create in etcd (the database of Kubernetes). It is also the node that runs the api-server, which is responsible for receiving commands from kubectl, the tool you use to manage the cluster;

Control plane Managed by Azure

A worker node is a node that executes the pods and is responsible for running the workloads. At the time of this writing, the first node pool in AKS is always a Linux one, so in order to run a Windows application, you also have to create a Windows node pool.

Worker Nodes managed by the customer

A namespace is a logical separation of environments or lifecycles. Depending on your architecture, you can run multiple environments in a single cluster, such as dev, qa, and even prod if you’d like, or you can separate frontend applications from databases; it’s up to your architecture. Just keep in mind that namespaces are used to separate resources;

A service is a logical abstraction for a deployed group of pods in a cluster. Pods can be created and destroyed at any time, so it’s hard to keep track of their IP addresses. A service enables a group of pods to be assigned a name and a unique IP address (ClusterIP). This way you don’t need to remember any IP address; just point to the name of the service and the pods will be able to communicate with each other.

An Ingress controller is the way users access your application from outside the cluster. You can use NGINX ingresses or an Application Gateway. You can have multiple ingresses pointing to different pods, but keep in mind that an ingress is not able to reach pods in a different namespace without some adjustments; the best practice is to keep them in the same namespace to avoid issues;

The following diagram shows a very basic example of an AKS cluster with 2 apps:

Here I didn’t add the service objects, to keep the diagram simple to understand.

1 — The user connects to the ingress public IP and reaches app1 or app2 depending on the URL they used to connect;

2 — The diagram shows 2 namespaces: one is for the dev environment and the other is kube-system, the default namespace created during AKS creation, which contains the core components of the AKS cluster;

3 — There are 2 node pools, one for Linux pods and another one for Windows pods.

Deploying AKS

Pre-requisites:

— Azure Account;

— Azure CLI (for the second deployment method);

— Terraform (for the third deployment method)

Here I’m going to show you how to deploy AKS using the portal, Azure CLI and Terraform.

Azure Portal

Search for Kubernetes services, and click the Create button to start the creation wizard:

For this demo, we are going to create a basic AKS cluster. You can stick with the defaults, but make sure that you are using the Standard preset configuration, 99.5% availability, and one node to optimize costs:

Once the validation passes, click Create:

Azure CLI

With Azure CLI, you first have to authenticate to Azure by running the az login command:

#To authenticate
az login
#To create a resource group for the cluster (if you don't have one yet)
az group create --name myResourceGroup --location eastus
#To create the cluster
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --generate-ssh-keys

Once that is completed, you will receive a JSON output containing information about your recently created cluster.

Terraform

If you are not familiar with Terraform, you can start here. You can also check out some posts that I’ve written about it.

The following Terraform code is going to create a resource group and the AKS within it:

resource "azurerm_resource_group" "test" {
  name     = "test-resources"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "test" {
  name                = "test-aks1"
  location            = azurerm_resource_group.test.location
  resource_group_name = azurerm_resource_group.test.name
  dns_prefix          = "testaks1"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
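To provision the cluster from this configuration, the usual Terraform workflow applies (assuming you are already authenticated to Azure, for example via az login):

```shell
# Download the azurerm provider
terraform init
# Preview the resources that will be created
terraform plan
# Create the resource group and the AKS cluster
terraform apply
```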

Using any of the options above, you should see your AKS cluster up and running after a few minutes:

Connecting to your AKS instance

To start managing your cluster, you first have to connect to it. For this, you’ll need to log in to your account using your terminal or Azure Cloud Shell:

Click the connect button in the overview page:

Run the commands that show up in the next screen: the first command makes sure that you are in the correct subscription where the cluster exists, and the second command gets the credentials of the cluster:
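For a cluster created with the names used earlier, those commands look like this (the subscription ID is a placeholder; use your own):

```shell
# Select the subscription that contains the cluster
az account set --subscription <your-subscription-id>
# Merge the cluster credentials into your local kubeconfig
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```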

On Windows, your credentials will be stored in C:\users\[your-user]\.kube\config. After that, you can try some commands to check if your cluster is working; let’s get all the node information:
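For example:

```shell
# List the worker nodes and their status, version, and IPs
kubectl get nodes -o wide
```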

Deploying an Application

For this demo, we will deploy an application that simply shows who connected to it. To keep things simple, the image is already created, and you can find it here:

We will first create a namespace with the following command issued from your terminal:

kubectl create namespace dev

If you go back to your cluster, you can check the new namespace:

We will also need a deployment to deploy the pod that will run our application, and a service of type LoadBalancer to reach the website. Here is the yaml file; make sure you copy and save it to a file (the name can be anything.yaml):

The image name is set on line 25:

The load balancer service will use port 80:
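The original manifest is shown as an image in the story; a minimal sketch of what such a file might look like is below. The deployment and container names match the kubectl commands used later in this story, but the labels and ports are assumptions, and the image reference is a placeholder for the one linked above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-website
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-website
  template:
    metadata:
      labels:
        app: simple-website
    spec:
      containers:
        - name: simple-website
          image: <your-image>   # use the image linked above
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: simple-website
  namespace: dev
spec:
  type: LoadBalancer
  selector:
    app: simple-website
  ports:
    - port: 80
      targetPort: 80
```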

As you can see, we can declare multiple Kubernetes objects in the same yaml file, as long as we use “---” to separate one from another, as stated on line 33. Once you have the yaml file ready, you can run the following command to apply the deployment and the service:

kubectl apply -f deployment.yaml

Two objects were created: the deployment and the service. You can go back to the portal and check under Workloads >> Deployments:

And the service will show up under Services and ingresses >> Services:

That IP address is what we are looking for. If we browse to that public IP, it should point us to the simple-website pod; the service acts as a load balancer by translating the public IP to the private one that the pod has:

The diagram for this implementation is the following:

Now that we have a website running, there’s another challenge that we need to solve. We don’t want our users to remember IP addresses, especially since this IP address can change from time to time. We need a URL pointing to this IP. For this requirement, we are going to use the NGINX ingress controller that we explained at the beginning of this story.

To set up NGINX, we have to follow the steps below, using Helm charts to do so:

Create a new namespace for the ingress to live in, to make sure we are not mixing resources with different lifecycles:

kubectl create namespace ingress-basic

Add the ingress-nginx repo to helm and update helm repos:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

The next step will install the ingress controller in the ingress-basic namespace that we’ve created before, with 1 replica, and its pods will run on a Linux node:

helm install nginx-ingress ingress-nginx/ingress-nginx `
  --namespace ingress-basic `
  --set controller.replicaCount=1 `
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux `
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux `
  --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux `
  --set-string controller.config.proxy-body-size=10m `
  --set controller.service.externalTrafficPolicy=Local

Note that I’m using PowerShell here; if you are using Bash, make sure you replace the backticks (`) with backslashes (\) to break this command into multiple lines.

In the previous steps, we installed the NGINX ingress controller, which is responsible for managing one or multiple ingresses in the cluster. We now have to create the ingress object itself; let’s see how to do that:

Here I have another yaml file that contains an ingress object and a new service; the ingress will point to this service, and the service will point to the deployment that contains the running website:

The URL is configured on line 12; you can use any name you want here, as long as you also add it to the hosts file, as I’ll explain in the next steps:

This service is of type ClusterIP, not a load balancer anymore. It gives you a service inside your cluster that other apps in the cluster can access; there is no external access. External access will happen through the ingress that we are configuring.
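This second manifest is also shown as an image in the story; a minimal sketch of what it might look like follows. The host matches the URL used later in this story, while the object names are hypothetical and the selector assumes the deployment labels sketched earlier:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simple-website-clusterip   # hypothetical name
  namespace: dev
spec:
  type: ClusterIP
  selector:
    app: simple-website
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-website-ingress     # hypothetical name
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: simple.globalhost.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: simple-website-clusterip
                port:
                  number: 80
```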

To apply this yaml file, you can use:

kubectl apply -f ingress.yaml

This will create the ingress below with the url that we have configured:

To access the website using that URL, you have to add the URL and the IP address to the hosts file, which on Windows lives at:

C:\Windows\System32\drivers\etc\hosts

Then you add the following line to the file:

<youripaddress> <websiteurl>

In my case it will be:

52.152.204.219 simple.globalhost.local

After that, you’ll be able to browse the website using the URL you’ve configured in the yaml file:

Our final architecture will look like this:

The NGINX ingress controller also has a lot of options beyond simply setting up a URL; check its documentation to find out more.

Rollback Application Version

Suppose someone from your team has made a wrong commit and changed the image that is running your website, and now everything is broken.

To represent that, let’s cause some mess in our environment by changing the image to an Apache one:

kubectl set image deployment/simple-website simple-website=httpd:2.4.53 --namespace dev

If we browse our site after a minute or so, we get the Apache default page:

That’s a problem, and we need to fix it ASAP by rolling back to the previous version of the website.

The fastest and safest way to fix it is to use the rollout commands. First, we have to check which revisions exist for that deployment with the following command:

kubectl rollout history deployment/simple-website -n dev

As you can see, we have three revisions for this deployment, and the change cause is set to none for all of them. Let’s roll back to revision 1 and see if it is what we want:

kubectl rollout undo deployment/simple-website -n dev --to-revision=1

Awesome, everything is working as expected again. Remember that the change cause was empty? This is because we never recorded the commands we used; at least, we never asked kubectl to do so. To save the command you used to the change-cause, simply add the --record parameter to the commands you run:

kubectl set image deployment/simple-website simple-website=httpd:2.4.53 --namespace dev --record
kubectl apply -f deployment.yaml --record

And now, if you run the history command again, you’ll see the commands that you used to deploy those revisions:

To get a better description of what a given revision does, you can inspect it individually:
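The command shown in the missing screenshot is presumably kubectl rollout history with the --revision flag, which prints the pod template of that specific revision, for example:

```shell
# Show the details (image, labels, etc.) of revision 2
kubectl rollout history deployment/simple-website -n dev --revision=2
```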

You can also use the kubernetes.io/change-cause annotation in your yaml files to keep track of what you are doing if you are applying changes through files instead of using the --record parameter.
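In a manifest, that annotation sits in the object metadata; a small sketch (the message text is just an example):

```yaml
metadata:
  name: simple-website
  annotations:
    kubernetes.io/change-cause: "update website image to v2"  # shows up in rollout history
```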

Wrap up

In this story, you have learned what AKS is and how to deploy it; then you’ve seen how to deploy an application and roll it back to a previous version when something goes wrong. There are lots of other things to explore: AKS has so many features and resources that I can’t explain them all in a single post, but this should give you the direction to get started with your application. I hope you have learned something; let us know in the comments if you have any questions!


Rafael Medeiros

DevOps Engineer | 3x Azure | CKA | Terraform Fanatic | Another IT Professional willing to help the community