Azure Kubernetes Service (AKS) – Part 2

In the previous article Azure Kubernetes Service (AKS) – Part 1, we discussed creating an AKS cluster with a default node pool. Let's explore AKS further and see the types of node pools that exist, and how we can manage them in real-world scenarios.

Create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)

In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into node pools. These node pools contain the underlying VMs that run your applications. The initial number of nodes and their size (SKU) are defined when you create an AKS cluster, which creates a system node pool.

To support applications that have different compute or storage demands, you can create additional user node pools. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and tunnelfront.

User node pools serve the primary purpose of hosting your application pods. However, application pods can be scheduled on system node pools if you wish to have only one pool in your AKS cluster. For example, use additional user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage.

There are certain limitations when using node pools.

  • Restricted VM sizes

Each node in an AKS cluster contains a fixed amount of compute resources such as vCPU and memory. If an AKS node has insufficient compute resources, pods might fail to run correctly. To ensure that the required kube-system pods and your applications can be scheduled reliably, it is recommended NOT to use the following VM SKUs in AKS:

  • Standard_A0
  • Standard_A1
  • Standard_A1_v2
  • Standard_B1s
  • Standard_B1ms
  • Standard_F1
  • Standard_F1s

Please refer to the following link for the maximum number of nodes and pods supported in an AKS cluster: https://docs.microsoft.com/en-us/azure/aks/quotas-skus-regions

  • We can delete a system node pool, provided there is another system node pool available in the AKS cluster.
  • The system node pools must contain at least one node, and user node pools may contain zero or more nodes.
  • The AKS cluster must use the Standard SKU load balancer to use multiple node pools, the feature is not supported with Basic SKU load balancers.
  • The AKS cluster must use virtual machine scale sets for the nodes.
  • The node pool name may only contain lowercase alphanumeric characters and must begin with a lowercase letter.
    • For Linux node pools, the length must be between 1 and 12 characters.
    • For Windows node pools, the length must be between 1 and 6 characters.
  • All node pools must reside in the same virtual network.
  • When creating multiple node pools at AKS cluster creation time, all Kubernetes versions used by the node pools must match the version set for the control plane. This can be changed after the cluster is provisioned by using per node pool operations.
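The naming rules above can be checked locally before calling the CLI. Here is a minimal shell sketch — the validate_pool_name helper is my own illustration, not part of the az CLI:

```shell
# Sketch: validate a proposed node pool name against the rules above.
# Lowercase alphanumeric, must begin with a lowercase letter;
# maximum length 12 for Linux pools, 6 for Windows pools.
validate_pool_name() {
  local name="$1" os="$2" max=12
  if [ "$os" = "windows" ]; then max=6; fi
  printf '%s' "$name" | grep -Eq "^[a-z][a-z0-9]{0,$((max - 1))}$"
}

validate_pool_name nodepool2 linux && echo "nodepool2: valid Linux pool name"
validate_pool_name GpuPool linux || echo "GpuPool: invalid (uppercase letters)"
validate_pool_name winpool1 windows || echo "winpool1: too long for a Windows pool"
```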

Since we already covered creating an AKS cluster in the earlier article, let's continue from there and discuss adding, updating, and deleting node pools in an AKS cluster.

a. Add a node pool in AKS cluster with a unique subnet (under preview)

This feature of adding a node pool in a unique subnet was in preview at the time of writing this article and may become generally available later.

There might be situations where we need to split the workload into separate pools for logical isolation. This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.

There are certain Limitations:

  • All subnets assigned to node pools must belong to the same virtual network.
  • System pods must have access to all nodes in the cluster to provide critical functionality such as DNS resolution via coreDNS.
  • Assignment of a unique subnet per node pool is limited to Azure CNI during preview. This may change later since it is under preview.
  • Using network policies with a unique subnet per node pool is not supported during preview.

To create a node pool with a dedicated subnet, pass the subnet resource ID as an additional parameter when creating a node pool.

Please refer link https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#add-a-node-pool-with-a-unique-subnet-preview.
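For illustration, creating a node pool in its own subnet might look like the following sketch. The resource group and cluster names are the article's examples, the pool name and subnet ID are placeholders, and --vnet-subnet-id is the parameter that carries the subnet resource ID:

```shell
az aks nodepool add \
    --resource-group AKSResourceGroup \
    --cluster-name AK8sCluster \
    --name subnetpool \
    --node-count 3 \
    --vnet-subnet-id <subnet-resource-id>
```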

b. Scale a node pool manually

The AKS cluster created in Azure Kubernetes Service (AKS) – Part 1 has a single node pool. Let's add a second node pool using the az aks nodepool add command.

We will use a node pool named nodepool2 that runs 3 nodes. Based on demand, the number of nodes in a node pool can be scaled up or down.
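The add command referenced above might look like the following sketch, using the article's resource group and cluster names (--no-wait returns immediately while Azure provisions the nodes in the background):

```shell
az aks nodepool add \
    --resource-group AKSResourceGroup \
    --cluster-name AK8sCluster \
    --name nodepool2 \
    --node-count 3 \
    --no-wait
```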

Note:

  1. To ensure the AKS cluster operates reliably and efficiently, there should be at least 2 nodes in the default system node pool.
  2. The name of a node pool must start with a lowercase letter and can only contain alphanumeric characters.
    • For Linux node pools, the length must be between 1 and 12 characters.
    • For Windows node pools, the length must be between 1 and 6 characters.

az aks nodepool scale --resource-group AKSResourceGroup --cluster-name AK8sCluster --name nodepool2 --node-count 3

Let's verify the status of the newly created node pool using the az aks nodepool list command.

In node pool nodepool1 the node count is 2, and in the newly created nodepool2 the node count is 3.

az aks nodepool list --cluster-name AK8sCluster --resource-group AKSResourceGroup

Note: If no VM size is specified when adding a new node pool, the default is Standard_D2s_v3 for Windows node pools and Standard_DS2_v2 for Linux node pools. If no orchestrator version is specified, the node pool defaults to the same Kubernetes version as the control plane.
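Since the Kubernetes version can be changed per node pool after provisioning, upgrading just this pool later might look like the following sketch — the version number here is an assumed example and must be a version supported by your cluster:

```shell
az aks nodepool upgrade \
    --resource-group AKSResourceGroup \
    --cluster-name AK8sCluster \
    --name nodepool2 \
    --kubernetes-version 1.19.7 \
    --no-wait
```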

c. Delete a node pool

If the newly created node pool is no longer required, we can delete it and remove the underlying VM nodes.

Let's use the az aks nodepool delete command to delete the newly created node pool nodepool2.

az aks nodepool delete --cluster-name AK8sCluster --resource-group AKSResourceGroup --name nodepool2

Now let's verify the remaining node pools using the az aks nodepool list command.

az aks nodepool list --resource-group AKSResourceGroup --cluster-name AK8sCluster

Note:

There are no recovery options for data lost when you delete a node pool. If pods can't be scheduled on other node pools, those applications become unavailable. Don't delete a node pool while in-use applications lack data backups or the ability to run on other node pools in your cluster.
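Before deleting a pool, its nodes can be drained so running pods are rescheduled gracefully onto other pools. Here is a sketch using standard kubectl commands — the node name is a placeholder, and AKS labels each node with its pool name in the agentpool label:

```shell
# List the nodes belonging to the pool about to be deleted
kubectl get nodes -l agentpool=nodepool2

# Gracefully evict pods from a node; DaemonSet-managed pods are skipped
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
```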

d. Specify a VM size for a node pool

In the previous examples, we created node pools with the cluster's default VM size, since we did not specify the VM size option while creating them.

A common scenario in production environments is to create node pools with different VM sizes and capabilities.

For example: you may create a node pool that contains nodes with large amounts of CPU or memory, or a node pool that provides GPU support.

Let's create a GPU-based node pool that uses the Standard_NC6 VM size. These VMs are powered by the NVIDIA Tesla K80 card.

Create a node pool using the az aks nodepool add command. Specify the name gpunodepool1, and use the --node-vm-size parameter to specify the Standard_NC6 size:

az aks nodepool add --resource-group AKSResourceGroup --cluster-name AK8sCluster --name gpunodepool1 --node-count 1 --node-vm-size Standard_NC6 --no-wait

az aks nodepool list -g AKSResourceGroup --cluster-name AK8sCluster
