
AWS EKS Upgrade 1.18 to 1.21

Introduction

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service from AWS. We rely on EKS to run our APIs because of the security, reliability, and scalability it offers. It provides high availability and scalability for the control plane by running it across multiple availability zones, eliminating a single point of failure.

Because EKS is a fully managed service, each EKS release delivers new features, design updates, and bug fixes. It is therefore recommended to update the EKS cluster roughly every six months, when a new EKS version is released, unless the application must run on a specific Kubernetes version.

This post explains the steps to upgrade your EKS cluster from v1.18 to v1.21 with zero downtime.


EKS Upgrade (v1.18 to v1.19) – Steps

Below are the detailed steps to upgrade the cluster from v1.18 to v1.19 and migrate applications to the new worker nodes:

1. Upgrade Control Plane

2. Upgrade kube-proxy version

3. Upgrade CoreDNS version

4. Upgrade Amazon VPC CNI version

5. Create PDB for all services

6. Edit Deployment for application

7. Update Node Group in AWS

Step 1: Update Control Plane

Before updating the control plane, check the version of your Kubernetes cluster and worker nodes. If your worker node version is older than the control plane version, you must update your worker nodes first and only then proceed with updating the control plane.

Check the Kubernetes control plane and worker node versions with the following commands:

kubectl version --short

kubectl get nodes

Since the Pod Security Policy (PSP) admission controller is enabled by default on Kubernetes 1.13 and later versions of EKS, make sure a proper pod security policy is in place before updating the Kubernetes version on the control plane.

Check the default security policy using the command below:

kubectl get psp eks.privileged

After applying these changes, update the cluster from the AWS console.

1. Log in to the AWS console and click update now on the EKS cluster that you want to update. While the update is in progress you cannot make any changes to the cluster, and no scheduling onto the nodes will take place. It is therefore preferable to do this activity during non-peak, non-business hours.

2. After the update, the IP addresses behind the API server endpoint may change, so if any deployment depends on these IPs, you must replace the old IPs with the new ones.

Once the cluster is upgraded, update the EKS version in the Terraform configuration as well.
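If you prefer the CLI to the console, the same update can be triggered with the AWS CLI. This is a sketch: my-cluster is a placeholder for your cluster name, and describe-update takes the update ID returned by the first command:

aws eks update-cluster-version --name my-cluster --kubernetes-version 1.19

aws eks describe-update --name my-cluster --update-id <update-id>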

Step 2: Update kube-proxy version

We will patch the kube-proxy daemonset to use the image that corresponds to the cluster’s Region and current Kubernetes version. Retrieve the kube-proxy version:

kubectl get daemonset kube-proxy --namespace kube-system -o=jsonpath='{$.spec.template.spec.containers[:1].image}'
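The output is the full image URI, for example (the account ID, Region, and tag will vary with your cluster):

602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.18.8-eksbuild.1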

Update the kube-proxy image (check the reference below for the latest recommended tag for each version):

For 1.19:

kubectl set image daemonset.apps/kube-proxy -n kube-system kube-proxy=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.19.6-eksbuild.2

For 1.20:

kubectl set image daemonset.apps/kube-proxy -n kube-system kube-proxy=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.20.4-eksbuild.2

For 1.21:

kubectl set image daemonset.apps/kube-proxy -n kube-system kube-proxy=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.21.2-eksbuild.2
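After setting the image, confirm that the daemonset rolled out cleanly:

kubectl rollout status daemonset kube-proxy -n kube-system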

Ref: https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html

Step 3: Upgrade CoreDNS version

Check that the cluster is running CoreDNS:

kubectl get pod -n kube-system -l k8s-app=kube-dns

Describe CoreDNS version:

kubectl describe deployment coredns --namespace kube-system | grep Image | cut -d "/" -f 3

Retrieve CoreDNS version and URL using the command below:

kubectl get deployment coredns --namespace kube-system -o=jsonpath='{$.spec.template.spec.containers[:1].image}'

Replace the CoreDNS version and URL:

For 1.19:

kubectl set image --namespace kube-system deployment.apps/coredns coredns=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/coredns:v1.8.0-eksbuild.1

For 1.20:

kubectl set image --namespace kube-system deployment.apps/coredns coredns=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/coredns:v1.8.3-eksbuild.1

For 1.21:

kubectl set image --namespace kube-system deployment.apps/coredns coredns=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/coredns:v1.8.4-eksbuild.1

Ref: https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
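As with kube-proxy, verify that the new image rolled out:

kubectl rollout status deployment coredns -n kube-system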

Step 4: Upgrade Amazon VPC CNI version

Check the version of the VPC CNI:

kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2

The output should look like this:

amazon-k8s-cni-init:v1.7.5-eksbuild.1

amazon-k8s-cni:v1.7.5-eksbuild.1

Update VPC CNI version:

curl -o aws-k8s-cni.yaml https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.10/config/master/aws-k8s-cni.yaml

For the Oregon region (us-west-2):

kubectl apply -f aws-k8s-cni.yaml

For other regions (for example, Singapore), first replace the Region in the manifest, then apply it:

sed -i -e 's/us-west-2/ap-southeast-1/' aws-k8s-cni.yaml

kubectl apply -f aws-k8s-cni.yaml
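After applying the manifest, re-run the version check to confirm the CNI pods are on the new image:

kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2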

Ref: https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html

Step 5: Create PDB for all the services

Create a pdb.yml file like the one below for each service:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: <service-name>-pdb
  namespace: default
spec:
  minAvailable: 100%
  selector:
    matchLabels:
      app.kubernetes.io/name: <service-name>
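Apply the file and confirm the budget is in place (the ALLOWED DISRUPTIONS column shows how many pods may be evicted at a time):

kubectl apply -f pdb.yml

kubectl get pdb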

Step 6: Edit Deployment for application

We can use an open-source tool called kubent (kube-no-trouble) to check which deployment YAMLs need to be updated for the version we are going to upgrade to:

sh -c "$(curl -sSL https://git.io/install-kubent)"

kubent --target-version 1.19.6

Change the API version for all the affected services in their deployment YAMLs, redeploy, and re-run the same command until it no longer reports any deprecated APIs.

Ref: https://github.com/doitintl/kube-no-trouble
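For example, one common change in this upgrade window is the Ingress API: manifests written against the deprecated extensions/v1beta1 API need to move to networking.k8s.io/v1, which also changes the backend shape. A minimal sketch, with <service-name> and the port number as placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <service-name>
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <service-name>
                port:
                  number: 80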

Step 7: Update Node Group in AWS

Go to the EKS cluster in the AWS console and, under Configuration → Compute, click update now for the node group. Choose Rolling update under Update strategy and wait for the upgrade to complete.
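The same rolling update can also be started from the AWS CLI (a sketch; my-cluster and my-nodegroup are placeholders):

aws eks update-nodegroup-version --cluster-name my-cluster --nodegroup-name my-nodegroup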

If you get a PodEvictionFailure error, the rolling update could not evict some pods within the timeout, usually because a PodDisruptionBudget is blocking the eviction.
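If the eviction failure is expected and a brief disruption is acceptable, the node group update can be forced from the CLI; note that --force bypasses PodDisruptionBudgets:

aws eks update-nodegroup-version --cluster-name my-cluster --nodegroup-name my-nodegroup --force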

EKS Upgrade (v1.19 to v1.20) – Steps

Follow the same steps described above for v1.18 to v1.19, using the v1.20 image versions shown in Steps 2 and 3.

EKS Upgrade (v1.20 to v1.21) – Steps

Follow the same steps described above for v1.18 to v1.19, using the v1.21 image versions shown in Steps 2 and 3.
