Amazon EKS Distro


Amazon EKS Distro (EKS-D) is a Kubernetes distribution based on and used by Amazon Elastic Kubernetes Service (EKS) to create reliable and secure Kubernetes clusters. With EKS-D, you can rely on the same versions of Kubernetes and its dependencies deployed by Amazon EKS. This includes the latest upstream updates, as well as extended security patching support. EKS-D follows the same Kubernetes version release cycle as Amazon EKS, and we provide the bits here. EKS-D provides the same software that has enabled tens of thousands of Kubernetes clusters on Amazon EKS.

What is the difference between EKS (the AWS Kubernetes cloud service) and EKS-D? The main difference is in how they are managed. EKS is a fully managed Kubernetes platform, while EKS-D is available to install and manage yourself. You can run EKS-D on-premises, in a cloud, or on your own systems. EKS-D provides a path to having essentially the same Amazon EKS Kubernetes distribution running wherever you need to run it.

Once EKS-D is running, you are responsible for managing and upgrading it yourself. For end users, however, running applications is the same as with EKS since the two support the same API versions and same set of components.


 

Project Tenets (unless you know better ones)

The tenets of the EKS Distro (EKS-D) project are:

  1. The Source: The goal of the EKS Distro is to be the Kubernetes source for EKS and EKS Anywhere
  2. Simple: Make using a Kubernetes distribution simple and boring (reliable and secure)
  3. Opinionated Modularity: Provide opinionated defaults about the best components to include with Kubernetes but give customers the ability to swap them out
  4. Open: Provide open source tooling backed, validated and maintained by Amazon
  5. Ubiquitous: Enable customers and partners to integrate a Kubernetes distribution in the most common tooling (Kubernetes installers and distributions, infrastructure as code, and more)
  6. Stand Alone: Provided for use anywhere without AWS dependencies
  7. Better with AWS: Enable AWS customers to adopt additional AWS services easily

 

Getting started with EKS-D

Many tools provide installation methods and integrations for EKS Distro; here we are going to describe how to use Kubernetes in Docker (KinD) to run EKS-D on your local machine. KinD, like Minikube and MicroK8s, is a way to host Kubernetes on your local machine, except it doesn’t require additional drivers or a virtual machine.
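If you don’t already have KinD installed, its release binaries can be downloaded directly. The version and platform below are assumptions for illustration; check kind.sigs.k8s.io for the current release, or use a package manager such as Homebrew (brew install kind):

```shell
# Download a KinD release binary (example: v0.11.1 for Linux/amd64)
# and place it on the PATH. Newer releases follow the same URL scheme.
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```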

The first step involves creating a fork of the kbst/kind project from Kubestack. Once the project has been forked, update the Makefile in the root directory with the environment variables, e.g. the RELEASE_BRANCH, VERSION, and SOURCE_URL of the build you want to use.

RELEASE_BRANCH = 1-20
VERSION := v$(subst -,.,$(RELEASE_BRANCH)).4
SOURCE_URL = https://distro.eks.amazonaws.com/kubernetes-${RELEASE_BRANCH}/releases/1/artifacts/kubernetes/${VERSION}/kubernetes-src.tar.gz
GIT_SHA := $(shell echo `git rev-parse --verify HEAD^{commit}`)
IMAGE_NAME = public.ecr.aws/jicowan/kind-eks-d
TEST_IMAGE = ${IMAGE_NAME}:${GIT_SHA}

In this example, I am using Kubernetes v1.20.4. I am also pushing the resulting container image to ECR public. You can find a list of the available releases at https://distro.eks.amazonaws.com/#releases. To get the SOURCE_URL for a release, download the release manifest and run the following yq query:
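The manifest can be fetched with curl. The URL below follows the pattern used on the releases page, with the release branch (kubernetes-1-20) and release number (1) matching the Makefile values above:

```shell
# Fetch the release manifest for release 1 of the kubernetes-1-20 branch.
RELEASE_BRANCH=kubernetes-1-20
RELEASE_NUMBER=1
curl -sSLO "https://distro.eks.amazonaws.com/${RELEASE_BRANCH}/${RELEASE_BRANCH}-eks-${RELEASE_NUMBER}.yaml"
# Produces ./kubernetes-1-20-eks-1.yaml, the file passed to yq.
```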

yq eval '.status.components[] | select(.name == "kubernetes") | .assets[] | select(.name == "kubernetes-src.tar.gz")' ./kubernetes-1-20-eks-1.yaml

The output should look similar to this:

archive:
  sha256: 7b642868c905e41a93beded9968d2c48daef7ecd57815bf0f83ca1337e1f6176
  sha512: 095e57e905a041963689ff9b2d1de30dd6f0344530253cccd4e3d91985091cc37564b95f45c1ed160129306d06f5d2670feb457cbb01e274f5a0c0f3c724f834
  uri: https://distro.eks.amazonaws.com/kubernetes-1-20/releases/1/artifacts/kubernetes/v1.20.4/kubernetes-src.tar.gz
description: Kubernetes source tarball
name: kubernetes-src.tar.gz
type: Archive

Verify that the SOURCE_URL matches the URI in the output from yq. Be sure to leave the RELEASE_BRANCH and VERSION variables intact as they will be populated by the values you specified earlier.
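Since the manifest also publishes checksums for each asset, you can optionally verify the source tarball after downloading it. The hash below is the sha256 value from the yq output above:

```shell
# Verify kubernetes-src.tar.gz against the sha256 listed in the manifest.
EXPECTED=7b642868c905e41a93beded9968d2c48daef7ecd57815bf0f83ca1337e1f6176
echo "${EXPECTED}  kubernetes-src.tar.gz" | sha256sum --check -
# sha256sum exits non-zero (and prints FAILED) if the file doesn't match.
```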

If, like me, you want to push the image to your own registry, you will need to update the main.yml file under GitHub Actions. As I am pushing the resulting image to ECR Public, I need to update the Docker login section with environment variables for my AWS access key ID and secret access key. GitHub Actions will use these secrets to authenticate to ECR Public.

- name: Docker login
  uses: docker/login-action@v1
  with:
    registry: public.ecr.aws
    username: ${{ secrets.AWS_ACCESS_KEY_ID }}
    password: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  env:
    AWS_REGION: us-east-1

Before saving your changes, create GitHub secrets for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Please refer to https://docs.github.com/en/actions/reference/encrypted-secrets if you’re unfamiliar with how to create GitHub secrets. When you save the changes to main.yml, it will trigger a workflow that builds and pushes your image to your registry.
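If you prefer the command line to the web UI, the GitHub CLI can create the same secrets. This sketch assumes gh is installed, authenticated, and run from inside your fork, with the credential values present in your local environment:

```shell
# Create the two repository secrets used by the Docker login step.
# Values are read from local environment variables of the same name.
gh secret set AWS_ACCESS_KEY_ID --body "$AWS_ACCESS_KEY_ID"
gh secret set AWS_SECRET_ACCESS_KEY --body "$AWS_SECRET_ACCESS_KEY"
```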

Next, create a configuration file called kind-eks-d-v1.20.conf for KinD that looks similar to the following:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: eks-d-1-20
nodes:
- role: control-plane
  image: public.ecr.aws/jicowan/kind-eks-d:4e13a3f38c26a14e6a333fc7b8246c02ac4b33b2
- role: worker
  image: public.ecr.aws/jicowan/kind-eks-d:4e13a3f38c26a14e6a333fc7b8246c02ac4b33b2

Be aware you may need to update the image field with the URI of your own image, although you’re free to use the image that I pushed to ECR Public.

Finally, create a KinD cluster by executing the following command:

kind create cluster --config ./kind-eks-d-v1.20.conf

In about 3–5 minutes you’ll have an EKS-D cluster running on your local machine!
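To confirm the cluster came up with the EKS-D build, check the nodes and the reported version. KinD prefixes the kubeconfig context with kind-, so for the cluster name above the context is kind-eks-d-1-20 (an assumption based on KinD's default naming):

```shell
# List the nodes of the new cluster; both should reach the Ready state.
kubectl get nodes --context kind-eks-d-1-20
# Print the server version; EKS Distro builds typically include "eks"
# in the reported git version string.
kubectl version --context kind-eks-d-1-20
```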

This article was updated on Jan 29, 2022

Toyhoshi

I'm Marco Varagnolo, aka Toyhoshi, the author behind Clouday.dev. Curious about everything by nature, I'm a system architect and a huge supporter of the DevOps mindset. My hands are dirty all day long with Kubernetes, containers, and many similar things in between. What else do you need? :-)