Deploying a K8S Cluster on AWS EC2 Instances

Recently, I needed to deploy a working k8s cluster on top of AWS EC2 Ubuntu-based instances. As always, I jumped at the opportunity to share with the community; I hope someone finds it useful.

Step 0: Prerequisites

To make everything work, you will need an AWS IAM user with the proper permissions, along with an access_key and secret_key. You'll also want to install and configure the aws-cli. Follow the official documentation:
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey
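
Once the CLI is installed, a minimal credential setup looks like this (the key values and region below are placeholders, use your own):

aws configure
# AWS Access Key ID [None]: <your access_key>
# AWS Secret Access Key [None]: <your secret_key>
# Default region name [None]: us-east-2
# Default output format [None]: json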

Step 1: Create the Instances
The easiest way is to use Terraform for instance creation. Here is the code:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-2"
}

locals {
  instance_type = "t2.medium"
}

resource "aws_instance" "control_plane" {
  ami           = "ami-09040d770ffe2224f"
  instance_type = local.instance_type
  count         = 1
  key_name      = "myawesomekey"
  tags = {
    Name     = "control_plane"
    k8s_role = "control_plane"
  }
}

resource "aws_instance" "worker" {
  ami           = "ami-09040d770ffe2224f"
  instance_type = local.instance_type
  count         = 3
  key_name      = "myawesomekey"

  tags = {
    Name     = "worker_${count.index + 1}"
    k8s_role = "worker"
  }
}

output "public_ips" {
  value = {
    control_plane = aws_instance.control_plane.*.public_ip
    workers       = { for idx, ip in aws_instance.worker : "worker_${idx + 1}" => ip.public_ip }
  }
}

You’d want to modify the region, ami, instance_type, count and key_name according to your setup; they are all self-explanatory.
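
If you need a current Ubuntu AMI ID for your region, you can look one up with the aws-cli. A quick sketch, assuming Canonical's publisher account ID and an Ubuntu 22.04 image name pattern:

aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --region us-east-2 --output text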

Save it as main.tf, open your terminal, then run:

terraform init
terraform apply
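
When the apply finishes, Terraform prints the public_ips output; you can re-display it at any time with:

terraform output public_ips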

In case you have never used Terraform and need to install and configure it, you can follow the official guide https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli

Once you have your instances created, you’d need to set up the control plane and the worker nodes.

Step 2: Set Up Ansible

Follow the official documentation to install and configure Ansible
https://docs.ansible.com/ansible/latest/installation_guide/installation_distros.html

Ansible is the best way to do this.
Let’s make sure we use the aws_ec2 Ansible inventory plugin, so we can address the instances by the tags that Terraform created.
Modify your ansible.cfg accordingly and make sure you have boto3 installed:

[inventory]
enable_plugins = aws_ec2
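
If boto3 (and botocore) are missing, install them with pip:

pip install boto3 botocore

You’ll also likely want Ansible to connect as the Ubuntu AMI’s default user with the key pair from the Terraform step; the user and key path below are assumptions, adjust them to your setup:

[defaults]
remote_user = ubuntu
private_key_file = ~/.ssh/myawesomekey.pem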

Create aws_ec2.yaml with the following content:

plugin: aws_ec2
regions:
  - us-east-2
keyed_groups:
  - prefix: role
    key: tags.k8s_role
  - prefix: ''
    key: tags.k8s_role
    separator: ""    
filters:
  instance-state-name: running
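
You can verify that the plugin sees your instances and groups before running anything:

ansible-inventory -i aws_ec2.yaml --graph

You should see role_control_plane and role_worker groups populated with your instances.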

And here is the one playbook to rule them all:

---
- name: Deploy Kubernetes Cluster
  hosts: all
  become: yes
  tasks:
    - name: Update and install prerequisites
      apt:
        name:
          - apt-transport-https
          - curl
          - socat
          - conntrack
        update_cache: yes
        state: latest

    - name: Install Docker
      apt:
        name: docker.io
        state: latest

    - name: Start and enable Docker service
      systemd:
        name: docker
        enabled: yes
        state: started

    - name: Ensure the APT keyrings directory exists
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: '0755'

    - name: Add Kubernetes community GPG key
      ansible.builtin.get_url:
        url: https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key
        dest: /tmp/kubernetes.gpg
        mode: '0644'
      register: download_gpg

    - name: De-armor GPG key
      command: gpg --dearmor -o /etc/apt/keyrings/kubernetes.gpg /tmp/kubernetes.gpg
      when: download_gpg is changed


    - name: Add Kubernetes APT repository
      apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/kubernetes.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb /"
        state: present
        filename: kubernetes


    - name: Update package listings and install Kubernetes components
      apt:
        name:
          - kubelet
          - kubeadm
          - kubectl
        update_cache: yes
        state: latest

    - name: Hold Kubernetes packages
      ansible.builtin.dpkg_selections:
        name: "{{ item }}"
        selection: hold
      loop:
        - kubelet
        - kubeadm
        - kubectl

    - name: Disable swap
      command: swapoff -a
      ignore_errors: true

    - name: Remove swap from fstab
      lineinfile:
        path: /etc/fstab
        regexp: '^.* swap .*'
        line: '# commented out by Ansible to disable swap'
        state: present

    - name: Configure sysctl settings for Kubernetes
      copy:
        dest: /etc/sysctl.d/k8s.conf
        content: |
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1
      notify: reload sysctl

    - name: Enable and start kubelet service
      systemd:
        name: kubelet
        enabled: yes
        state: started

    - name: Configure firewall for necessary Kubernetes ports
      ufw:
        rule: allow
        port: "{{ item }}"
        proto: tcp
      with_items:
        - 6443    # Kubernetes API server
        - 2379:2380 # etcd server client API
        - 10250   # Kubelet API
        - 10255   # Read-only Kubelet API (optional)
      when: inventory_hostname in groups['control_plane']

    - name: Open the overlay network port (flannel VXLAN)
      ufw:
        rule: allow
        port: '8472'
        proto: udp
      when: inventory_hostname in groups['control_plane']

    - name: Reload UFW
      command: ufw reload
      when: inventory_hostname in groups['control_plane']

  handlers:
    - name: reload sysctl
      command: sysctl --system

- name: Initialize Kubernetes control plane
  hosts: role_control_plane
  become: yes
  tasks:
    - name: Initialize the Kubernetes control plane
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      register: kubeadm_init

    - name: Save kubeadm join command to local file
      shell: "echo '{{ kubeadm_init.stdout }}' | grep -A 2 'kubeadm join' | tr -d '\\n' | sed 's/\\\\//g' > join_command.sh"
      delegate_to: localhost
      run_once: true
      become: no

    - name: Set up kubectl for root
      command: "{{ item }}"
      with_items:
        - mkdir -p /root/.kube
        - cp -i /etc/kubernetes/admin.conf /root/.kube/config
        - chown root:root /root/.kube/config

    - name: Install Flannel CNI plugin
      command: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

- name: Set up Kubernetes worker nodes
  hosts: role_worker
  tasks:
    - name: Ensure the join command script is present and correct
      local_action:
        module: stat
        path: "./join_command.sh"
      register: script_stat

    - name: Copy kubeadm join command to worker nodes
      copy:
        src: "./join_command.sh"
        dest: "/tmp/join_command.sh"
        mode: '0755'
      when: script_stat.stat.exists

    - name: Join cluster
      command: bash /tmp/join_command.sh
      args:
        executable: /bin/bash
      become: yes

Step 3: Run the Playbook
Save it as playbook.yaml and run:

ansible-playbook -i aws_ec2.yaml playbook.yaml
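
Once the playbook finishes, you can SSH into the control plane and confirm that all nodes registered (the playbook sets up kubectl for root, and workers may take a minute to go Ready):

sudo kubectl get nodes -o wide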

Step 4: In Case You Don’t Like Ansible
If you prefer running bash scripts on the instances directly, here is the script for the control_plane:

#!/bin/bash

# Step 1: Update and install prerequisites
sudo apt update && sudo apt install -y apt-transport-https curl socat conntrack

# Step 2: Install Docker
sudo apt install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker

# Step 3: Configure UFW (Uncomplicated Firewall) to allow necessary Kubernetes ports
sudo ufw allow 6443/tcp    # Kubernetes API server
sudo ufw allow 2379:2380/tcp # etcd server client API
sudo ufw allow 10250/tcp   # Kubelet API
sudo ufw allow 10255/tcp   # Read-only Kubelet API (optional)
sudo ufw allow 10259/tcp   # kube-scheduler
sudo ufw allow 10257/tcp   # kube-controller-manager
sudo ufw allow 8472/udp    # Overlay Network (flannel VXLAN if using flannel)
sudo ufw reload
# Stop and disable the firewall for now, as additional ports may need to be opened during the installation;
# you can start it back later, since we have already configured the necessary ports.
sudo systemctl stop ufw
sudo systemctl disable ufw

# Step 4: Add the Kubernetes community GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Step 5: Add the Kubernetes APT repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Step 6: Update package listings and install Kubernetes components
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Step 7: Install crictl (required by kubelet)
VERSION="v1.24.2"  # Adjust the version to match your Kubernetes version
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amdoinux-amd64.tar.gz -C /usr/local/bin
rm crictl-$VERSION-linux-amd64.tar.gz

# Step 8: Pull all necessary Kubernetes images required for kubeadm init
sudo kubeadm config images pull

# Step 9: Initialize the Kubernetes control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Step 10: Set up the kubectl configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Step 11: Deploy a pod network to the cluster
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Step 12: Validate kubectl is working
kubectl get pods --all-namespaces
kubectl get nodes

# Step 13: Output the join command to join other nodes to this cluster
sudo kubeadm token create --print-join-command

echo "Kubernetes has been successfully installed and initialized!"

And another one for the worker node:

#!/bin/bash


# Step 1: Update and install prerequisites
sudo apt update && sudo apt install -y apt-transport-https curl socat conntrack

# Step 2: Install Docker
sudo apt install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker

# Step 3: Add the Kubernetes signing key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Step 4: Add the Kubernetes APT repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Step 5: Update package listings and install Kubernetes components
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Step 6: Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Step 7: Configure sysctl settings
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

Make sure to capture the output of the kubeadm token create --print-join-command command on the control plane and run it (with sudo) on the workers to join them to the cluster.
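
The join command has the following shape (the values here are placeholders; use the ones printed on your control plane):

sudo kubeadm join 203.0.113.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>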

You’re done!