Demo of Kubernetes using Flannel over EdgeVPN.io

Introduction

This document outlines a step-by-step demo to deploy a 4-node distributed Kubernetes cluster that uses EdgeVPN.io (Evio) as the underlying virtual network, and Flannel as a pod overlay that works on top of Evio.

The instructions below assume a fresh install of Ubuntu 18.04 LTS on each of the four cluster nodes.

In all nodes: install Evio software and dependencies

On all nodes, go through the Evio installation steps.

In all nodes: configure Evio

Now you need to copy a configuration file to each of your nodes. You can obtain configuration files by requesting a trial account.

The examples below assume you have received a trial account, with configuration files in Trialtest_config.zip. They also assume your Kubernetes manager node has the virtual address 10.10.100.1, and the other nodes use addresses 10.10.100.2, 10.10.100.3, and 10.10.100.4. The hostnames are kubenode1 (manager), kubenode2, kubenode3, and kubenode4, respectively.

Run the following commands on each node, making sure you copy the proper configuration file (e.g. config-001.json for kubenode1 at 10.10.100.1, and so on):

sudo apt-get install unzip
unzip Trialtest_config.zip
sudo cp config-001.json /etc/opt/evio/config.json
sudo systemctl start evio

Check that your nodes are up and running, and that they can ping each other (ping by IP address, not by hostname, unless you have a DNS system in place for your cluster).
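For example, on kubenode1 you can confirm the Evio service is active and reach the other nodes (the addresses below match the trial setup assumed in this demo):

sudo systemctl status evio
ping -c 3 10.10.100.2
ping -c 3 10.10.100.3
ping -c 3 10.10.100.4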

In all nodes: install Kubernetes and dependencies

Repeat the steps below in each node in your cluster:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt-get update -y
sudo apt-get install -y openvswitch-switch \
                        python3 python3-pip python3-venv \
                        apt-transport-https \
                        ca-certificates \
                        curl git \
                        software-properties-common \
                        containerd.io \
                        docker-ce-cli \
                        docker-ce 
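Before proceeding, it is worth confirming that the Docker daemon came up after the install (docker-ce starts the docker service by default):

sudo systemctl is-active docker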

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
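Note that these bridge sysctl keys only take effect when the br_netfilter kernel module is loaded; it is usually loaded as a side effect of installing Docker, but you can load it explicitly and persist it across reboots (a minimal sketch; the file name k8s.conf is just a convention):

sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf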

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

sudo swapoff -a
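Note that swapoff -a only disables swap until the next reboot. To keep swap disabled persistently, comment out the swap entries in /etc/fstab, for example with the sed one-liner below (review your fstab before running it):

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab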

In all nodes: configure Kubernetes to use Evio IP address

On each node, you need to edit the file below to configure the kubelet with that node's Evio virtual IP address (e.g. 10.10.100.1 for the manager node kubenode1):

sudo vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

You need to edit a single line in this file, adding the corresponding --node-ip to KUBELET_CONFIG_ARGS, as shown below:

Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --node-ip=10.10.100.1"
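If you prefer to script this edit, a sed one-liner such as the following can apply it (a sketch, assuming the stock 10-kubeadm.conf generated by kubeadm; set NODE_IP to each node's own Evio address):

NODE_IP=10.10.100.1
sudo sed -i "s|--config=/var/lib/kubelet/config.yaml|--config=/var/lib/kubelet/config.yaml --node-ip=$NODE_IP|" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf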

In kubenode1 only, start up the Kubernetes manager

In kubenode1 only, run the commands below. The --pod-network-cidr value of 10.244.0.0/16 matches Flannel's default pod network, and --apiserver-advertise-address pins the API server to the manager's Evio address:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.10.100.1

Then, run the following to configure kubectl access for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
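You can now confirm that kubectl can reach the API server; since admin.conf was copied into your user's .kube directory, plain kubectl (without sudo) works here:

kubectl cluster-info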

In kubenode2..4, join the cluster

Now, copy the join command shown in the manager's output and run it (as root or with sudo) on the other nodes to join the cluster. The command will look like this (the token and hash will be different for your setup):

sudo kubeadm join 10.10.100.1:6443 --token yxigqc.vwmi19vbiedklgp7     --discovery-token-ca-cert-hash sha256:590b6698140222b480549e0c7f949ecb4db96c961f388a6377765efe8fde35f1

In kubenode1, check that the cluster nodes are connected

Verify that all nodes are connected to the cluster:

sudo kubectl get nodes -o wide

In kubenode1 only, deploy Flannel

First, download the template kube-flannel.yml. You can check out the latest version with git (the file is found under Documentation in the flannel repository), or download it with wget:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Now edit this file to add an --iface= argument to the container command "/opt/bin/flanneld" in the kube-flannel.yml deployment file, as shown in the example below. (Note: make sure you edit the entries that correspond to the architecture you are using; there can be multiple containers: entries, and to be safe, you should add --iface to all of them.)

Add a new line with --iface= followed by the name of the bridge created by Evio for your host overlay; if you use a trial account, the name is "brl101000F", as below:

containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=brl101000F
        resources:
          requests:
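If you are unsure of the bridge name on your nodes, you can list the bridges present on a host with iproute2 (installed by default on Ubuntu); the Evio bridge name starts with brl:

ip -br link show type bridge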

Now deploy the Flannel plugin:

sudo kubectl apply -f kube-flannel.yml 

You can verify that Flannel gets deployed on all nodes with:

sudo kubectl get pods --all-namespaces -o wide

In kubenode1, deploy test containers

In the manager node, paste the contents below into a file named testpod2.yaml. This runs a simple dnsutils test pod on kubenode2:

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils2
  namespace: default
spec:
  nodeName: kubenode2
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

Then deploy with:

sudo kubectl apply -f testpod2.yaml

To check that the pod is running:

sudo kubectl get pods -o wide

You should see an output like this - the IP address is a Flannel IP address:

NAME        READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
dnsutils2   1/1     Running   0          48s   10.244.2.3   kubenode2         <none>           <none>

You can copy testpod2.yaml to testpod3.yaml and testpod4.yaml, then edit these files to change the pod name to dnsutils3 and dnsutils4, and the nodeName to kubenode3 and kubenode4, respectively. For example, testpod3.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils3
  namespace: default
spec:
  nodeName: kubenode3
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
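Alternatively, you can generate both files with a quick sed substitution (a sketch, assuming the testpod2.yaml shown earlier):

for n in 3 4; do
  sed "s/dnsutils2/dnsutils$n/; s/kubenode2/kubenode$n/" testpod2.yaml > testpod$n.yaml
done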

Then deploy with:

sudo kubectl apply -f testpod3.yaml
sudo kubectl apply -f testpod4.yaml

Check that the pods are running:

sudo kubectl get pods -o wide

You should see an output like this - the IP address is a Flannel IP address:

NAME        READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
dnsutils2   1/1     Running   0          14m     10.244.2.3   kubenode2   <none>           <none>
dnsutils3   1/1     Running   0          4m30s   10.244.1.2   kubenode3   <none>           <none>

Now you can log in to one of the pods and ping the other (replace the 10.244.1.2 IP address below with the one assigned by Flannel in your setup):

sudo kubectl exec -it -n default dnsutils2 -- /bin/sh
/ # ping 10.244.1.2
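Since the dnsutils image ships with DNS tools, you can also check that cluster DNS resolves service names from inside the pod (assuming the default CoreDNS/kube-dns deployment is running):

/ # nslookup kubernetes.default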

Miscellaneous

To rejoin a node

If a node needs to rejoin the cluster, follow the steps below.

First, on the manager node: 1) create a join token, and 2) delete the node (in this example, the rejoining node is named mynode2):

sudo kubeadm token create --print-join-command
sudo kubectl delete node mynode2

Then, on the cluster node that is rejoining:

sudo swapoff -a
sudo rm /etc/kubernetes/kubelet.conf
sudo rm /etc/kubernetes/pki/ca.crt
sudo systemctl daemon-reload
sudo systemctl restart kubelet

Then, paste the kubeadm join command generated on the manager above (the join command looks like this):

sudo kubeadm join 10.10.100.1:6443 --token ...