I have wanted to get some hands-on experience with kubernetes for a long time. That day is finally here, and I am looking forward to testing all sorts of commands on my own cluster.
Kubernetes – why?
I have used docker for development in the past years and at this moment, I wouldn’t go back to a “dockerless” development experience. In my opinion, the two main advantages are the following:
- It reduces the differences between individual development environments and between development, staging and production environments, so it solves much of the “But it worked on my machine” situations;
- With docker, you can simply start developing, without having to locally install and configure node, python or whatever else you want to use (a small example follows this list).
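To illustrate the second point, here is a minimal sketch of running a node script without node being installed on the host; the node:lts tag and the index.js entry point are just assumptions for the example:
# run a local script with the node binary shipped in the official image;
# nothing besides docker itself is installed on the host
docker run --rm -it -v "$PWD":/app -w /app node:lts node index.js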
kubernetes comes as a tool to help orchestrate the containers you create with docker. Minimally explained: it offers namespaces where you can run different projects at the same time, and a system to check their status/health. Other tools are available, but apparently kubernetes solves the problem right and, because of its popularity, there is a lot of information out there.
Some interesting information on the matter:
- https://medium.com/better-programming/why-kubernetes-bbb7d66fccf5
- https://www.youtube.com/watch?v=QJ4fODH6DXI
- https://www.youtube.com/watch?v=1xo-0gCVhTU
k3s
k3s is a lightweight kubernetes distribution, quite easy to install, upgrade and thus use. As a developer, it suits my needs exactly. I want to get familiar with the commands, do some trial and error with minimal cost in time and money, and probably create a helper environment to assist with projects.
You can learn more about it here:
- https://k3s.io/
- https://www.youtube.com/watch?v=-HchRyqNtkU
- https://www.youtube.com/watch?v=2LNxGVS81mE
Setup on Alpine
I have previously set up Alpine on an older computer and I will use that installation for this experiment.
Sources:
- https://kauri.io/38-install-and-configure-a-kubernetes-cluster-with/418b3bc1e0544fbc955a4bbba6fff8a9/a
- https://teada.net/k3s-on-alpine-linux/
- https://github.com/bbruun/k3s-getting-started
- https://dzone.com/articles/lightweight-kubernetes-k3s-installation-and-spring
- https://medium.com/@yannalbou/k3d-k3s-k8s-perfect-match-for-dev-and-testing-896c8953acc0
Disable swap
For kubernetes performance reasons, it is recommended that you disable swap:
swapoff -a
sudo nano /etc/fstab
and disable the swap partition
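If you prefer not to edit the file by hand, a one-liner run as root could achieve the same result, assuming the swap entry in /etc/fstab actually contains the word "swap":
# comment out every /etc/fstab line mentioning swap, so it stays disabled after a reboot
sed -i '/swap/ s/^/#/' /etc/fstab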
This experiment does not aim for the highest performance but for the easiest path to a working and usable demo, so we will not be doing this. If you want to know more, you can read this page:
https://serverfault.com/questions/881517/why-disable-swap-on-kubernetes
Enable alpine cgroups
Use the following command to enable the cgroups service at boot:
rc-update add cgroups default
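rc-update only adds the service to the default runlevel, so it takes effect on the next boot; if you want to avoid rebooting, you can also start it right away with the standard OpenRC commands:
# start the cgroups service immediately and confirm it is running
rc-service cgroups start
rc-service cgroups status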
Read more:
- https://en.wikipedia.org/wiki/Cgroups
- https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
- https://www.youtube.com/watch?v=el7768BNUPw
Install
Installing k3s is as easy as running:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644" sh -
The output of this command contains some important information, most notably the location of the uninstall script. In our case it looked like this:
[INFO] Finding release for channel stable
[INFO] Using v1.18.3+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/rancher/k3s/k3s.env
[INFO] openrc: Creating service file /etc/init.d/k3s
[INFO] openrc: Enabling k3s service for default runlevel
[INFO] openrc: Starting k3s
* Caching service dependencies … [ ok ]
* Starting k3s …
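Since the installer registers k3s as an OpenRC service on Alpine, a quick sanity check once it finishes could be:
# check the OpenRC service state
rc-service k3s status
# after a short while the node should be listed as Ready
kubectl get nodes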
The --write-kubeconfig-mode 644 flag sets insecure permissions on the kubeconfig file. We will do this as a quick solution, but it is highly recommended to copy the configuration file to your local machine or at least run the problematic commands with sudo.
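Copying the configuration to your own machine could look roughly like this; the root user, the server IP placeholder and the assumption that ~/.kube/config holds only this cluster are all part of the sketch:
# on your workstation: fetch the kubeconfig generated by k3s
scp root@<ip of your server>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# point it at the server instead of 127.0.0.1, then test
sed -i 's/127.0.0.1/<ip of your server>/' ~/.kube/config
kubectl get nodes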
The command above installs k3s as a server/agent combination. You can use the following parameters to install only the server: --disable-agent --tls-san <ip of your server>. Then run:
cat /var/lib/rancher/k3s/server/node-token
to see the token of your installation, and join an agent from another machine with:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --write-kubeconfig-mode 644 --server https://<ip of your server previously installed>:6443 --token=<string copied from the output above>" sh -
Commands you can run and learn more about:
k3s --help
crictl --help
kubectl get nodes -o wide
The first deployment – `Hello world!` nginx
This was completely new territory for me, but I was excited to figure it out. The task: start a simple nginx deployment which will be exposed to the outside world.
mkdir ~/nginx-deployment && cd ~/nginx-deployment
nano nginx-deployment.yml
and add the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
nano nginx-ingress.yml
and add the following:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path:
nano nginx-service.yml
and add the following:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
  selector:
    app: nginx
Note: you should change nginx.example.com to a different host, or update /etc/hosts so that it properly resolves.
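For a quick local test, pointing the name at your node in /etc/hosts on the machine you run curl from is enough; a small sketch, with <node ip> standing in for the IP of the k3s node:
# make nginx.example.com resolve to the cluster node on this machine only
echo "<node ip>  nginx.example.com" | sudo tee -a /etc/hosts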
Now you should run:
kubectl create namespace nginx
kubectl -n nginx create -f nginx-deployment.yml
kubectl -n nginx create -f nginx-service.yml
kubectl -n nginx create -f nginx-ingress.yml
If it was successful, then curl http://nginx.example.com should show the HTML of the nginx default page. Other useful commands to begin your journey with:
kubectl --namespace nginx get deployments
kubectl --namespace nginx get pods
kubectl --namespace nginx get ingresses
kubectl --namespace nginx get services
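Beyond listing resources, a few commands I found handy while poking at this first deployment; <pod name> stands for whichever name kubectl get pods printed for you:
# details and recent events for one of the pods
kubectl --namespace nginx describe pod <pod name>
# logs of the nginx container inside it
kubectl --namespace nginx logs <pod name>
# scale the deployment to three replicas
kubectl --namespace nginx scale deployment nginx --replicas=3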
Next step: automation
In the next articles, we will experiment with automating some tasks. As stated before, the aim of this article and the following ones is to provide a development environment that eases the day-to-day tasks of a programmer.
Uninstall
As mentioned above, the uninstall script path was displayed in the installation output. In our case it was the following:
/usr/local/bin/k3s-uninstall.sh
Sources
Over time, I have watched a lot of videos and tutorials. Even if they were not used directly, they were very good sources of information. I will mention some of them here, with the same sentiment of gratitude as always. A big thank you to their creators!
- https://www.youtube.com/watch?v=hMr3prm9gDM
- https://ahmermansoor.blogspot.com/2019/05/install-lightweight-kubernetes-k3s-with-k3os.html
- https://medium.com/@rboulanouar/hands-on-k3os-k3s-cluster-5fe6c3497a1e
- https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
- https://rancher.com/docs/k3s/latest/en/installation/kube-dashboard/
- https://rancher.com/docs/k3s/latest/en/quick-start/
- https://rancher.com/docs/k3s/latest/en/installation/private-registry/