Alpine and k3s – a lightweight Kubernetes experience

I have wanted some hands-on experience with Kubernetes for a long time.

That day is finally here, and I am looking forward to trying all sorts of commands on my own cluster.

Kubernetes – why?

I have used docker for development in the past years and, at this point, I wouldn’t go back to a “dockerless” development experience. In my opinion, the two main advantages are the following:

  • It reduces the differences between developers’ machines, and between development, staging and production environments. So, it solves much of the “But it worked on my machine” situations;
  • With docker, you can simply start developing, without having to locally install and configure node, python or whatever else you want to use.

Kubernetes comes as a tool to help orchestrate the containers you create with docker. Minimally explained: it offers namespaces where you can run different projects at the same time, and a system to check their status and health. Other tools are available, but Kubernetes apparently solves the problem well, and because of its popularity there is a lot of information out there.

k3s is a lightweight Kubernetes distribution, quite easy to install, upgrade and thus use. As a developer, it suits my needs exactly. I want to get familiar with the commands, do some trial and error with minimal time and money costs, and probably create a helper environment to assist with projects.

This is why we should go lightweight, in the beginning 🙂

Setup on Alpine

I have previously set up Alpine on an older computer and I will use that installation for this experiment.


Disable swap

For Kubernetes performance reasons, it is recommended that you disable swap:

  • swapoff -a
  • sudo nano /etc/fstab and comment out the swap partition entry
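The two steps above can also be sketched as a small script. The helper name is my own, and the sed pattern assumes a typical fstab entry where the filesystem type column is "swap", so double-check /etc/fstab afterwards:

```shell
#!/bin/sh
# Sketch of the swap-disable steps (assumed run as root on the Alpine host).

# Comment out every fstab line whose filesystem type is "swap",
# so the partition stays disabled after a reboot.
disable_swap_entries() {
  sed -i '/[[:space:]]swap[[:space:]]/s/^/#/' "$1"
}

swapoff -a 2>/dev/null || true   # stop using swap immediately; ignore errors if none is active
if [ -w /etc/fstab ]; then
  disable_swap_entries /etc/fstab
fi
```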

This experiment does not aim for the highest performance, but the easiest path to get a working and usable demo, so we will not be doing this. If you want to know more, you can access this page:

Enable Alpine cgroups

Use the following command to enable the cgroups service at boot:

rc-update add cgroups default

You can also start it right away, without rebooting:

rc-service cgroups start

Install k3s

Installing k3s is as easy as running:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644" sh -

The output of this command contains some important information, in particular the location of the uninstall script. In our case:

[INFO] Finding release for channel stable
[INFO] Using v1.18.3+k3s1 as release
[INFO] Downloading hash
[INFO] Downloading binary
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/rancher/k3s/k3s.env
[INFO] openrc: Creating service file /etc/init.d/k3s
[INFO] openrc: Enabling k3s service for default runlevel
[INFO] openrc: Starting k3s
* Caching service dependencies … [ ok ]
* Starting k3s …

The --write-kubeconfig-mode 644 option sets insecure permissions on the kubeconfig file. We will do this as a quick solution, but it is highly recommended to copy the configuration file locally or at least run the affected commands with sudo.
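A sketch of the safer alternative: copy the kubeconfig to your own user with owner-only permissions and point kubectl at it. The helper name and destination path below are my own choices; on a k3s server the kubeconfig itself lives at /etc/rancher/k3s/k3s.yaml:

```shell
#!/bin/sh
# Hypothetical helper: install a private copy of a kubeconfig file.
install_kubeconfig() {
  src="$1"
  dest="$2"
  mkdir -p "$(dirname "$dest")"   # create ~/.kube (or similar) if missing
  cp "$src" "$dest"
  chmod 600 "$dest"               # readable and writable by the owner only
}

# On the k3s server you would run (as root):
#   install_kubeconfig /etc/rancher/k3s/k3s.yaml "$HOME/.kube/config"
#   export KUBECONFIG="$HOME/.kube/config"
```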

The command above installs k3s as a combined server/agent. To install only the server, you can use the parameters --disable-agent --tls-san <ip of your server> and then:

  • cat /var/lib/rancher/k3s/server/node-token to see the token of your installation;
  • curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --write-kubeconfig-mode 644 --server https://<ip of your server previously installed>:6443 --token=<string copied from the output above>" sh -

Commands you can run and learn more about:

  • k3s --help
  • crictl --help
  • kubectl get nodes -o wide

The first deployment – `Hello world!` nginx

This was completely new territory for me, but I was excited to figure it out. The task: start a simple nginx deployment which will be exposed to the outside world.

mkdir ~/nginx-deployment && cd ~/nginx-deployment

nano nginx-deployment.yml and add the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80

nano nginx-ingress.yml and add the following:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host:
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80

nano nginx-service.yml and add the following:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
  selector:
    app: nginx

Note: you should set the ingress host to a domain you control, or update /etc/hosts so it resolves properly.

Now you should run:

  • kubectl create namespace nginx
  • kubectl -n nginx create -f nginx-deployment.yml
  • kubectl -n nginx create -f nginx-service.yml
  • kubectl -n nginx create -f nginx-ingress.yml

If it was successful, then a curl against your host should return the HTML of the nginx default page. Other useful commands to begin your journey with:

  • kubectl --namespace nginx get deployments
  • kubectl --namespace nginx get pods
  • kubectl --namespace nginx get ingresses
  • kubectl --namespace nginx get services
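The curl check mentioned above can be wrapped in a tiny helper. The function name and the example host below are made up; the check simply greps the response for the default nginx welcome text, so it assumes the stock nginx index page:

```shell
#!/bin/sh
# Hypothetical check: succeed when the response looks like the nginx default page.
looks_like_nginx() {
  grep -q "Welcome to nginx"
}

# Usage (replace example.local with the host set in your ingress):
#   curl -s -H "Host: example.local" http://localhost/ | looks_like_nginx \
#     && echo "ingress is serving nginx"
```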

Next step: automation

In the next articles, we will experiment with automating some tasks. As stated before, the aim of this article and the following ones is to provide a development environment that would ease the day-to-day tasks of a programmer.


Uninstall

As mentioned above, the path of the uninstall script was displayed after the installation. In our case it was the following:

/usr/local/bin/k3s-uninstall.sh

Over time, I have watched a lot of videos and tutorials. Even if they were not used directly, they were very good sources of information. I will mention some of them here, with the same sentiment of gratitude as always. A big thank you to their creators!