My homelab

Purpose of a homelab

Whether it is for trying out new technologies, automating deployments or mastering DevOps tools, a homelab is an ideal playground. Mine lets me experiment in complete freedom, without fear of breaking a production environment. It is a learning space where every mistake becomes a lesson, and every success one more skill.

For system administrators and DevOps enthusiasts, having such a lab at home is a concrete way to improve, to innovate and to keep up with current IT practices. Here is how mine is organised and what it brings me day to day.

The machine

My homelab consists of a single Fedora machine with:

  • a Ryzen 5 1600X (6 physical cores, 12 threads);
  • 64 GB of RAM;
  • a 500 GB SSD for the system;
  • an 8 TB RAID 10 array for everything else.

Architecture

To give myself as much freedom as possible, Incus is installed on the Fedora machine; it lets me create virtual machines and containers so that experiments are not run directly on the host itself.

Among these virtual machines, three matter most: the ones that make up the Kubernetes cluster.

/medias/machine.drawio.svg

Additional services

An NFS server also runs on the "host" machine to provide storage to Kubernetes; we will come back to this later.
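
As a minimal sketch of what that looks like on the host (the exported path and the subnet below are placeholders, not the actual values of my setup):

# on the Fedora host: install and enable the NFS server
sudo dnf install -y nfs-utils
sudo systemctl enable --now nfs-server

# export a directory from the RAID array to the Incus bridge subnet (placeholder values)
echo '/data/k8s-nfs 10.1.1.0/24(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -rav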

Setting up the K8s cluster

Creating the virtual machines (by hand)

Create a new Incus project for Kubernetes

incus project create kubernetes
incus project switch kubernetes

Create a new profile for the Kubernetes nodes

incus profile create kubenode
incus profile edit kubenode
  name: kubenode
  description: Profile for kubernetes cluster node
  project: kubernetes
  config:
    boot.autostart: "true"
    linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
    security.nesting: "true"
    security.privileged: "true"
    limits.cpu: "4"
    limits.memory: "6GiB"
    cloud-init.vendor-data: |
      #cloud-config
      users:
        - name: kubeadmin
          gecos: kubeadmin
          sudo: ALL=(ALL) NOPASSWD:ALL
          groups: wheel, root
          lock_passwd: false
          ssh_authorized_keys:
            -  ssh-ed25519 ... evrardve@hostname
          passwd: "<hash linux mot de passe>"
      packages:
        - openssh-server
      runcmd:
        - systemctl enable --now sshd
        - systemctl restart sshd      

This profile factors out the configuration shared by the virtual machines that will make up the K8s cluster, such as the amount of RAM, the number of CPUs and a cloud-init block. The cloud-init block configures the VM's admin user and installs the SSH server.

Do not forget the #cloud-config comment at the top, otherwise cloud-init will ignore the configuration!

Then create the 3 virtual machines

incus launch images:fedora/43/cloud kube-main \
  --vm \
  --profile kubenode \
  --project kubernetes \
  --device eth0,nic,network=incusbr0,name=eth0,ipv4.address=10.1.1.100

incus launch images:fedora/43/cloud kube-worker1 \
  --vm \
  --profile kubenode \
  --project kubernetes \
  --device eth0,nic,network=incusbr0,name=eth0,ipv4.address=10.1.1.101

incus launch images:fedora/43/cloud kube-worker2 \
  --vm \
  --profile kubenode \
  --project kubernetes \
  --device eth0,nic,network=incusbr0,name=eth0,ipv4.address=10.1.1.102

incus start kube-main
incus start kube-worker1
incus start kube-worker2
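
A few commands I find handy at this point to confirm that the nodes came up correctly (assuming the kubernetes project is still the active one):

# the three VMs should be RUNNING with their static IPs
incus list

# wait for cloud-init to finish inside a node
incus exec kube-main -- cloud-init status --wait

# confirm that the kubeadmin user and sshd are in place
ssh kubeadmin@10.1.1.100 hostname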

Creating the virtual machines (with OpenTofu)

Source on git

terraform {
  required_providers {
    incus = {
      source  = "lxc/incus"
      version = "0.3.1"
    }
  }
}

provider "incus" {
}

resource "incus_project" "kubernetes" {
  name        = "kubernetes"
  description = "Kubernetes project"

  config = {
    "features.storage.volumes" = false
    "features.images"          = false
    "features.profiles"        = false
    "features.storage.buckets" = false
  }
}

locals {
  ssh_public_key = trimspace(file(pathexpand("~/.ssh/id_ed25519.pub")))
}

locals {
  kubeadmin_password_hash = trimspace(file("./kubeadmin_password_hash"))
}
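
The kubeadmin_password_hash file simply holds a crypt-style hash of the kubeadmin password; one way to generate it, assuming openssl is available:

# prompts for the password and writes a SHA-512 crypt hash next to the tofu files
openssl passwd -6 > kubeadmin_password_hash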

data "template_file" "cloud_init" {
  template = file("${path.module}/files/cloud-init.yaml")
  vars = {
    ssh_public_key = local.ssh_public_key
  }
}

resource "incus_profile" "kubenode" {
  name        = "kubenode"
  project     = "kubernetes"
  description = "Kubernetes lab node"

  depends_on = [
    incus_project.kubernetes
  ]

  config = {
    "security.nesting"    = "true"
    "security.privileged" = "true"
    "limits.cpu"         = "4"
    "limits.memory"      = "6GiB"
    "limits.memory.swap" = "false"
    "boot.autostart"     = "true"
    "cloud-init.vendor-data" = templatefile(
      "${path.module}/files/cloud-init.yaml", { ssh_public_key = local.ssh_public_key, kubeadmin_password_hash = local.kubeadmin_password_hash }
      )
  }

  device {
    name = "eth0"
    type = "nic"
    properties = {
      network = "incusbr0"
      name    = "eth0"
    }
  }

  device {
    name = "root"
    type = "disk"
    properties = {
      pool = "default"
      path = "/"
    }
  }
}

resource "incus_instance" "kube-main" {
  name  = "kube-main"
  type  = "virtual-machine"
  image = "images:fedora/43/cloud"
  profiles = [incus_profile.kubenode.name]
  project  = incus_project.kubernetes.name

  depends_on = [
    incus_profile.kubenode
  ]

  device {
    name = "eth0"
    type = "nic"
    properties = {
      network        = "incusbr0"
      name           = "eth0"
      "ipv4.address" = "10.1.1.100"
    }
  }
}

resource "incus_instance" "kube-worker1" {
  name     = "kube-worker1"
  type  = "virtual-machine"
  image    = "images:fedora/43/cloud"
  profiles = [incus_profile.kubenode.name]
  project  = incus_project.kubernetes.name

  depends_on = [
    incus_profile.kubenode
  ]

  device {
    name = "eth0"
    type = "nic"
    properties = {
      network        = "incusbr0"
      name           = "eth0"
      "ipv4.address" = "10.1.1.101"
    }
  }
}

resource "incus_instance" "kube-worker2" {
  name     = "kube-worker2"
  type  = "virtual-machine"
  image    = "images:fedora/43/cloud"
  profiles = [incus_profile.kubenode.name]
  project  = incus_project.kubernetes.name

  depends_on = [
    incus_profile.kubenode
  ]

  device {
    name = "eth0"
    type = "nic"
    properties = {
      network        = "incusbr0"
      name           = "eth0"
      "ipv4.address" = "10.1.1.102"
    }
  }
}
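
With this configuration in a working directory (next to files/cloud-init.yaml and the kubeadmin_password_hash file), the usual OpenTofu workflow creates everything:

tofu init
tofu plan -out kubernetes.plan
tofu apply kubernetes.plan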

Installing Kubernetes

I installed Kubernetes with an Ansible playbook.

SELinux must be disabled on the virtual machines so that K8s can manage their iptables rules.

SELinux must be disabled on the host machine so that K8s can create volumes through the NFS storage class.
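
The playbook below takes care of the virtual machines; on the host this has to be done by hand, roughly along these lines (a sketch that switches SELinux to permissive; use SELINUX=disabled if you want it fully off):

# stop enforcing immediately
sudo setenforce 0
# and keep it that way across reboots
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config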

Base installation

Source on git

- name: Install kubernetes
  become: true
  hosts: incus-k8s-nodes
  tasks:
    - name: Disable SELinux
      ansible.posix.selinux:
        state: disabled

    - name: Install nfs-utils
      ansible.builtin.dnf:
        name: nfs-utils
        state: present
        update_cache: true

    - name: Check if firewalld is installed
      ansible.builtin.command:
        cmd: rpm -q firewalld
      failed_when: false
      changed_when: false
      register: firewalld_check

    - name: Disable firewall
      ansible.builtin.systemd_service:
        name: firewalld
        state: stopped
        enabled: false
        masked: true
      when: firewalld_check.rc == 0

    - name: Install iptables and iproute-tc
      ansible.builtin.dnf:
        name: "{{ item }}"
        state: present
        update_cache: true
      loop:
        - iptables
        - iproute-tc

    - name: Configure network
      block:
        - name: Configure kernel modules
          ansible.builtin.copy:
            src: files/etc_modules-load.d_k8s.conf
            dest: /etc/modules-load.d/k8s.conf
            owner: root
            group: root
            mode: "0644"

        - name: Enable overlay and br_netfilter module
          community.general.modprobe:
            name: "{{ item }}"
            state: present
          loop:
            - overlay
            - br_netfilter

        - name: Configure sysctl
          ansible.posix.sysctl:
            name: "{{ item.key }}"
            value: "{{ item.value }}"
            state: present
            reload: true
          loop:
            - { key: net.bridge.bridge-nf-call-iptables, value: 1 }
            - { key: net.bridge.bridge-nf-call-ip6tables, value: 1 }
            - { key: net.ipv4.ip_forward, value: 1 }

    - name: Install kubernetes
      ansible.builtin.dnf:
        name: "{{ item }}"
        state: present
      loop:
        - cri-o1.34
        - kubernetes1.34
        - kubernetes1.34-kubeadm
        - kubernetes1.34-client

    - name: Start and enable cri-o
      ansible.builtin.systemd_service:
        name: crio
        state: started
        enabled: true

    - name: Start and enable kubelet
      ansible.builtin.systemd_service:
        name: kubelet
        state: started
        enabled: true

    - name: Check if kubeadm_init_result.txt exists on kube-main
      when: inventory_hostname == "kube-main"
      ansible.builtin.stat:
        path: /root/kubeadm_init_result.txt
      register: kubeadm_init_file_check
      failed_when: false

    - name: Run init command
      when: inventory_hostname == "kube-main" and kubeadm_init_file_check.stat.exists == false
      ansible.builtin.shell:
        cmd: "kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/crio/crio.sock > /root/kubeadm_init_result.txt"
      register: kubeadm_init_result
      changed_when: kubeadm_init_result.rc == 0
      failed_when: kubeadm_init_result.rc != 0

    - name: AFTER INIT -- Check if kubeadm_init_result.txt exists on kube-main
      when: inventory_hostname == "kube-main"
      ansible.builtin.stat:
        path: /root/kubeadm_init_result.txt
      register: kubeadm_init_file_check

    - name: Read init result file content
      when: inventory_hostname == "kube-main" and kubeadm_init_file_check.stat.exists == true
      ansible.builtin.command:
        cmd: cat /root/kubeadm_init_result.txt
      register: kubeadm_init_file_content

    - name: Retrieve kubeadm_init_file_content for other tasks
      ansible.builtin.set_fact:
        kubeadm_init_file_content: "{{ kubeadm_init_file_content }}"
      run_once: true
      delegate_to: localhost

    - name: Set join command from file content
      ansible.builtin.set_fact:
        join_command: >-
          {{
            (kubeadm_init_file_content.stdout_lines[-2] +
             kubeadm_init_file_content.stdout_lines[-1])
             | to_json()
             | replace("\\", '')
             | replace("\t", '')
             | replace('"', '')
          }}          

    - name: Display join command on worker nodes
      when: inventory_hostname in ["kube-worker1", "kube-worker2"]
      ansible.builtin.debug:
        var: join_command

    - name: Check if kubeadm join has already been run
      when: inventory_hostname in ["kube-worker1", "kube-worker2"]
      ansible.builtin.stat:
        path: /var/log/kubeadm_join.log
      register: kubeadm_join_file_check

    - name: Join worker nodes to the cluster
      when: inventory_hostname in ["kube-worker1", "kube-worker2"] and kubeadm_join_file_check.stat.exists == false
      ansible.builtin.shell:
        cmd: "{{ join_command }} >> /var/log/kubeadm_join.log"
      register: kubeadm_join_result
      changed_when: kubeadm_join_result.rc == 0
      failed_when: kubeadm_join_result.rc != 0

    - name: Create .kube directory on localhost
      delegate_to: localhost
      run_once: true
      become: false
      ansible.builtin.file:
        path: ~/.kube
        state: directory
        mode: "0755"

    - name: Fetch admin.conf from kube-main
      when: inventory_hostname == "kube-main"
      ansible.builtin.fetch:
        src: /etc/kubernetes/admin.conf
        dest: ~/.kube/config
        flat: true
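
The playbook is then run against an inventory that groups the three VMs under incus-k8s-nodes (the file names below are placeholders, adapt them to your layout):

ansible-playbook -i inventory.ini install_kubernetes.yaml

# once it finishes, the kubeconfig fetched to ~/.kube/config lets you query the cluster
kubectl get nodes -o wide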

Installing the network overlay and NFS storage

Source on git

- name: Post install
  hosts: localhost
  vars_files:
    - config/config_vars.yaml
  tasks:
    - name: Apply network overlay
      delegate_to: localhost
      kubernetes.core.k8s:
        state: present
        src: https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml

    - name: Add CSI driver helm repo
      delegate_to: localhost
      kubernetes.core.helm_repository:
        name: nfs-subdir-external-provisioner
        repo_url: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

    - name: Install CSI driver
      delegate_to: localhost
      kubernetes.core.helm:
        name: nfs-subdir-external-provisioner
        chart_ref: nfs-subdir-external-provisioner/nfs-subdir-external-provisioner
        update_repo_cache: true
        create_namespace: false
        release_namespace: kube-system
        values:
          storageClass:
            name: nfs-csi
            defaultClass: true
          nfs:
            server: "{{ nfs.server }}"
            path: "{{ nfs.path }}"

Installing Traefik

Source on git

The goal here is to install Traefik, a reverse proxy that supports HTTP(S) and TCP with automatic SSL certificate generation. I chose to use the Let's Encrypt "DNS" challenge.

# files/traefik_ovh_secret.template.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: ovh-api-credentials
  namespace: traefik
type: Opaque
data:
  OVH_ENDPOINT: "{{ ovh_creds.ovh_endpoint | b64encode }}"
  OVH_APPLICATION_KEY: "{{ ovh_creds.ovh_application_key | b64encode }}"
  OVH_APPLICATION_SECRET: "{{ ovh_creds.ovh_application_secret | b64encode }}"
  OVH_CONSUMER_KEY: "{{ ovh_creds.ovh_consumer_key | b64encode }}"
# files/traefik_values.template.yaml
---
persistence:
  enabled: true
  size: 1G

ports:
  web:
    exposedPort: 80
    nodePort: 30080
  websecure:
    exposedPort: 443
    nodePort: 30443
    tls:
      enabled: true
  ssh:
    port: 2222
    expose:
      default: true
    exposedPort: 2222
    nodePort: 30022
    protocol: TCP

service:
  type: NodePort

ingressRoute:
  dashboard:
    enabled: true
    matchRule: Host(`traefik.kube-main.lab`)
    entryPoints:
      - web

providers:
  kubernetesCRD:
    allowExternalNameServices: true
  kubernetesGateway:
    enabled: true

gateway:
  listeners:
    web:
      namespacePolicy:
        from: All

certificatesResolvers:
  letsencrypt_dns_stag:
    acme:
      email: "{{ email }}"
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      storage: "/data/acme_dns_stag.json"
      dnsChallenge:
        provider: ovh
        delayBeforeCheck: 0
  letsencrypt_dns:
    acme:
      email: "{{ email }}"
      storage: "/data/acme_dns.json"
      dnsChallenge:
        provider: ovh
        delayBeforeCheck: 0

env:
  - name: OVH_ENDPOINT
    valueFrom:
      secretKeyRef:
        name: ovh-api-credentials
        key: OVH_ENDPOINT
  - name: OVH_APPLICATION_KEY
    valueFrom:
      secretKeyRef:
        name: ovh-api-credentials
        key: OVH_APPLICATION_KEY
  - name: OVH_APPLICATION_SECRET
    valueFrom:
      secretKeyRef:
        name: ovh-api-credentials
        key: OVH_APPLICATION_SECRET
  - name: OVH_CONSUMER_KEY
    valueFrom:
      secretKeyRef:
        name: ovh-api-credentials
        key: OVH_CONSUMER_KEY

logs:
  general:
    level: INFO
# playbook.yaml
- name: Setup Traefik
  vars_files:
    - secrets/traefik_secrets.yaml
  hosts:
    - localhost
  tasks:
    - name: Create Traefik namespace
      delegate_to: localhost
      kubernetes.core.k8s:
        name: traefik
        api_version: v1
        kind: Namespace
        state: present

    - name: Add Traefik chart repo
      delegate_to: localhost
      kubernetes.core.helm_repository:
        name: traefik
        repo_url: "https://traefik.github.io/charts"

    - name: Setup Traefik secret for OVH DNS
      delegate_to: localhost
      kubernetes.core.k8s:
        template: files/traefik_ovh_secret.template.yaml
        state: present

    - name: Setup Traefik
      delegate_to: localhost
      kubernetes.core.helm:
        name: traefik
        chart_ref: traefik/traefik
        update_repo_cache: true
        create_namespace: true
        release_namespace: traefik
        values: "{{ lookup('template', 'files/traefik_values.template.yaml') | from_yaml }}"

This playbook installs Traefik with HTTP, HTTPS and TCP entrypoints. The HTTP and HTTPS entrypoints will be used to expose the web services deployed in the cluster. The TCP entrypoint will be used by the git instance that will be deployed in the cluster (for git over SSH).
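
To illustrate how the certificate resolvers defined above are consumed, here is a hypothetical IngressRoute for a whoami Service (the Service and the hostname are placeholders, not something running in my cluster):

kubectl apply -f - <<'EOF'
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`whoami.example.com`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    certResolver: letsencrypt_dns
EOF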

Network redirection

The network now needs to be configured so that the services deployed in the cluster are reachable from the outside. Traefik is configured to expose ports 30080, 30443 and 30022 on the cluster machines.

However, my virtual machines are not directly reachable from my local network, so traffic has to go through the host machine before reaching the virtual machine.

/medias/machines_reseau.drawio.svg

To do this, I used the following commands:

  firewall-cmd --zone=trusted --add-forward-port=port=8080:proto=tcp:toport=30080:toaddr=10.1.1.100 --permanent
  firewall-cmd --zone=trusted --add-forward-port=port=8443:proto=tcp:toport=30443:toaddr=10.1.1.100 --permanent
  firewall-cmd --zone=trusted --add-forward-port=port=30022:proto=tcp:toport=30022:toaddr=10.1.1.100 --permanent
  firewall-cmd --reload

  firewall-cmd --zone=FedoraServer --add-forward-port=port=30080:proto=tcp:toport=30080:toaddr=10.1.1.100 --permanent
  firewall-cmd --zone=FedoraServer --add-forward-port=port=30443:proto=tcp:toport=30443:toaddr=10.1.1.100 --permanent
  firewall-cmd --zone=FedoraServer --add-forward-port=port=30022:proto=tcp:toport=30022:toaddr=10.1.1.100 --permanent
  firewall-cmd --reload

The IP address 10.1.1.100 is that of the kube-main virtual machine.
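
To double-check that the forward rules are in place:

firewall-cmd --zone=trusted --list-forward-ports
firewall-cmd --zone=FedoraServer --list-forward-ports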

On my router, I configured the forwarding as follows:

  • port 80 -> homelab:8080
  • port 443 -> homelab:443
  • port 22 -> homelab:30022

What's next

An upcoming article will go into the details of the storage class setup that provides data persistence for the K8s pods.