Setting up a Home Lab Load Balancer with MetalLB
For a while now, I've been wanting a better setup on my home lab K3s cluster for exposing pods outside of the cluster itself for accessibility within my home network. After polling the community on this matter, I found that MetalLB is the recommended tool for the job. So this post will go over my experience setting up MetalLB alongside the rest of my deployments in my K3s cluster.
First things first, I had to assign a few IPs for MetalLB to use. This process varies from router to router, but in my case, I had to go into my router's advanced DHCP settings, where I could set the start and end IP for my router's DHCP server. There I was able to take the end IP and bump it back a bit to make room for a few IPs at the end of the range. What that looked like in practice was:
```
Before:
Start IP: 192.168.68.50
End IP: 192.168.71.250

After:
Start IP: 192.168.68.50
End IP: 192.168.71.240
```
This gave me 10 free IPs (192.168.71.241-192.168.71.250) that my router recognized but wouldn't dole out to other devices on my home network. From this point, it was a matter of deploying the helm chart. If you've been reading my other blog posts, you'll know that I have my K8s deployments for my home lab set up in a helmfile to handle dependencies and other things with my various helm charts. See my previous post about that here. In general, there were a few things I had to do to pull this chart into my helmfile:
- Add the Bitnami helm repo
- Create the release in my helmfile
- Customize it with a values.yaml file
Those steps looked like this:
Adding the repo:
```yaml
repositories:
...
- name: bitnami
  url: https://charts.bitnami.com/bitnami
```
Adding the helm release:
```yaml
releases:
...
- name: metallb
  namespace: default
  chart: bitnami/metallb
  version: 3.0.2
  values:
    - ./metallb/values.yaml
```
Customizing the release with a values.yaml file:
```yaml
configInline:
  address-pools:
    - name: default
      protocol: layer2
      addresses:
        - 192.168.71.241-192.168.71.250

controller:
  image:
    tag: 0.11.0
  metrics:
    serviceMonitor:
      enabled: true
      labels:
        release: prometheus-operator

speaker:
  image:
    tag: 0.11.0
  secretName: metallb-speaker-memberlist
  metrics:
    serviceMonitor:
      enabled: true
      labels:
        release: prometheus-operator
```
To explain the above lines:
- `configInline` is what we use to tell MetalLB which IP address range to hand out. This setting supports both an IP range like I have here and CIDR notation.
- In the controller block, I'm specifying a custom image tag as my version of K8s is getting a little old (1.19) and also configuring a Prometheus service monitor. I try to have a service monitor for all my deployments so I can spin up dashboards for them to have visibility, so this was nice.
- In the speaker block, I'm doing pretty much the same thing as in the controller block, with the addition of the speaker's memberlist secret name.
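As an aside, if I had wanted to use CIDR notation instead of an explicit range, the address pool could be written like this. Note the `/28` here is purely illustrative, not my actual pool: it covers 192.168.71.240-192.168.71.255, which is a slightly different set of addresses than my range above.

```yaml
# Hypothetical alternative: address pool expressed in CIDR notation.
configInline:
  address-pools:
    - name: default
      protocol: layer2
      addresses:
        - 192.168.71.240/28
```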
Once I had all of this in place, it was time to try a deployment. Right off the bat, though, I wasn't able to get my test deployment (Grafana) hooked up with a LoadBalancer service. What I found after quite a bit of digging was that K3s ships with a default service load balancer controller (servicelb). And since K8s expects a singleton for this component of the cluster, it didn't know how to handle a second one. The solution to this was to reprovision my K3s nodes with k3sup, passing the following parameters:
```
--disable servicelb,traefik
```
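For reference, with k3sup those K3s server flags get passed through `--k3s-extra-args`. A rough sketch of what that invocation might look like (the IP, user, and key path are placeholders, not my actual values):

```shell
# Hypothetical example: reprovision a K3s server node without servicelb or traefik.
# Replace the IP, user, and SSH key path with your own.
k3sup install \
  --ip 192.168.68.10 \
  --user pi \
  --ssh-key ~/.ssh/id_rsa \
  --k3s-extra-args '--disable servicelb,traefik'
```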
Once this was done (plus or minus a re-deployment of the MetalLB helm chart) I was able to get my Grafana deployment working through a MetalLB service just by switching the service type from NodePort to LoadBalancer.
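For illustration, that change amounts to flipping the `type` field on the Service. A minimal sketch of what a LoadBalancer Service for Grafana might look like (the name, selector labels, and ports here are placeholders, not the actual values from my Grafana chart):

```yaml
# Hypothetical Service manifest: switching from NodePort to LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: default
spec:
  type: LoadBalancer   # previously NodePort
  selector:
    app.kubernetes.io/name: grafana
  ports:
    - port: 80
      targetPort: 3000   # Grafana's default container port
```

After applying something like this, `kubectl get svc grafana` should show an EXTERNAL-IP drawn from the MetalLB pool.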
That's about it for this time folks. Thanks for reading and stay tuned for more nerdy DevOps posts!