MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. It gives your on-prem or non-cloud Kubernetes environment the same external load-balancing capability that cloud providers (such as AWS ELB or GCP Load Balancer) offer automatically.
Purpose
MetalLB allows Services of type LoadBalancer to work in environments without a built-in cloud load balancer, such as:
- Bare-metal clusters.
- Local clusters (e.g., running in Parallels, VMware, KVM).
- Edge deployments or lab environments.
Without MetalLB, Services of type LoadBalancer stay in the “pending” state indefinitely, because Kubernetes expects a cloud provider to assign the external IP and no such provider exists on bare metal.
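For reference, a minimal Service of type LoadBalancer looks like the sketch below; the name, selector, and ports are placeholders. Without a load-balancer implementation its external IP stays pending; with MetalLB installed, it receives an address from a configured pool.

```yaml
# Hypothetical Service; name, selector, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  type: LoadBalancer    # needs a load-balancer implementation such as MetalLB
  selector:
    app: demo-web
  ports:
    - port: 80          # port exposed on the assigned external IP
      targetPort: 8080  # container port traffic is forwarded to
```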
How it works
MetalLB supports two operating modes:
- Layer 2 (L2) mode
  - The simplest mode: uses ARP (IPv4) or NDP (IPv6) to announce service IPs on your local network.
  - When a Service of type LoadBalancer is created, MetalLB assigns it an IP from a configured address pool and announces it at the Ethernet level (a minimal configuration sketch follows this list).
  - Works well on small or flat LAN networks.
- BGP mode
  - More advanced: advertises service IPs to routers via the Border Gateway Protocol.
  - Suitable for large, routed networks where you want proper integration with upstream routers and multiple gateways.
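As a concrete sketch of L2 mode, the two resources below define an address pool and announce it on the local network. The pool name, namespace, and address range are placeholders; the metallb.io/v1beta1 API matches current MetalLB releases.

```yaml
# Hypothetical L2-mode configuration; names and ranges are placeholders.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.30.240-192.168.30.250   # LAN range reserved for LoadBalancer IPs
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool   # announce IPs from the pool above via ARP/NDP
```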
Typical use case
MetalLB can assign external IPs to services like:
- ingress-nginx-controller
- kubernetes-dashboard
- Any app you expose via type: LoadBalancer
This allows you to access services directly from your LAN using an assigned IP (e.g., 192.168.30.240) instead of manually port-forwarding.
Install options
Option 1: FRR Sidecar Mode (Legacy / Manual)
Overview
| Item | Value |
|---|---|
| Deployment type | MetalLB speaker with FRR sidecar |
| Operator/CRDs | None |
| Use case | Small or test clusters; simple setup |
Architecture
| Component | Detail |
|---|---|
| Speaker pod | Runs FRR as a sidecar container in the same pod. |
| Network namespace | FRR shares the network namespace with the MetalLB speaker. |
| FRR configuration path | Speaker writes configuration directly to /etc/frr/ inside the pod. |
| FRR process placement | No separate FRR DaemonSet. FRR is embedded within the speaker pod. |
How it works
| Step | Action |
|---|---|
| 1 | Deploy the standard MetalLB manifests (non-operator). |
| 2 | Enable FRR integration by adding an frr container to the speaker pod spec. |
| 3 | MetalLB speaker writes BGP configuration to FRR via vtysh or FRR config files. |
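Step 2 is easier to picture with an example, so here is a rough sketch of the relevant fragment of the speaker DaemonSet pod spec. The container names, image tags, and volume layout are assumptions for illustration, not the verbatim upstream manifest.

```yaml
# Illustrative pod-spec fragment; image tags and volume layout are assumptions.
spec:
  containers:
    - name: speaker
      image: quay.io/metallb/speaker:v0.14.8    # version is a placeholder
      volumeMounts:
        - name: frr-config
          mountPath: /etc/frr   # speaker writes FRR config here
    - name: frr
      image: quay.io/frrouting/frr:9.1.0        # version is a placeholder
      volumeMounts:
        - name: frr-config
          mountPath: /etc/frr   # FRR reads the same files
  volumes:
    - name: frr-config
      emptyDir: {}              # shared for the pod's lifetime
```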
Pros and cons
| Pros | Cons |
|---|---|
| Simpler setup; no operator and no CRDs. | Configuration is not declarative; FRRConfiguration CRDs cannot be used. |
| Easier debugging because FRR is colocated with the speaker. | You must manage FRR restarts and configuration persistence manually. |
| Fewer moving parts; good for small or test clusters. | No webhook validation or structured error checking. |
Option 2: Operator / FRR-K8s Mode
Overview
| Item | Value |
|---|---|
| Deployment type | MetalLB Operator with FRR-K8s |
| Management | metallb-operator |
| Configuration | Declarative via Kubernetes CRDs |
| Use case | Production or larger clusters |
Architecture
| Component | Detail |
|---|---|
| FRR placement | Runs as its own DaemonSet (frr-k8s) on each node. |
| MetalLB components | Operator installs controller and speakers automatically. |
| Config source | FRR configuration comes from CRDs, not static ConfigMaps. |
| Control path | Operator reconciles CRDs and programs FRR via API and vtysh. |

Key CRDs
| CRD | Purpose | Example fields |
|---|---|---|
| FRRConfiguration | Global FRR and BGP policy for the cluster or nodes. | route-maps, prefix-lists, communities, timers. |
| BGPPeer | Defines external or internal BGP neighbours. | peer IP, remote AS, password, ASN capability. |
| BGPAdvertisement | Which LoadBalancer IPs/prefixes to advertise. | prefixes, communities, aggregation, selection. |
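To make the CRD table concrete, here is a hedged sketch of a BGPPeer and a BGPAdvertisement. The ASNs, addresses, and pool name are placeholders; the apiVersions shown match current MetalLB releases.

```yaml
# Hypothetical BGP peering; ASNs and addresses are placeholders.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: upstream-router
  namespace: metallb-system
spec:
  myASN: 64512            # local AS number (placeholder)
  peerASN: 64513          # upstream router's AS (placeholder)
  peerAddress: 10.0.0.1   # upstream router's IP (placeholder)
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: advertise-pools
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool            # advertise IPs from this pool (placeholder name)
```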
How it works
| Step | Action |
|---|
| 1 | Deploy the MetalLB Operator. It installs controller, speakers, and frr-k8s automatically. |
| 2 | Each node runs an frr-k8s pod providing the full FRR routing stack. |
| 3 | You declare configuration using CRDs (FRRConfiguration, BGPPeer, BGPAdvertisement). |
| 4 | The Operator watches those CRDs and pushes BGP config into FRR using its API and vtysh. |
| 5 | MetalLB controller assigns LoadBalancer IPs; speakers coordinate with the local FRR instance. |
| 6 | FRR maintains BGP sessions and announces the LoadBalancer prefixes to external routers. |
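Steps 3 and 4 revolve around the FRRConfiguration resource, so a hedged sketch follows. The ASNs and neighbor address are placeholders, and the exact field layout should be verified against the frr-k8s version the Operator installs.

```yaml
# Hypothetical FRRConfiguration; values are placeholders and the field
# layout should be checked against your frr-k8s release.
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: lab-frr
  namespace: metallb-system
spec:
  bgp:
    routers:
      - asn: 64512                 # local AS (placeholder)
        neighbors:
          - address: 10.0.0.1      # upstream router (placeholder)
            asn: 64513             # its AS (placeholder)
            toAdvertise:
              allowed:
                mode: all          # advertise all eligible prefixes
```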
Pros and cons
| Pros | Cons |
|---|
| Declarative, clean configuration stored as Kubernetes objects. | More complex to troubleshoot end to end. |
| Automatic sync of BGP peers and policies from CRDs. | Operator reconciliation or network policies can block progress. |
| Certificates, webhooks, and validation are handled automatically. | Higher operational footprint than the sidecar mode. |