MetalLB

MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. It gives your on-prem or non-cloud Kubernetes environment the same external load-balancing capability that cloud providers (like AWS ELB or GCP Load Balancer) offer automatically.

Purpose

MetalLB allows Services of type LoadBalancer to work in environments without a built-in cloud load balancer, such as:

  • Bare-metal clusters.
  • Local clusters (e.g., running in Parallels, VMware, KVM).
  • Edge deployments or lab environments.

Without MetalLB, LoadBalancer services stay in the “pending” state forever, because Kubernetes relies on the cloud provider to assign an external IP, and no such provider exists on bare metal.
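
For illustration, this is the kind of Service that stays pending without MetalLB; the name, labels, and ports are hypothetical:

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-app            # hypothetical Service name
    spec:
      type: LoadBalancer        # asks the environment for an external IP
      selector:
        app: demo-app           # hypothetical Pod label
      ports:
        - port: 80              # port exposed on the external IP
          targetPort: 8080      # hypothetical container port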

How it works

MetalLB supports two operating modes:

  1. Layer 2 (L2) mode
    • Simplest mode — uses ARP (IPv4) or NDP (IPv6) to announce service IPs on your local network.
    • When a service of type LoadBalancer is created, MetalLB assigns it an IP from a configured address pool and announces it at the Ethernet level.
    • Works great for small or flat LAN networks; a minimal configuration sketch follows this list.
  2. BGP mode
    • More advanced — advertises service IPs to routers via the Border Gateway Protocol.
    • Suitable for large, routed networks where you want proper integration with upstream routers and multiple gateways.
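
As a sketch of L2 mode, the two resources below define an address pool and announce it on the LAN. IPAddressPool and L2Advertisement are standard MetalLB CRDs; the names and the address range are assumptions for illustration:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: lan-pool                        # hypothetical pool name
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.30.240-192.168.30.250     # example LAN range MetalLB may hand out
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: lan-l2                          # hypothetical name
      namespace: metallb-system
    spec:
      ipAddressPools:
        - lan-pool                          # announce addresses from the pool above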

Typical use case

MetalLB can assign external IPs to services like:

  • ingress-nginx-controller.
  • kubernetes-dashboard.
  • Any app you expose via type: LoadBalancer.

This lets you access services directly from your LAN using an assigned IP (e.g., 192.168.30.240) instead of manually port-forwarding.
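
For example, a minimal sketch of pinning a specific pool address to a Service via MetalLB's annotation (the Service name, labels, and ports here are hypothetical):

    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard                            # hypothetical Service name
      annotations:
        metallb.universe.tf/loadBalancerIPs: 192.168.30.240 # request this specific pool address
    spec:
      type: LoadBalancer
      selector:
        app: kubernetes-dashboard    # hypothetical Pod label
      ports:
        - port: 443                  # external port on the assigned IP
          targetPort: 8443           # hypothetical container port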

Install options

Option 1: FRR Sidecar Mode (Legacy / Manual)

Overview

  • Deployment type: MetalLB speaker with FRR sidecar
  • Operator/CRDs: None
  • Use case: Small or test clusters; simple setup

Architecture

  • Speaker pod: Runs FRR as a sidecar container in the same pod.
  • Network namespace: FRR shares the network namespace with the MetalLB speaker.
  • FRR configuration path: The speaker writes configuration directly to /etc/frr/ inside the pod.
  • FRR process placement: No separate FRR DaemonSet; FRR is embedded within the speaker pod.

How it works

  1. Deploy the standard MetalLB manifests (non-operator).
  2. Enable FRR integration by adding an frr container to the speaker pod spec; a minimal sketch follows this list.
  3. The MetalLB speaker writes BGP configuration to FRR via vtysh or FRR config files.
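
A minimal sketch of step 2, assuming the upstream FRR image and an emptyDir volume for /etc/frr (the image tag, capabilities, and volume wiring are illustrative, not copied from the official manifests):

    # fragment added to the speaker DaemonSet's pod spec
    containers:
      - name: frr                             # the FRR sidecar container
        image: quay.io/frrouting/frr:9.1.0    # assumed image and tag; use a tested release
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]     # routing daemons need elevated network privileges
        volumeMounts:
          - name: frr-conf
            mountPath: /etc/frr               # where the speaker writes FRR configuration
    volumes:
      - name: frr-conf
        emptyDir: {}                          # config does not persist across pod restarts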

Pros and cons

Pros:
  • Simpler setup; no operator and no CRDs.
  • Easier debugging because FRR is colocated with the speaker.
  • Fewer moving parts; good for small or test clusters.

Cons:
  • Configuration is not declarative; FRRConfiguration CRDs cannot be used.
  • You must manage FRR restarts and configuration persistence manually.
  • No webhook validation or structured error checking.

Option 2: Operator / FRR-K8s Mode

Overview

  • Deployment type: MetalLB Operator with FRR-K8s
  • Management: metallb-operator
  • Configuration: Declarative via Kubernetes CRDs
  • Use case: Production or larger clusters

Architecture

  • FRR placement: Runs as its own DaemonSet (frr-k8s) on each node.
  • MetalLB components: The Operator installs the controller and speakers automatically.
  • Config source: FRR configuration comes from CRDs, not static ConfigMaps.
  • Control path: The Operator reconciles CRDs and programs FRR via API and vtysh.

Key CRDs

  • FRRConfiguration: Global FRR and BGP policy for the cluster or nodes. Example fields: route-maps, prefix-lists, communities, timers.
  • BGPPeer: Defines external or internal BGP neighbours. Example fields: peer IP, remote AS, password, ASN capability.
  • BGPAdvertisement: Selects which LoadBalancer IPs/prefixes to advertise. Example fields: prefixes, communities, aggregation, selection.
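
A sketch of the peering CRDs, assuming example AS numbers, a hypothetical upstream router address, and the lan-pool shown earlier:

    apiVersion: metallb.io/v1beta2
    kind: BGPPeer
    metadata:
      name: upstream-router          # hypothetical peer name
      namespace: metallb-system
    spec:
      myASN: 64512                   # example local AS number
      peerASN: 64513                 # example remote AS number
      peerAddress: 192.168.30.1      # example upstream router IP
    ---
    apiVersion: metallb.io/v1beta1
    kind: BGPAdvertisement
    metadata:
      name: lan-bgp                  # hypothetical name
      namespace: metallb-system
    spec:
      ipAddressPools:
        - lan-pool                   # advertise the pool defined earlier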

How it works

  1. Deploy the MetalLB Operator. It installs the controller, speakers, and frr-k8s automatically.
  2. Each node runs an frr-k8s pod providing the full FRR routing stack.
  3. You declare configuration using CRDs (FRRConfiguration, BGPPeer, BGPAdvertisement); a minimal sketch follows this list.
  4. The Operator watches those CRDs and pushes BGP config into FRR using its API and vtysh.
  5. The MetalLB controller assigns LoadBalancer IPs; speakers coordinate with the local FRR instance.
  6. FRR maintains BGP sessions and announces the LoadBalancer prefixes to external routers.
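
For step 3, a minimal FRRConfiguration sketch as frr-k8s consumes it (the API group and field names follow the frr-k8s project; the name, namespace, ASNs, and neighbor address are assumptions):

    apiVersion: frrk8s.metallb.io/v1beta1
    kind: FRRConfiguration
    metadata:
      name: cluster-bgp                 # hypothetical name
      namespace: metallb-system         # assumed install namespace
    spec:
      bgp:
        routers:
          - asn: 64512                  # example local AS number
            neighbors:
              - address: 192.168.30.1   # example upstream router
                asn: 64513              # example remote AS number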

Pros and cons

Pros:
  • Declarative, clean configuration stored as Kubernetes objects.
  • Automatic sync of BGP peers and policies from CRDs.
  • Certificates, webhooks, and validation are handled automatically.

Cons:
  • More complex to troubleshoot end to end.
  • Operator reconciliation or network policies can block progress.
  • Higher operational footprint than the sidecar mode.