MetalLB operator (FRR-K8s)
Operator with FRR backend (frr-k8s): inbound route filtering is controlled through the FRRConfiguration CR. That is where you set toReceive.allowed and, optionally, prefix lists.
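For example, instead of accepting everything with mode: all (as in the configuration applied later in this guide), receiving can be narrowed to an explicit prefix list. A minimal sketch of the neighbor-level fragment, assuming frr-k8s's filtered receive mode; the 192.168.100.0/24 prefix and the ge/le bounds are illustrative only:
toReceive:
  allowed:
    mode: filtered        # accept only what the prefix list below permits
    prefixes:
    - prefix: 192.168.100.0/24
      ge: 24              # minimum prefix length to match
      le: 32              # maximum prefix length to match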
Preparation
- Create the namespace.
kubectl create ns metallb-system
- Label the worker node.
kubectl label node dev-w-p1 metallb=enabled --overwrite
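- The FRRConfiguration applied later selects nodes by this label, so confirm it stuck before moving on.
kubectl get nodes -l metallb=enabled -o wide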
Install the MetalLB Operator (CRDs + controller)
- Apply the manifest from the MetalLB GitHub repository.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb-operator/main/bin/metallb-operator.yaml
- Verify the operator and webhooks are up.
kubectl -n metallb-system get deploy
kubectl -n metallb-system get pods
kubectl get crd | egrep 'metallb\.io|frrk8s\.metallb\.io'
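- Rather than polling, kubectl wait can block until the operator reports Available (Deployment names as created by the manifest above).
kubectl -n metallb-system wait --for=condition=Available --timeout=180s \
  deploy/metallb-operator-controller-manager deploy/metallb-operator-webhook-server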
Create the memberlist secret
- With the current operator version the secret does not appear to be created automatically, so create it by hand.
kubectl -n metallb-system create secret generic metallb-memberlist \
--from-literal=secretkey="$(openssl rand -base64 32)"
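- A quick sanity check that the secret exists and holds a key (the value is random, so only its length is asserted; 32 bytes expected).
kubectl -n metallb-system get secret metallb-memberlist -o jsonpath='{.data.secretkey}' | base64 -d | wc -c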
Tell the operator to use the frr-k8s BGP backend
- Set the BGP backend to frr-k8s.
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  bgpBackend: frr-k8s
EOF
- Watch the operator reconcile the MetalLB resources; it should create the controller Deployment and the speaker DaemonSet.
kubectl -n metallb-system get deploy,ds,svc
kubectl -n metallb-system logs deploy/metallb-operator-controller-manager -c manager --tail=100
- If the logs show the controller cannot reach the Kubernetes API from inside the cluster, egress is most likely blocked by a NetworkPolicy such as a default-deny.
kubectl -n metallb-system get networkpolicy
NAME POD-SELECTOR AGE
default-deny <none> 15m
operator control-plane=controller-manager 15m
webhook-service component=webhook-server 15m
- A temporary workaround is to delete all policies in metallb-system and re-apply them once everything is running correctly; a scoped allow-egress alternative is sketched below.
kubectl -n metallb-system delete networkpolicy --all
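- A more surgical alternative is to keep default-deny and add an explicit egress allowance for the MetalLB pods. A minimal sketch, assuming the API server listens on TCP 6443 and cluster DNS on port 53; adjust both to your cluster.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-egress
  namespace: metallb-system
spec:
  podSelector: {}              # applies to every pod in metallb-system
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: TCP
      port: 6443               # kube-apiserver (assumed port)
  - ports:
    - protocol: TCP
      port: 53                 # cluster DNS (assumed)
    - protocol: UDP
      port: 53
EOF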
- Restart the two Deployments and watch for them to become ready.
kubectl -n metallb-system rollout restart deploy/controller
kubectl -n metallb-system rollout restart deploy/frr-k8s-webhook-server
kubectl -n metallb-system rollout status deploy/controller
kubectl -n metallb-system rollout status deploy/frr-k8s-webhook-server
kubectl -n metallb-system get pods -o wide
kubectl -n metallb-system logs deploy/controller --tail=100
kubectl -n metallb-system logs deploy/frr-k8s-webhook-server --tail=100
- If the webhook server is crash looping, check the component first:
kubectl -n metallb-system get pods -l "component=frr-k8s-webhook-server" -o wide --show-labels
- Then check the pod:
POD=$(kubectl -n metallb-system get pods -l component=frr-k8s-webhook-server -o name | head -1)
kubectl -n metallb-system describe $POD
Name: frr-k8s-webhook-server-895fbdff5-s26xf
Namespace: metallb-system
Priority: 0
Service Account: frr-k8s-daemon
Node: dev-w-p1/192.168.30.207
Start Time: Mon, 06 Oct 2025 15:09:55 +0800
Labels: component=frr-k8s-webhook-server
pod-template-hash=895fbdff5
Annotations: cni.projectcalico.org/containerID: 171a6dba1ad480257d8033288191f29d8aa272ca2b8a84ba665f456c2fe069b3
cni.projectcalico.org/podIP: 10.70.200.54/32
cni.projectcalico.org/podIPs: 10.70.200.54/32
kubectl.kubernetes.io/default-container: frr-k8s-webhook-server
kubectl.kubernetes.io/restartedAt: 2025-10-06T15:09:55+08:00
Status: Running
IP: 10.70.200.54
IPs:
IP: 10.70.200.54
Controlled By: ReplicaSet/frr-k8s-webhook-server-895fbdff5
Containers:
frr-k8s-webhook-server:
Container ID: containerd://f0f59154d63a3b7d80f1c0f92d7ae52140df7227ccc7b47e8e44df4d94de9a80
Image: quay.io/metallb/frr-k8s:v0.0.20
Image ID: quay.io/metallb/frr-k8s@sha256:6cfcf2461e397f9c503de8370da14fcc95057ef7dc8616539cbf6de0ddfd8567
Ports: 19443/TCP, 7572/TCP
Host Ports: 0/TCP, 0/TCP
Command:
/frr-k8s
Args:
--log-level=info
--webhook-mode=onlywebhook
--restart-on-rotator-secret-refresh=true
--namespace=$(NAMESPACE)
--metrics-bind-address=:7572
--webhook-port=19443
State: Running
Started: Mon, 06 Oct 2025 15:09:56 +0800
Ready: True
Restart Count: 0
Liveness: http-get https://:webhook/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:webhook/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
NAMESPACE: metallb-system (v1:metadata.namespace)
Mounts:
/tmp/k8s-webhook-server/serving-certs from cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-97k4l (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
cert:
Type: Secret (a volume populated by a Secret)
SecretName: frr-k8s-webhook-server-cert
Optional: false
kube-api-access-97k4l:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/control-plane:NoSchedule op=Exists
node-role.kubernetes.io/master:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
- If the container starts the webhook on 19443 (Serving webhook server ... port:19443) but its probes hit http://:monitoring/metrics on 7572, where no metrics listener is exposed, the kubelet kills it and the pod goes into CrashLoopBackOff. Patch the Deployment so the probes target the webhook port.
# Probes -> HTTPS /healthz on the "webhook" port (19443)
# Add port names, and make sure the webhook port arg is present.
kubectl -n metallb-system patch deploy frr-k8s-webhook-server --type='json' -p='[
{"op":"add","path":"/spec/template/spec/containers/0/ports","value":[
{"name":"webhook","containerPort":19443,"protocol":"TCP"},
{"name":"monitoring","containerPort":7572,"protocol":"TCP"}
]},
{"op":"replace","path":"/spec/template/spec/containers/0/livenessProbe","value":{
"httpGet":{"path":"/healthz","port":"webhook","scheme":"HTTPS"},
"initialDelaySeconds":10,"periodSeconds":10,"timeoutSeconds":1,"failureThreshold":3,"successThreshold":1
}},
{"op":"replace","path":"/spec/template/spec/containers/0/readinessProbe","value":{
"httpGet":{"path":"/healthz","port":"webhook","scheme":"HTTPS"},
"initialDelaySeconds":10,"periodSeconds":10,"timeoutSeconds":1,"failureThreshold":3,"successThreshold":1
}},
{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--webhook-port=19443"}
]'
- Ensure the Service points to the right pods and port.
# Selector must match the pods (you already have this, but just in case)
kubectl -n metallb-system patch svc frr-k8s-webhook-service --type=merge \
-p='{"spec":{"selector":{"component":"frr-k8s-webhook-server"}}}'
# 443 -> targetPort 19443
kubectl -n metallb-system patch svc frr-k8s-webhook-service --type=merge \
-p='{"spec":{"ports":[{"name":"https","port":443,"protocol":"TCP","targetPort":19443}]}}' - Rollout and verify.
kubectl -n metallb-system rollout restart deploy/frr-k8s-webhook-server
kubectl -n metallb-system rollout status deploy/frr-k8s-webhook-server
# Should show an endpoint on 19443
kubectl -n metallb-system get endpoints frr-k8s-webhook-service -o wide
# Quick in-pod check (once pod is Running/Ready)
POD=$(kubectl -n metallb-system get pods -l component=frr-k8s-webhook-server -o name | head -1)
kubectl -n metallb-system exec -it "$POD" -- sh -lc '
(command -v ss >/dev/null && ss -ltn || netstat -ltn) | grep 19443 || true
wget -qO- --no-check-certificate https://127.0.0.1:19443/healthz || true'
deployment "frr-k8s-webhook-server" successfully rolled out
NAME ENDPOINTS AGE
frr-k8s-webhook-service 10.70.200.54:19443 143m
netstat: /proc/net/tcp6: No such file or directory
tcp 0 0 0.0.0.0:19443 0.0.0.0:* LISTEN
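- With the endpoint healthy, the whole webhook path can be exercised via a server-side dry run, which passes through admission webhooks without persisting anything. A sketch, assuming an empty spec passes frr-k8s validation.
cat <<'EOF' | kubectl apply --dry-run=server -f -
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: webhook-smoke-test
  namespace: metallb-system
spec: {}
EOF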
frr-k8s webhook service sanity check
- Sometimes the frr-k8s-webhook-service ends up with a mismatched selector or targetPort. Validate it and fix it if needed.
kubectl -n metallb-system get deploy frr-k8s-webhook-server -o jsonpath='{.spec.template.metadata.labels}'; echo
kubectl -n metallb-system get deploy frr-k8s-webhook-server -o yaml | sed -n '/containers:/,/ports:/p'
{"component":"frr-k8s-webhook-server"}
containers:
- args:
  - --log-level=info
  - --webhook-mode=onlywebhook
  - --restart-on-rotator-secret-refresh=true
  - --namespace=$(NAMESPACE)
  - --metrics-bind-address=:7572
  - --webhook-port=19443
  command:
  - /frr-k8s
  env:
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  image: quay.io/metallb/frr-k8s:v0.0.20
  imagePullPolicy: IfNotPresent
  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: webhook
      scheme: HTTPS
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  name: frr-k8s-webhook-server
  ports:
- Check the Service selector/targetPort and endpoints.
kubectl -n metallb-system get svc frr-k8s-webhook-service -o yaml | sed -n '/selector:/,/ports:/p'
kubectl -n metallb-system get svc frr-k8s-webhook-service -o yaml | sed -n '/ports:/,/selector:/p'
kubectl -n metallb-system get endpoints frr-k8s-webhook-service -o wide
selector:
  component: frr-k8s-webhook-server
sessionAffinity: None
type: ClusterIP
status:
  loadBalancer: {}
ports:
- name: https
  port: 443
  protocol: TCP
  targetPort: 19443
selector:
NAME ENDPOINTS AGE
frr-k8s-webhook-service 10.70.200.54:19443 142m
- If endpoints are empty, patch the Service to select only the webhook deployment's label and to target the deployment's webhook port.
- Ensure the selector matches the label key the deployment actually carries (shown by the earlier label query).
kubectl -n metallb-system patch svc frr-k8s-webhook-service \
--type='merge' \
-p='{"spec":{"selector":{"app.kubernetes.io/component":"frr-k8s-webhook-server"}}}' - Remove any stray selector keys (ignore if not present).
kubectl -n metallb-system patch svc frr-k8s-webhook-service \
--type='json' -p='[{"op":"remove","path":"/spec/selector/component"}]' || true - Ensure targetPort matches the container’s webhook port (commonly 19443).
kubectl -n metallb-system patch svc frr-k8s-webhook-service \
--type='merge' \
-p='{"spec":{"ports":[{"name":"https","port":443,"protocol":"TCP","targetPort":19443}]}}' - Re-check endpoints again (should show an IP:19443).
kubectl -n metallb-system get endpoints frr-k8s-webhook-service -o wide
NAME ENDPOINTS AGE
frr-k8s-webhook-service 10.70.200.54:19443 141m
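- To catch this drift quickly next time, the Service targetPort and the container's --webhook-port argument can be compared in one shot (field paths taken from the objects shown above).
kubectl -n metallb-system get svc frr-k8s-webhook-service -o jsonpath='{.spec.ports[0].targetPort}'; echo
kubectl -n metallb-system get deploy frr-k8s-webhook-server -o jsonpath='{.spec.template.spec.containers[0].args}' | tr ',' '\n' | grep webhook-port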
Basic MetalLB data-plane objects (pool + advertisement + peer)
- Address pool for your LB IPs.
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pool-10-70-1
  namespace: metallb-system
spec:
  addresses:
  - 10.70.1.0/24
  avoidBuggyIPs: true
EOF
- Advertise that pool over BGP.
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: adv-10-70-1
  namespace: metallb-system
spec:
  ipAddressPools:
  - pool-10-70-1
EOF
- Your external BGP neighbor.
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: edge-router
  namespace: metallb-system
spec:
  myASN: 64502
  peerASN: 64500
  peerAddress: 192.168.30.254
EOF
- Verify the objects:
kubectl -n metallb-system get ipaddresspools.metallb.io
kubectl -n metallb-system get bgpadvertisements.metallb.io
kubectl -n metallb-system get frrconfigurations.frrk8s.metallb.io
NAME AUTO ASSIGN AVOID BUGGY IPS ADDRESSES
pool-10-70-1 true true ["10.70.1.0/24"]
NAME IPADDRESSPOOLS IPADDRESSPOOL SELECTORS PEERS
adv-10-70-1 ["pool-10-70-1"]
NAME AGE
frr-global 93m
FRR route-maps via FRRConfiguration
- Only apply this after the webhook endpoints are healthy. This configuration permits receiving all routes from the peer and advertises only the pool prefix.
cat <<'EOF' | kubectl apply -f -
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: frr-global
  namespace: metallb-system
spec:
  nodeSelector:
    matchLabels:
      metallb: enabled
  bgp:
    routers:
    - asn: 64502
      neighbors:
      - address: 192.168.30.254
        asn: 64500
        toReceive:
          allowed:
            mode: all
        toAdvertise:
          allowed:
            prefixes:
            - 10.70.1.0/24
      prefixes:
      - 10.70.1.0/24
EOF
- Confirm FRR is programmed.
POD=$(kubectl -n metallb-system get pods -o name | grep '^pod/frr-k8s-' | head -n1 | sed 's|pod/||')
kubectl -n metallb-system exec -it "$POD" -c frr -- vtysh -c "show running-config"
kubectl -n metallb-system exec -it "$POD" -c frr -- vtysh -c "show bgp summary"
Building configuration...
Current configuration:
!
frr version 9.1_git
frr defaults traditional
hostname dev-w-p1
log file /etc/frr/frr.log informational
log timestamp precision 3
service integrated-vtysh-config
!
router bgp 64502
no bgp ebgp-requires-policy
no bgp default ipv4-unicast
bgp graceful-restart preserve-fw-state
no bgp network import-check
neighbor 192.168.30.254 remote-as 64500
!
address-family ipv4 unicast
network 10.70.1.0/24
neighbor 192.168.30.254 activate
neighbor 192.168.30.254 route-map 192.168.30.254-in in
neighbor 192.168.30.254 route-map 192.168.30.254-out out
exit-address-family
exit
!
ip prefix-list 192.168.30.254-inpl-ipv4 seq 1 permit any
ip prefix-list 192.168.30.254-allowed-ipv4 seq 1 permit 10.70.1.0/24
!
ipv6 prefix-list 192.168.30.254-allowed-ipv6 seq 1 deny any
ipv6 prefix-list 192.168.30.254-inpl-ipv4 seq 2 permit any
!
route-map 192.168.30.254-out permit 1
match ip address prefix-list 192.168.30.254-allowed-ipv4
exit
!
route-map 192.168.30.254-out permit 2
match ipv6 address prefix-list 192.168.30.254-allowed-ipv6
exit
!
route-map 192.168.30.254-in permit 3
match ip address prefix-list 192.168.30.254-inpl-ipv4
exit
!
route-map 192.168.30.254-in permit 4
match ipv6 address prefix-list 192.168.30.254-inpl-ipv4
exit
!
end
IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.30.207, local AS number 64502 vrf-id 0
BGP table version 12
RIB entries 21, using 2016 bytes of memory
Peers 1, using 13 KiB of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt Desc
192.168.30.254 4 64500 112 100 12 0 0 01:35:47 11 1 N/A
Total number of neighbors 1
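- Beyond the summary, confirm the received prefixes made it into the BGP table and the node's routing table; standard vtysh show commands cover both views.
kubectl -n metallb-system exec -it "$POD" -c frr -- vtysh -c "show bgp ipv4 unicast"
kubectl -n metallb-system exec -it "$POD" -c frr -- vtysh -c "show ip route bgp"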
Deploy a test service and confirm it gets an External IP
- Deploy a simple Google Hello app and Service as per option 1 (a minimal equivalent is sketched below).
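- If option 1 is not at hand, the following produces the same hello-service checked below; a sketch, with names illustrative (the well-known gcr.io/google-samples/hello-app image listens on 8080).
kubectl create ns dev --dry-run=client -o yaml | kubectl apply -f -
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  namespace: dev
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
EOF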
- Check that an External IP is assigned.
kubectl -n dev get svc hello-service -o wide
kubectl -n metallb-system get events --sort-by=.lastTimestamp | tail -n 50
kubectl -n metallb-system logs deploy/controller --tail=50
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
hello-service LoadBalancer 10.70.150.148 10.70.1.1 80:30546/TCP 17h app=hello
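- From a host that learns routes from the edge router (so 10.70.1.0/24 points at the cluster), the assigned address should answer directly; 10.70.1.1 below is the EXTERNAL-IP from the output above.
ip route get 10.70.1.1
curl -s http://10.70.1.1/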
BGP verification (from k8s side)
- See which prefixes we advertise to the peer and what we receive.
POD=$(kubectl -n metallb-system get pods -o name | grep frr-k8s | head -1 | sed 's#pod/##')
kubectl -n metallb-system exec -it "$POD" -c frr -- vtysh -c "show bgp summary"
IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.30.207, local AS number 64502 vrf-id 0
BGP table version 12
RIB entries 21, using 2016 bytes of memory
Peers 1, using 13 KiB of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt Desc
192.168.30.254 4 64500 103 93 12 0 0 01:28:17 11 1 N/A
Total number of neighbors 1
kubectl -n metallb-system exec -it "$POD" -c frr -- vtysh -c "show bgp ipv4 unicast neighbors 192.168.30.254 advertised-routes"
BGP table version is 12, local router ID is 192.168.30.207, vrf id 0
Default local pref 100, local AS 64502
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
Network Next Hop Metric LocPrf Weight Path
*> 10.70.1.0/24 0.0.0.0 0 32768 i
Total number of prefixes 1
kubectl -n metallb-system exec -it "$POD" -c frr -- vtysh -c "show bgp ipv4 unicast neighbors 192.168.30.254 routes"
kubectl -n metallb-system exec -it "$POD" -c frr -- vtysh -c "show running-config"
(Output identical to the running-config shown earlier.)
All resources in metallb-system
- Get all resources.
kubectl get all -n metallb-system
NAME READY STATUS RESTARTS AGE
pod/controller-9fc4d4f46-8lqlv 1/1 Running 0 100m
pod/frr-k8s-hvbls 6/6 Running 0 128m
pod/frr-k8s-webhook-server-895fbdff5-s26xf 1/1 Running 0 89m
pod/frr-k8s-wfxvs 6/6 Running 0 128m
pod/metallb-operator-controller-manager-6d57bf4f46-vwhq6 1/1 Running 0 130m
pod/metallb-operator-webhook-server-855f4d57bd-2jzp6 1/1 Running 0 130m
pod/speaker-5szrx 1/1 Running 0 128m
pod/speaker-7bjkk 1/1 Running 0 128m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/frr-k8s-webhook-service ClusterIP 10.70.174.85 <none> 443/TCP 128m
service/metallb-operator-webhook-service ClusterIP 10.70.137.228 <none> 443/TCP 130m
service/metallb-webhook-service ClusterIP 10.70.161.61 <none> 443/TCP 130m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/frr-k8s 2 2 2 2 2 kubernetes.io/os=linux 128m
daemonset.apps/speaker 2 2 2 2 2 kubernetes.io/os=linux 128m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 128m
deployment.apps/frr-k8s-webhook-server 1/1 1 1 128m
deployment.apps/metallb-operator-controller-manager 1/1 1 1 130m
deployment.apps/metallb-operator-webhook-server 1/1 1 1 130m
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-67cc595bfc 0 0 0 100m
replicaset.apps/controller-86466dcc8c 0 0 0 103m
replicaset.apps/controller-94c7698d5 0 0 0 109m
replicaset.apps/controller-9fc4d4f46 1 1 1 128m
replicaset.apps/frr-k8s-webhook-server-55cb5d868 0 0 0 103m
replicaset.apps/frr-k8s-webhook-server-5f8cfbd69d 0 0 0 100m
replicaset.apps/frr-k8s-webhook-server-6c98c75455 0 0 0 108m
replicaset.apps/frr-k8s-webhook-server-7b575d5dc8 0 0 0 128m
replicaset.apps/frr-k8s-webhook-server-7f46b9c779 0 0 0 89m
replicaset.apps/frr-k8s-webhook-server-895fbdff5 1 1 1 89m
replicaset.apps/metallb-operator-controller-manager-6d57bf4f46 1 1 1 130m
replicaset.apps/metallb-operator-webhook-server-855f4d57bd 1 1 1 130m
Uninstall process for the MetalLB operator and FRR-K8s
danger
Destructive; proceed with care.
- Uninstall any Helm releases (ignore errors if they weren’t installed).
helm -n metallb-system uninstall metallb || true
helm -n metallb-system uninstall metallb-operator || true
helm -n metallb-system uninstall frr-k8s || true
- Delete all MetalLB custom resources (cluster-wide).
kubectl delete metallbs.metallb.io --all -A --ignore-not-found
kubectl delete ipaddresspools.metallb.io --all -A --ignore-not-found
kubectl delete bgpadvertisements.metallb.io --all -A --ignore-not-found
kubectl delete l2advertisements.metallb.io --all -A --ignore-not-found
kubectl delete bgppeers.metallb.io --all -A --ignore-not-found
kubectl delete communities.metallb.io --all -A --ignore-not-found
kubectl delete bfdprofiles.metallb.io --all -A --ignore-not-found
kubectl delete servicebgpstatuses.metallb.io --all -A --ignore-not-found
kubectl delete servicel2statuses.metallb.io --all -A --ignore-not-found
- Delete all FRR-K8s custom resources (cluster-wide).
kubectl delete frrconfigurations.frrk8s.metallb.io --all -A --ignore-not-found
kubectl delete frrnodestates.frrk8s.metallb.io --all -A --ignore-not-found
kubectl delete bgpsessionstates.frrk8s.metallb.io --all -A --ignore-not-found
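- The per-resource deletes above can also be collapsed into one sweep over every CR of any MetalLB/FRR-K8s CRD, which keeps working if newer versions add kinds; a sketch using the same CRD group patterns.
for crd in $(kubectl get crd -o name | egrep 'metallb\.io|frrk8s\.metallb\.io' | cut -d/ -f2); do
  kubectl delete "$crd" --all -A --ignore-not-found
done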
- Remove the operator if you applied it via the upstream manifest.
kubectl delete -f https://raw.githubusercontent.com/metallb/metallb-operator/main/bin/metallb-operator.yaml --ignore-not-found=true
- Delete validating/mutating webhooks that reference MetalLB / FRR.
kubectl get validatingwebhookconfigurations | awk '/metallb|frr-k8s/{print $1}' | xargs -r kubectl delete validatingwebhookconfiguration
kubectl get mutatingwebhookconfigurations | awk '/metallb|frr-k8s/{print $1}' | xargs -r kubectl delete mutatingwebhookconfiguration
- Delete the CRDs.
kubectl get crd | egrep 'metallb\.io|frrk8s\.metallb\.io' | awk '{print $1}' | xargs -r kubectl delete crd
- Delete the namespace (cleans up any leftover workloads, services, and secrets).
kubectl delete ns metallb-system --ignore-not-found=true
- If the namespace gets stuck in "Terminating", clear its finalizers.
kubectl get ns metallb-system -o json | sed 's/"finalizers": \[[^]]*\]/"finalizers": []/' | kubectl replace --raw /api/v1/namespaces/metallb-system/finalize -f - || true
- Verify everything is gone.
kubectl get all -A | egrep -i 'metallb|frr-k8s' || echo "No MetalLB/FRR workloads left"
kubectl get crd | egrep 'metallb\.io|frrk8s\.metallb\.io' || echo "No MetalLB/FRR CRDs left"
kubectl get validatingwebhookconfigurations | egrep -i 'metallb|frr-k8s' || echo "No related validating webhooks"
kubectl get mutatingwebhookconfigurations | egrep -i 'metallb|frr-k8s' || echo "No related mutating webhooks"
kubectl get ns metallb-system || echo "Namespace metallb-system removed"