This document shows how to expose multi-network Pods to internal or external
clients by creating Google Cloud external passthrough Network Load Balancer and internal passthrough Network Load Balancer
resources in GKE. It describes the required configuration,
capabilities, and limitations of multi-network LoadBalancer Services.
If you connect workloads to multiple VPC networks, use a
Kubernetes Service of type LoadBalancer to route traffic to Pods on a specific
secondary network. When you create the Service, GKE creates a
passthrough Network Load Balancer to manage this traffic.
For more information about multi-networking in GKE, see About multi-network support for Pods.
How multi-network LoadBalancer Services work
To expose a multi-network workload, create a Service of type: LoadBalancer.
The Service must include a special selector that targets Pods based on the
network of their secondary interface. Add an annotation to specify whether to
create an internal or external load balancer.
The networking.gke.io/network label in the selector filters endpoints by
network. This label ensures that the load balancer only sends traffic to the Pod
interfaces connected to the specified network.
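For example, a Service selector that targets only Pod interfaces attached to a network named dmz (an illustrative name) combines the application label with the network label:

```yaml
# Illustrative selector fragment: "web-app" and "dmz" are example names.
selector:
  app: web-app
  networking.gke.io/network: dmz
```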
Limitations
Multi-network load balancers have the following limitations:
- Services that use externalTrafficPolicy: Cluster aren't supported.
- Services that target hostNetwork Pods aren't supported.
- IPv6 and dual-stack networking aren't supported.
- You can't change the network of an existing Service.
- Only Layer 3 networks are supported.
- Load balancers based on target pools or instance group backends aren't supported.
- ClusterIP and NodePort Services aren't supported on secondary (non-default) networks.
Before you begin
Before you begin, complete the following tasks:
- Follow the steps in Set up multi-network support for Pods to prepare your VPC networks and create a GKE cluster with an additional network.
- Ensure that your cluster has subsetting for Layer 4 Internal Load Balancers enabled. To enable this feature, use the --enable-l4-ilb-subsetting flag when you create or update the cluster.
- Ensure that your cluster is running GKE version 1.37 or later.
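For example, you can enable subsetting on an existing cluster with the following command. This is a sketch: CLUSTER_NAME and REGION are placeholders for your own values.

```shell
# Enable subsetting for Layer 4 internal load balancers on an existing
# cluster. Replace CLUSTER_NAME and REGION with your own values.
gcloud container clusters update CLUSTER_NAME \
    --region=REGION \
    --enable-l4-ilb-subsetting
```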
Deploy multi-network Pods
To attach Pods to an additional network, create a Deployment with the
networking.gke.io/interfaces annotation. This annotation specifies the
networks and interfaces for the Pods.
Save the following manifest as web-app-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
      annotations:
        networking.gke.io/default-interface: 'eth1'
        networking.gke.io/interfaces: '[
          {"interfaceName":"eth0","network":"default"},
          {"interfaceName":"eth1","network":"dmz"}
        ]'
    spec:
      containers:
      - name: whereami
        image: us-docker.pkg.dev/google-samples/containers/gke/whereami:v1
        ports:
        - containerPort: 8080

This manifest creates a Deployment named web-app with three Pods. The Pods have two interfaces: eth0 connected to the default network and eth1 connected to the dmz network. The networking.gke.io/default-interface annotation sets eth1 as the default interface for the Pods.

Apply the manifest to your cluster:
kubectl apply -f web-app-deployment.yaml
If you use a non-default interface for your Service, you must configure routing
within the Pod. To configure routing, add an initContainer to your Pod
specification that has the NET_ADMIN capability.
The following example shows an initContainer that adds a default route for the
eth1 interface:
initContainers:
- name: init-routes-busybox
  image: busybox
  command: ['sh', '-c', 'ip route add default dev eth1 table 200 && ip rule add from 172.16.1.0/24 table 200']
  securityContext:
    capabilities:
      add: ["NET_ADMIN"]
In the initContainer command, replace 172.16.1.0/24 with the secondary IP
address range of your Pod network.
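To look up that range, you can describe the subnet that backs the Pod network. The following sketch assumes the dmz network is backed by a subnet named dmz-subnet in us-central1; substitute your own subnet name and region.

```shell
# Print the secondary IP ranges of the subnet that backs the Pod network.
# "dmz-subnet" and the region are placeholders for your own values.
gcloud compute networks subnets describe dmz-subnet \
    --region=us-central1 \
    --format="value(secondaryIpRanges)"
```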
Deploy an internal LoadBalancer Service
To expose the web-app Deployment on the dmz network, create an internal
LoadBalancer Service.
Save the following manifest as internal-lb-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: web-app-internal-lb
  namespace: default
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    networking.gke.io/network: dmz
    app: web-app
  type: LoadBalancer

This manifest creates a Service with the following properties:

- networking.gke.io/load-balancer-type: "Internal": specifies an internal passthrough Network Load Balancer.
- selector: selects Pods with the label app: web-app that are connected to the dmz network.
Apply the manifest to your cluster:
kubectl apply -f internal-lb-service.yaml
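After you apply the manifest, GKE provisions the load balancer asynchronously. One way to retrieve the assigned address, sketched here for the web-app-internal-lb Service, is with a jsonpath query:

```shell
# Print the IP address that GKE assigned to the internal load balancer.
# The value is empty until provisioning completes.
kubectl get service web-app-internal-lb \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```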
Deploy an external LoadBalancer Service
To expose the web-app Deployment to external clients, create an external
LoadBalancer Service.
Save the following manifest as external-lb-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: web-app-external-lb
  namespace: default
  annotations:
    cloud.google.com/l4-rbs: "enabled"
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    networking.gke.io/network: dmz
    app: web-app
  type: LoadBalancer

This manifest creates a Service with the following properties:

- cloud.google.com/l4-rbs: "enabled": specifies a backend service-based external passthrough Network Load Balancer.
- selector: selects Pods with the label app: web-app that are connected to the dmz network.
Apply the manifest to your cluster:
kubectl apply -f external-lb-service.yaml
Verify the Services
After you deploy the Services, verify that the load balancers are created and configured correctly.
Check the status of the Services:
kubectl get services

The output is similar to the following:

NAME                  TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
web-app-external-lb   LoadBalancer   10.8.47.77    35.239.57.231   80:31550/TCP   5m
web-app-internal-lb   LoadBalancer   10.8.43.251   172.16.0.43     80:32628/TCP   6m
kubernetes            ClusterIP      10.8.32.1     <none>          443/TCP        43h

The EXTERNAL-IP address for the internal load balancer belongs to the dmz network.

List the forwarding rules in your project:

gcloud compute forwarding-rules list

The output is similar to the following:

NAME                                                    REGION       IP_ADDRESS     IP_PROTOCOL  TARGET
af901673cc0f24907a6aa8c3ce4afc21                        us-central1  35.239.57.231  TCP          us-central1/backendServices/k8s2-xhvzqabw-default-web-app-external-lb-u4xbs4ot
k8s2-tcp-xhvzqabw-default-web-app-internal-lb-vp1x1d6a  us-central1  172.16.0.43    TCP          us-central1/backendServices/k8s2-xhvzqabw-default-web-app-internal-lb-vp1x1d6a

Describe the forwarding rule for the internal load balancer to verify that it is attached to the correct network:

gcloud compute forwarding-rules describe k8s2-tcp-xhvzqabw-default-web-app-internal-lb-vp1x1d6a --region=REGION

Replace REGION with the region of your cluster.

The output is similar to the following. Verify that the network and subnetwork fields match the details of the dmz network.

IPAddress: 172.16.0.43
IPProtocol: TCP
...
loadBalancingScheme: INTERNAL
name: k8s2-tcp-xhvzqabw-default-web-app-internal-lb-vp1x1d6a
network: https://www.googleapis.com/compute/v1/projects/projectId/global/networks/dmz-vpc
...
subnetwork: https://www.googleapis.com/compute/v1/projects/projectId/regions/us-central1/subnetworks/dmz-subnet
Test the load balancers
To test the external load balancer, send a request to its external IP address:
curl EXTERNAL_LB_IP:80

Replace EXTERNAL_LB_IP with the external IP address of the web-app-external-lb Service.

To test the internal load balancer, send a request from a host in the same VPC network as the load balancer:

curl INTERNAL_LB_IP:80

Replace INTERNAL_LB_IP with the IP address of the web-app-internal-lb Service.
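If you don't already have a host in that VPC network, one option is to create a temporary client VM and run curl from it. The following is a sketch: the network name dmz-vpc, the subnet dmz-subnet, and the zone are placeholders for your own configuration.

```shell
# Create a temporary client VM in the same VPC network as the internal
# load balancer. Replace the network, subnet, and zone with your values.
gcloud compute instances create lb-test-client \
    --network=dmz-vpc \
    --subnet=dmz-subnet \
    --zone=us-central1-a

# Send a request to the internal load balancer from the client VM.
gcloud compute ssh lb-test-client \
    --zone=us-central1-a \
    --command="curl -s INTERNAL_LB_IP:80"
```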
Troubleshooting
This section describes how to troubleshoot issues with multi-network load balancers.
Load balancer creation fails
If load balancer creation fails, check the Service events for error messages:
kubectl describe service SERVICE_NAME
Replace SERVICE_NAME with the name of your Service.
An error message such as network some-other-network does not exist indicates
that the network specified in the Service selector isn't defined in the cluster.
Verify that the network exists:
kubectl get networks
If the network exists, verify that the Network object correctly references a
valid GKENetworkParamSet resource. To check for configuration errors, inspect
the Network resource status:
kubectl get networks NETWORK_NAME -o yaml
Replace NETWORK_NAME with the name of your network.
In a valid configuration, both the ParamsReady and Ready conditions are
True. If ParamsReady isn't True, make sure that the parametersRef in the
Network specification correctly matches the name, kind, and group of an
existing GKENetworkParamSet resource.
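As an illustration, a Network and GKENetworkParamSet pair with a matching parametersRef might look like the following. The resource, VPC, subnet, and range names are examples only, not values from this page.

```yaml
# Example Network that references a GKENetworkParamSet by name, kind,
# and group. All names here are illustrative.
apiVersion: networking.gke.io/v1
kind: Network
metadata:
  name: dmz
spec:
  type: L3
  parametersRef:
    group: networking.gke.io
    kind: GKENetworkParamSet
    name: dmz-params
---
apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
metadata:
  name: dmz-params
spec:
  vpc: dmz-vpc
  vpcSubnet: dmz-subnet
  podIPv4Ranges:
    rangeNames:
    - dmz-pod-range
```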
If the Network resource is correct but still isn't ready, check the status of
the referenced GKENetworkParamSet for errors, such as a missing subnet:
kubectl get gkenetworkparamsets GNP_NAME -o yaml
Replace GNP_NAME with the name of your GKENetworkParamSet.
Load balancer has no backends
If the load balancer is provisioned but has no healthy backends, do the following:
- Verify that a node pool exists with network interfaces in the network that the Service uses.
- Verify that the Pods selected by the Service are running.
Check the endpoints for the Service:
kubectl describe endpointslice -l kubernetes.io/service-name=SERVICE_NAME

The multinet-endpointslice-controller.gke.io controller creates the multi-network endpoints. The Pod IP addresses listed in the EndpointSlice belong to the network that the Service uses. If the EndpointSlice has no endpoints, verify that the Service selector labels match running Pods and that the network selector matches the network of the Pods.
What's next
- Learn about multi-network support for Pods.
- Learn about multi-network Services.