This document shows you two sample configurations for setting up a cross-region internal Application Load Balancer in a Shared VPC environment with Cloud Storage buckets:
- The first example creates all of the load balancer components and backends in one service project.
- The second example creates the load balancer's frontend components and URL map in one service project, while the load balancer's backend bucket and Cloud Storage buckets are created in a different service project.
Both examples require the same initial configuration to grant required roles and set up a Shared VPC before you can start creating load balancers.
In addition to the example configurations in this document, you can also set up a Shared VPC deployment where the load balancer's frontend and URL map are created in the host project, and the backend buckets, along with the Cloud Storage buckets, are created in a service project. For more information about other valid Shared VPC architectures, see Shared VPC architectures.
If you don't want to use a Shared VPC network, see Set up a cross-region internal Application Load Balancer with Cloud Storage buckets.
Before you begin
Make sure that your setup meets the following prerequisites.
Create Google Cloud projects
Create three Google Cloud projects: one host project and two service projects.
Required roles
To get the permissions that you need to set up a cross-region internal Application Load Balancer in a Shared VPC environment with Cloud Storage buckets, ask your administrator to grant you the following IAM roles:
- To set up Shared VPC: Compute Shared VPC Admin (`roles/compute.xpnAdmin`) on the host project
- To provide a service project administrator with access to use the Shared VPC network: Compute Network User (`roles/compute.networkUser`) on the host project
- To create Cloud Storage buckets: Storage Object Admin (`roles/storage.objectAdmin`) on the service project
- To create the load balancing resources: Compute Network Admin (`roles/compute.networkAdmin`) on the service project
- To create Compute Engine instances: Compute Instance Admin (`roles/compute.instanceAdmin.v1`) on the service project
- To create and modify Certificate Manager SSL certificates: Certificate Manager Owner (`roles/certificatemanager.owner`) on the service project
- To reference backend buckets in other service projects: Compute Load Balancer Services User (`roles/compute.loadBalancerServiceUser`) on the service project
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
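As an illustration of how an administrator might grant one of these roles from the command line, the following sketch uses the `gcloud projects add-iam-policy-binding` command; `USER_EMAIL` is a hypothetical placeholder for the principal's email address:

```shell
# Illustrative sketch: grant the Compute Network User role on the host project.
# USER_EMAIL is a placeholder, not a value defined in this document.
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.networkUser"
```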
Set up a Shared VPC environment
Complete the following steps in the host project to set up a Shared VPC environment:
- Configure the subnets for the load balancer's forwarding rules.
- Configure the proxy-only subnets.
- Configure a firewall rule.
- Set up a Shared VPC in the host project.
The steps in this section don't need to be performed every time you want to create a new load balancer. However, you must ensure that you have access to the resources described here before you proceed to creating the load balancer.
The host project uses the following VPC network, regions, and subnets:

- Network. The network is a custom mode VPC network named `lb-network`.
- Subnets for the load balancer. A subnet named `subnet-us` in the `us-east1` region uses `10.1.2.0/24` for its primary IP range. A subnet named `subnet-asia` in the `asia-east1` region uses `10.1.3.0/24` for its primary IP range.
- Subnets for Envoy proxies. A subnet named `proxy-only-subnet-us` in the `us-east1` region uses `10.129.0.0/23` for its primary IP range. A subnet named `proxy-only-subnet-asia` in the `asia-east1` region uses `10.130.0.0/23` for its primary IP range.
Configure the subnets for the load balancer's forwarding rules
Console
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For Name, enter `lb-network`.

In the Subnets section, for Subnet creation mode, select Custom.

In the New subnet section, enter the following information:
- Name: `subnet-us`
- Region: `us-east1`
- IP address range: `10.1.2.0/24`

Click Done.

Click Add subnet.

Create another subnet for the load balancer's forwarding rule in a different region. In the New subnet section, enter the following information:
- Name: `subnet-asia`
- Region: `asia-east1`
- IP address range: `10.1.3.0/24`

Click Done.

Click Create.
gcloud
Create a custom VPC network named `lb-network` with the `gcloud compute networks create` command.

```shell
gcloud compute networks create lb-network \
    --subnet-mode=custom \
    --project=HOST_PROJECT_ID
```

Create a subnet named `subnet-us` in the `lb-network` VPC network in the `us-east1` region with the `gcloud compute networks subnets create` command.

```shell
gcloud compute networks subnets create subnet-us \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-east1 \
    --project=HOST_PROJECT_ID
```

Create a subnet named `subnet-asia` in the `lb-network` VPC network in the `asia-east1` region with the `gcloud compute networks subnets create` command.

```shell
gcloud compute networks subnets create subnet-asia \
    --network=lb-network \
    --range=10.1.3.0/24 \
    --region=asia-east1 \
    --project=HOST_PROJECT_ID
```

Replace `HOST_PROJECT_ID` with the Google Cloud project ID assigned to the project that is enabled as the host project in the Shared VPC environment.
Configure the proxy-only subnets
A proxy-only subnet provides a set of IP addresses that Google Cloud uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based regional load balancers in the
same region as the VPC network. There can only be one active
proxy-only subnet for a given purpose, per region, per network. In this example,
we create two proxy-only subnets—one in the us-east1 region,
and the other in the asia-east1 region.
Console
In the Google Cloud console, go to the VPC networks page.
Click the name of the VPC network that you created.
On the Subnets tab, click Add subnet.

Enter the following information:
- For Name, enter `proxy-only-subnet-us`.
- For Region, enter `us-east1`.
- For Purpose, select Cross-region Managed Proxy.
- For IP address range, enter `10.129.0.0/23`.

Click Add.

Create another proxy-only subnet in the `asia-east1` region. On the Subnets tab, click Add subnet.

Enter the following information:
- For Name, enter `proxy-only-subnet-asia`.
- For Region, enter `asia-east1`.
- For Purpose, select Cross-region Managed Proxy.
- For IP address range, enter `10.130.0.0/23`.

Click Add.
gcloud
Create a proxy-only subnet in the `us-east1` region with the `gcloud compute networks subnets create` command. In this example, the proxy-only subnet is named `proxy-only-subnet-us`.

```shell
gcloud compute networks subnets create proxy-only-subnet-us \
    --purpose=GLOBAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-east1 \
    --network=lb-network \
    --range=10.129.0.0/23 \
    --project=HOST_PROJECT_ID
```

Create a proxy-only subnet in the `asia-east1` region with the `gcloud compute networks subnets create` command. In this example, the proxy-only subnet is named `proxy-only-subnet-asia`.

```shell
gcloud compute networks subnets create proxy-only-subnet-asia \
    --purpose=GLOBAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=asia-east1 \
    --network=lb-network \
    --range=10.130.0.0/23 \
    --project=HOST_PROJECT_ID
```

Replace `HOST_PROJECT_ID` with the Google Cloud project ID assigned to the host project.
Configure a firewall rule
This example uses an ingress firewall rule, named `fw-allow-ssh`, that allows SSH connections on port 22 to the client VM.
Console
In the Google Cloud console, go to the Firewall policies page.
Click Create firewall rule to create the rule to allow incoming SSH connections on the client VM:
- Name: `fw-allow-ssh`
- Network: `lb-network`
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: `allow-ssh`
- Source filter: IPv4 ranges
- Source IPv4 ranges: `0.0.0.0/0`
- Protocols and ports:
  - Choose Specified protocols and ports.
  - Select the TCP checkbox, and then enter `22` for the port number.

Click Create.
gcloud
Create a firewall rule to allow SSH connectivity to VMs with the network tag `allow-ssh`. When you omit `--source-ranges`, Google Cloud interprets the rule to mean any source. In this example, the firewall rule is named `fw-allow-ssh`.

```shell
gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22 \
    --project=HOST_PROJECT_ID
```

Replace `HOST_PROJECT_ID` with the Google Cloud project ID assigned to the host project.
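Because the rule allows SSH from any source, you might prefer to narrow it outside of test environments. As a hedged variant, the following sketch limits SSH to Identity-Aware Proxy's TCP forwarding range (`35.235.240.0/20`), which is useful if you connect through IAP instead of directly; the rule name `fw-allow-ssh-iap` is a hypothetical example:

```shell
# Variant (sketch): allow SSH only from IAP's TCP forwarding range.
gcloud compute firewall-rules create fw-allow-ssh-iap \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --source-ranges=35.235.240.0/20 \
    --rules=tcp:22 \
    --project=HOST_PROJECT_ID
```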
Set up a Shared VPC in the host project
You can enable a Shared VPC host project, share subnets of the host project, and attach service projects to the host project so that the service projects can use the Shared VPC network. To set up Shared VPC in the host project, see the following pages:
After completing the preceding steps, you can pursue either of the following setups:
- Configure a load balancer in the service project
- Configure a load balancer with a cross-project configuration
Configure a load balancer in the service project
This example creates a cross-region internal Application Load Balancer where all the load balancing components (forwarding rule, target proxy, URL map, and backend bucket) and Cloud Storage buckets are created in the service project.
The load balancer's networking resources, such as the VPC subnet, proxy-only subnet, and firewall rule, are created in the host project.
This section shows you how to set up the load balancer and backends.
The example setups on this page explicitly configure a reserved IP address for the load balancer's forwarding rule, rather than allowing an ephemeral IP address to be allocated. As a best practice, we recommend reserving IP addresses for forwarding rules.
Configure your Cloud Storage buckets
The process for configuring your Cloud Storage buckets is as follows:
- Create the Cloud Storage buckets.
- Copy content to the Cloud Storage buckets.
- Make the Cloud Storage buckets publicly accessible.
Create the Cloud Storage buckets
In this example, you create two Cloud Storage buckets, one in the
us-east1 region and another in the asia-east1 region. For production
deployments, we recommend that you choose a multi-region
bucket, which automatically replicates
objects across multiple Google Cloud regions. This can improve the
availability of your content and improve failure tolerance across your
application.
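If you opt for a multi-region bucket, the bucket creation command differs only in its `--location` value. A minimal sketch, assuming the `us` multi-region and the same bucket settings used in this example:

```shell
# Sketch: create a multi-region bucket in the "us" location instead of us-east1.
gcloud storage buckets create gs://BUCKET_NAME \
    --default-storage-class=standard \
    --location=us \
    --uniform-bucket-level-access \
    --project=SERVICE_PROJECT_ID
```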
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
Click Create.
In the Get started section, enter a globally unique name that follows the naming guidelines.
Click Choose where to store your data.
Set Location type to Region.
From the list of regions, select us-east1.
Click Create.
Click Buckets to return to the Cloud Storage Buckets page. Use these instructions to create a second bucket, but set the Location to asia-east1.
gcloud
Create the first bucket in the `us-east1` region with the `gcloud storage buckets create` command.

```shell
gcloud storage buckets create gs://BUCKET1_NAME \
    --default-storage-class=standard \
    --location=us-east1 \
    --uniform-bucket-level-access \
    --project=SERVICE_PROJECT_ID
```

Create the second bucket in the `asia-east1` region with the `gcloud storage buckets create` command.

```shell
gcloud storage buckets create gs://BUCKET2_NAME \
    --default-storage-class=standard \
    --location=asia-east1 \
    --uniform-bucket-level-access \
    --project=SERVICE_PROJECT_ID
```

Replace the following:
- `BUCKET1_NAME` and `BUCKET2_NAME`: Cloud Storage bucket names
- `SERVICE_PROJECT_ID`: the Google Cloud project ID assigned to the service project
Copy content to the Cloud Storage buckets
To populate the Cloud Storage buckets, copy a graphic file from a public Cloud Storage bucket to your own Cloud Storage buckets.
Run the following commands in Cloud Shell, replacing the bucket name variables with your unique Cloud Storage bucket names:
```shell
gcloud storage cp gs://gcp-external-http-lb-with-bucket/three-cats.jpg gs://BUCKET1_NAME/love-to-purr/
gcloud storage cp gs://gcp-external-http-lb-with-bucket/two-dogs.jpg gs://BUCKET2_NAME/love-to-fetch/
```

Replace `BUCKET1_NAME` and `BUCKET2_NAME` with your Cloud Storage bucket names.
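To confirm that the objects landed in the folders that the URL map expects, you can list the bucket contents; this sketch uses `gcloud storage ls` with the `--recursive` flag:

```shell
# List the copied objects; expect .../love-to-purr/three-cats.jpg in the first
# bucket and .../love-to-fetch/two-dogs.jpg in the second.
gcloud storage ls --recursive gs://BUCKET1_NAME
gcloud storage ls --recursive gs://BUCKET2_NAME
```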
Make the Cloud Storage buckets publicly accessible
To make all objects in a bucket readable to everyone on the public internet,
grant the principal allUsers the Storage Object Viewer role
(roles/storage.objectViewer).
Console
To grant all users access to view objects in your buckets, repeat the following procedure for each bucket:
- In the Google Cloud console, go to the Cloud Storage Buckets page.
In the list of buckets, click the name of the bucket that you want to make public.
Select the Permissions tab.
In the Permissions section, click the Grant access button. The Grant access dialog appears.
In the New principals field, enter `allUsers`.

In the Select a role field, enter `Storage Object Viewer` in the filter box, and then select Storage Object Viewer from the filtered results.

Click Save.

Click Allow public access.
gcloud
To grant all users access to view objects in your buckets, run the `gcloud storage buckets add-iam-policy-binding` command.

```shell
gcloud storage buckets add-iam-policy-binding gs://BUCKET1_NAME \
    --member=allUsers \
    --role=roles/storage.objectViewer
gcloud storage buckets add-iam-policy-binding gs://BUCKET2_NAME \
    --member=allUsers \
    --role=roles/storage.objectViewer
```

Replace `BUCKET1_NAME` and `BUCKET2_NAME` with your Cloud Storage bucket names.
Reserve the load balancer's IP address
Reserve a static internal IP address for each of the following:
- Forwarding rule in the `us-east1` region
- Forwarding rule in the `asia-east1` region
Console
In the Google Cloud console, go to the IP addresses page.
Click Reserve internal.
For Name, enter a name for the new address.
For IP version, select IPv4.
Click Reserve to reserve the IP address.
Follow these steps again to reserve an IP address in the `asia-east1` region.
gcloud
To reserve a static internal IP address in the `us-east1` region, use the `gcloud compute addresses create` command.

```shell
gcloud compute addresses create ADDRESS1_NAME \
    --region=us-east1 \
    --subnet=projects/HOST_PROJECT_ID/regions/us-east1/subnetworks/subnet-us \
    --project=SERVICE_PROJECT_ID
```

Replace the following:
- `ADDRESS1_NAME`: the name that you want to assign to this IP address
- `HOST_PROJECT_ID`: the Google Cloud project ID assigned to the host project
- `SERVICE_PROJECT_ID`: the Google Cloud project ID assigned to the service project

To reserve a static internal IP address in the `asia-east1` region, use the `gcloud compute addresses create` command.

```shell
gcloud compute addresses create ADDRESS2_NAME \
    --region=asia-east1 \
    --subnet=projects/HOST_PROJECT_ID/regions/asia-east1/subnetworks/subnet-asia \
    --project=SERVICE_PROJECT_ID
```

Replace the following:
- `ADDRESS2_NAME`: the name that you want to assign to this IP address
- `HOST_PROJECT_ID`: the Google Cloud project ID assigned to the host project
- `SERVICE_PROJECT_ID`: the Google Cloud project ID assigned to the service project

Use the `gcloud compute addresses describe` command to view the result:

```shell
gcloud compute addresses describe ADDRESS1_NAME \
    --region=us-east1 \
    --project=SERVICE_PROJECT_ID
gcloud compute addresses describe ADDRESS2_NAME \
    --region=asia-east1 \
    --project=SERVICE_PROJECT_ID
```

Replace the following:
- `ADDRESS1_NAME` and `ADDRESS2_NAME`: the names that you assigned to the IP addresses
- `SERVICE_PROJECT_ID`: the Google Cloud project ID assigned to the service project

The IP address returned is referred to as `RESERVED_IP_ADDRESS` in the following sections.
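If you're scripting the setup, you can capture just the address field rather than reading the full `describe` output. A sketch using the `--format` flag:

```shell
# Sketch: store the reserved address in a shell variable for later steps.
RESERVED_IP_ADDRESS=$(gcloud compute addresses describe ADDRESS1_NAME \
    --region=us-east1 \
    --format="get(address)" \
    --project=SERVICE_PROJECT_ID)
echo "$RESERVED_IP_ADDRESS"
```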
Set up an SSL certificate resource
For a cross-region internal Application Load Balancer that uses HTTPS as the request-and-response protocol, create an SSL certificate resource using Certificate Manager as described in one of the following documents:
- Deploy a cross-region Google-managed certificate issued by your CA Service instance
- Deploy a cross-region Google-managed certificate with DNS authorization
- Deploy a cross-region self-managed certificate
After you create the certificate, you can attach the certificate to the HTTPS target proxy.
We recommend using a Google-managed certificate.
Configure the load balancer with backend buckets
This section shows you how to create the following resources for a cross-region internal Application Load Balancer:
- Two backend buckets. The backend buckets serve as a wrapper to the Cloud Storage buckets that you created earlier.
- URL map
- Target proxy
- Two global forwarding rules with regional IP addresses. The forwarding rules are assigned IP addresses from the subnets created for the load balancer's forwarding rules. If you try to assign an IP address to the forwarding rule from the proxy-only subnet, the forwarding rule creation fails.
In this example, you can use HTTP or HTTPS as the request-and-response protocol between the client and the load balancer. To create an HTTPS load balancer, you must add an SSL certificate resource to the load balancer's frontend.
To create these load balancing components using the gcloud CLI, follow these steps:
Create two backend buckets, one for each Cloud Storage bucket, with the `gcloud compute backend-buckets create` command. The backend buckets have a load balancing scheme of `INTERNAL_MANAGED`.

In this example, the backend buckets are named `backend-bucket-cats` and `backend-bucket-dogs`, indicative of the content in the Cloud Storage buckets.

```shell
gcloud compute backend-buckets create backend-bucket-cats \
    --gcs-bucket-name=BUCKET1_NAME \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --project=SERVICE_PROJECT_ID
```

```shell
gcloud compute backend-buckets create backend-bucket-dogs \
    --gcs-bucket-name=BUCKET2_NAME \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --project=SERVICE_PROJECT_ID
```

Replace the following:
- `BUCKET1_NAME` and `BUCKET2_NAME`: Cloud Storage bucket names
- `SERVICE_PROJECT_ID`: the Google Cloud project ID assigned to the service project
Create a URL map to route incoming requests to the backend bucket with the `gcloud compute url-maps create` command. In this example, the URL map is named `lb-map`.

```shell
gcloud compute url-maps create lb-map \
    --default-backend-bucket=backend-bucket-cats \
    --global \
    --project=SERVICE_PROJECT_ID
```

Replace `SERVICE_PROJECT_ID` with the Google Cloud project ID assigned to the service project.

Configure the host and path rules of the URL map with the `gcloud compute url-maps add-path-matcher` command.

In this example, the default backend bucket is `backend-bucket-cats`, which handles all the paths that exist within it. However, any request targeting `http://FORWARDING_RULE_IP_ADDRESS/love-to-fetch/two-dogs.jpg` uses the `backend-bucket-dogs` backend. For example, if the `/love-to-fetch/` folder also exists within your default backend (`backend-bucket-cats`), the load balancer prioritizes the `backend-bucket-dogs` backend because there is a specific path rule for `/love-to-fetch/*`.

```shell
gcloud compute url-maps add-path-matcher lb-map \
    --path-matcher-name=path-matcher-pets \
    --new-hosts=* \
    --backend-bucket-path-rules="/love-to-fetch/*=backend-bucket-dogs" \
    --default-backend-bucket=backend-bucket-cats \
    --project=SERVICE_PROJECT_ID
```

Replace `SERVICE_PROJECT_ID` with the Google Cloud project ID assigned to the service project.

Create a target proxy with the `gcloud compute target-http-proxies create` command.

For HTTP traffic, create a target HTTP proxy, named `http-proxy`, to route requests to the URL map:

```shell
gcloud compute target-http-proxies create http-proxy \
    --url-map=lb-map \
    --global \
    --project=SERVICE_PROJECT_ID
```

Replace `SERVICE_PROJECT_ID` with the Google Cloud project ID assigned to the service project.

For HTTPS traffic, create a target HTTPS proxy, named `https-proxy`, to route requests to the URL map. The proxy is the part of the load balancer that holds the SSL certificate for an HTTPS load balancer. After you create the certificate, you can attach it to the HTTPS target proxy.

```shell
gcloud compute target-https-proxies create https-proxy \
    --url-map=lb-map \
    --certificate-manager-certificates=CERTIFICATE_NAME \
    --global \
    --project=SERVICE_PROJECT_ID
```

Replace the following:
- `CERTIFICATE_NAME`: the name of the SSL certificate that you created using Certificate Manager
- `SERVICE_PROJECT_ID`: the Google Cloud project ID assigned to the service project
Create two global forwarding rules, one with an IP address in the `us-east1` region and another with an IP address in the `asia-east1` region, with the `gcloud compute forwarding-rules create` command.

For HTTP traffic, create the global forwarding rules (`http-fw-rule-1` and `http-fw-rule-2`) to route incoming requests to the HTTP target proxy:

```shell
gcloud compute forwarding-rules create http-fw-rule-1 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-east1/subnetworks/subnet-us \
    --subnet-region=us-east1 \
    --address=RESERVED_IP_ADDRESS \
    --ports=80 \
    --target-http-proxy=http-proxy \
    --global-target-http-proxy \
    --global \
    --project=SERVICE_PROJECT_ID
```

```shell
gcloud compute forwarding-rules create http-fw-rule-2 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/asia-east1/subnetworks/subnet-asia \
    --subnet-region=asia-east1 \
    --address=RESERVED_IP_ADDRESS \
    --ports=80 \
    --target-http-proxy=http-proxy \
    --global-target-http-proxy \
    --global \
    --project=SERVICE_PROJECT_ID
```

Replace the following:
- `HOST_PROJECT_ID`: the Google Cloud project ID assigned to the host project
- `RESERVED_IP_ADDRESS`: the IP address that you reserved
- `SERVICE_PROJECT_ID`: the Google Cloud project ID assigned to the service project

For HTTPS traffic, create the global forwarding rules (`https-fw-rule-1` and `https-fw-rule-2`) to route incoming requests to the HTTPS target proxy:

```shell
gcloud compute forwarding-rules create https-fw-rule-1 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-east1/subnetworks/subnet-us \
    --subnet-region=us-east1 \
    --address=RESERVED_IP_ADDRESS \
    --ports=443 \
    --target-https-proxy=https-proxy \
    --global-target-https-proxy \
    --global \
    --project=SERVICE_PROJECT_ID
```

```shell
gcloud compute forwarding-rules create https-fw-rule-2 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/asia-east1/subnetworks/subnet-asia \
    --subnet-region=asia-east1 \
    --address=RESERVED_IP_ADDRESS \
    --ports=443 \
    --target-https-proxy=https-proxy \
    --global-target-https-proxy \
    --global \
    --project=SERVICE_PROJECT_ID
```

Replace the following:
- `HOST_PROJECT_ID`: the Google Cloud project ID assigned to the host project
- `RESERVED_IP_ADDRESS`: the IP address that you reserved
- `SERVICE_PROJECT_ID`: the Google Cloud project ID assigned to the service project
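The URL map's precedence rule (a specific path rule wins over the default backend bucket) can be mimicked locally with a plain shell function. This is only an illustration of the matching logic, not part of the deployment:

```shell
# Illustration only: mimic the URL map's routing decision for a request path.
route() {
  case "$1" in
    /love-to-fetch/*) echo "backend-bucket-dogs" ;;  # specific path rule wins
    *) echo "backend-bucket-cats" ;;                 # default backend bucket
  esac
}

route /love-to-fetch/two-dogs.jpg   # prints backend-bucket-dogs
route /love-to-purr/three-cats.jpg  # prints backend-bucket-cats
```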
Send an HTTP request to the load balancer
Send a request from an internal client VM to the forwarding rule of the load balancer.
Get the IP address of the load balancer's forwarding rule
To get the IP address of the load balancer's forwarding rule, complete the following steps:
Get the IP address of the load balancer's forwarding rule (`http-fw-rule-1`), which is in the `us-east1` region.

```shell
gcloud compute forwarding-rules describe http-fw-rule-1 \
    --global \
    --project=SERVICE_PROJECT_ID
```

Get the IP address of the load balancer's forwarding rule (`http-fw-rule-2`), which is in the `asia-east1` region.

```shell
gcloud compute forwarding-rules describe http-fw-rule-2 \
    --global \
    --project=SERVICE_PROJECT_ID
```

Replace `SERVICE_PROJECT_ID` with the Google Cloud project ID assigned to the service project.

Copy the returned IP address to use as `FORWARDING_RULE_IP_ADDRESS` in the subsequent steps.
Create a client VM to test connectivity
To create a client VM to test connectivity, complete the following steps:
Create a client VM, named `client-a`, in the `us-east1` region.

```shell
gcloud compute instances create client-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-east1/subnetworks/subnet-us \
    --zone=us-east1-c \
    --tags=allow-ssh \
    --project=SERVICE_PROJECT_ID
```

Replace the following:
- `HOST_PROJECT_ID`: the Google Cloud project ID assigned to the host project
- `SERVICE_PROJECT_ID`: the Google Cloud project ID assigned to the service project

Establish an SSH connection to the client VM.

```shell
gcloud compute ssh client-a \
    --zone=us-east1-c \
    --project=SERVICE_PROJECT_ID
```

Replace `SERVICE_PROJECT_ID` with the Google Cloud project ID assigned to the service project.

In this example, the cross-region internal Application Load Balancer has frontend virtual IP addresses (VIPs) in both the `us-east1` and `asia-east1` regions in the VPC network. Make an HTTP request to the VIP in either region by using curl.

```shell
curl http://FORWARDING_RULE_IP_ADDRESS/love-to-fetch/two-dogs.jpg --output two-dogs.jpg
curl http://FORWARDING_RULE_IP_ADDRESS/love-to-purr/three-cats.jpg --output three-cats.jpg
```

Replace `FORWARDING_RULE_IP_ADDRESS` with the IP address of the load balancer's forwarding rule.
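If you only want to confirm reachability without saving the files, checking the HTTP status code is enough. A sketch using standard curl flags:

```shell
# Sketch: print just the HTTP status code (200 indicates success).
curl -s -o /dev/null -w "%{http_code}\n" \
    http://FORWARDING_RULE_IP_ADDRESS/love-to-fetch/two-dogs.jpg
```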
Test high availability
To test high availability, complete the following steps:
Delete the forwarding rule (`http-fw-rule-1`) in the `us-east1` region to simulate a regional outage, and check whether the client in the `us-east1` region can still access data from the backend bucket.

```shell
gcloud compute forwarding-rules delete http-fw-rule-1 \
    --global \
    --project=SERVICE_PROJECT_ID
```

Replace `SERVICE_PROJECT_ID` with the Google Cloud project ID assigned to the service project.

Make an HTTP request to the VIP of the forwarding rule in either region by using curl.

```shell
curl http://FORWARDING_RULE_IP_ADDRESS/love-to-fetch/two-dogs.jpg --output two-dogs.jpg
curl http://FORWARDING_RULE_IP_ADDRESS/love-to-purr/three-cats.jpg --output three-cats.jpg
```

Replace `FORWARDING_RULE_IP_ADDRESS` with the IP address of the forwarding rule.

If you make an HTTP request to the VIP in the `us-east1` region, the DNS routing policies detect that this VIP isn't responding and return the next most optimal VIP to the client (in this example, `asia-east1`). This behavior helps ensure that your application stays up even during regional outages.
Configure a load balancer with a cross-project configuration
The previous example on this page shows you how to set up a Shared VPC deployment where all the load balancer components and its backends are created in the service project.
Cross-region internal Application Load Balancers also let you configure Shared VPC deployments where a URL map in one host or service project can reference backend buckets located across multiple service projects in Shared VPC environments.
You can use the steps in this section as a reference to configure any of the supported combinations listed here:
- Forwarding rule, target proxy, and URL map in the host project, and backend bucket in a service project
- Forwarding rule, target proxy, and URL map in a service project, and backend bucket in another service project
In this section, the latter configuration is outlined as an example.
Setup overview
This example configures a load balancer with its frontend and backend in two different service projects.
If you haven't already done so, you must complete all of the prerequisite steps to set up Shared VPC and configure the network, subnets, and firewall rules required for this example. For instructions, see the following sections at the start of this page:
Configure the Cloud Storage buckets and backend buckets in service project B
All the steps in this section must be performed in service project B.
To create the backend bucket, you need to do the following:
- Create the Cloud Storage buckets.
- Copy content to the Cloud Storage buckets.
- Make the Cloud Storage buckets publicly accessible.
- Create the backend buckets and point them to the Cloud Storage buckets.
Create the Cloud Storage buckets
In this example, you create two Cloud Storage buckets, one in the
us-east1 region and another in the asia-east1 region. For production
deployments, we recommend that you choose a multi-region
bucket, which automatically replicates
objects across multiple Google Cloud regions. This can improve the
availability of your content and improve failure tolerance across your
application.
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
Click Create.
In the Get started section, enter a globally unique name that follows the naming guidelines.
Click Choose where to store your data.
Set Location type to Region.
From the list of regions, select us-east1.
Click Create.
Click Buckets to return to the Cloud Storage Buckets page. Use these instructions to create a second bucket, but set the Location to asia-east1.
gcloud
Create the first bucket in the `us-east1` region with the `gcloud storage buckets create` command.

```shell
gcloud storage buckets create gs://BUCKET1_NAME \
    --default-storage-class=standard \
    --location=us-east1 \
    --uniform-bucket-level-access \
    --project=SERVICE_PROJECT_B_ID
```

Create the second bucket in the `asia-east1` region with the `gcloud storage buckets create` command.

```shell
gcloud storage buckets create gs://BUCKET2_NAME \
    --default-storage-class=standard \
    --location=asia-east1 \
    --uniform-bucket-level-access \
    --project=SERVICE_PROJECT_B_ID
```

Replace the following:
- `BUCKET1_NAME` and `BUCKET2_NAME`: Cloud Storage bucket names
- `SERVICE_PROJECT_B_ID`: the Google Cloud project ID assigned to service project B
Copy content to the Cloud Storage buckets
To populate the Cloud Storage buckets, copy a graphic file from a public Cloud Storage bucket to your own Cloud Storage buckets.
Run the following commands in Cloud Shell, replacing the bucket name variables with your unique Cloud Storage bucket names:
```shell
gcloud storage cp gs://gcp-external-http-lb-with-bucket/three-cats.jpg gs://BUCKET1_NAME/love-to-purr/
gcloud storage cp gs://gcp-external-http-lb-with-bucket/two-dogs.jpg gs://BUCKET2_NAME/love-to-fetch/
```

Replace `BUCKET1_NAME` and `BUCKET2_NAME` with your Cloud Storage bucket names.
Make the Cloud Storage buckets publicly accessible
To make all objects in a bucket readable to everyone on the public internet,
grant the principal allUsers the Storage Object Viewer role
(roles/storage.objectViewer).
Console
To grant all users access to view objects in your buckets, repeat the following procedure for each bucket:
- In the Google Cloud console, go to the Cloud Storage Buckets page.
In the list of buckets, click the name of the bucket that you want to make public.
Select the Permissions tab.
In the Permissions section, click the Grant access button. The Grant access dialog appears.
In the New principals field, enter `allUsers`.

In the Select a role field, enter `Storage Object Viewer` in the filter box, and then select Storage Object Viewer from the filtered results.

Click Save.

Click Allow public access.
gcloud
To grant all users access to view objects in your buckets, run the `gcloud storage buckets add-iam-policy-binding` command.

```shell
gcloud storage buckets add-iam-policy-binding gs://BUCKET1_NAME \
    --member=allUsers \
    --role=roles/storage.objectViewer
gcloud storage buckets add-iam-policy-binding gs://BUCKET2_NAME \
    --member=allUsers \
    --role=roles/storage.objectViewer
```

Replace `BUCKET1_NAME` and `BUCKET2_NAME` with your Cloud Storage bucket names.
Configure the load balancer with backend buckets
To create the backend buckets, follow these steps:
Create two backend buckets, one for each Cloud Storage bucket, with the `gcloud compute backend-buckets create` command. The backend buckets have a load balancing scheme of `INTERNAL_MANAGED`.

In this example, the backend buckets are named `backend-bucket-cats` and `backend-bucket-dogs`, indicative of the content in the Cloud Storage buckets.

```shell
gcloud compute backend-buckets create backend-bucket-cats \
    --gcs-bucket-name=BUCKET1_NAME \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --project=SERVICE_PROJECT_B_ID
```

```shell
gcloud compute backend-buckets create backend-bucket-dogs \
    --gcs-bucket-name=BUCKET2_NAME \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --project=SERVICE_PROJECT_B_ID
```

Replace the following:
- `BUCKET1_NAME` and `BUCKET2_NAME`: Cloud Storage bucket names
- `SERVICE_PROJECT_B_ID`: the Google Cloud project ID assigned to service project B
Configure the load balancer frontend components in service project A
All the steps in this section must be performed in service project A.
In service project A, you need to create the following frontend load balancing components:
- An SSL certificate resource that is attached to the target proxy. To create the SSL certificate, follow the steps outlined in the earlier section.
- Two IP addresses for the two forwarding rules of the load balancer. To create the IP addresses, follow the steps outlined in the earlier section.
- A URL map that references the backend buckets in service project B.
- A target proxy.
- Two forwarding rules, each with a regional IP address.
To create the frontend components, do the following:
Create a URL map to route incoming requests to the backend bucket with the gcloud compute url-maps create command.
In this example, the URL map is named lb-map.
gcloud compute url-maps create lb-map \
    --default-backend-bucket=projects/SERVICE_PROJECT_B_ID/global/backendBuckets/backend-bucket-cats \
    --global \
    --project=SERVICE_PROJECT_A_ID
Replace the following:
SERVICE_PROJECT_B_ID: the Google Cloud project ID assigned to service project B
SERVICE_PROJECT_A_ID: the Google Cloud project ID assigned to service project A
Configure the host and path rules of the URL map with the gcloud compute url-maps add-path-matcher command.
In this example, the default backend bucket is backend-bucket-cats, which handles all the paths that exist within it. However, any request targeting http://FORWARDING_RULE_IP_ADDRESS/love-to-fetch/two-dogs.jpg uses the backend-bucket-dogs backend. For example, if the /love-to-fetch/ folder also exists within your default backend (backend-bucket-cats), the load balancer prioritizes the backend-bucket-dogs backend because there is a specific path rule for /love-to-fetch/*.
gcloud compute url-maps add-path-matcher lb-map \
    --path-matcher-name=path-matcher-pets \
    --new-hosts=* \
    --backend-bucket-path-rules="/love-to-fetch/*=projects/SERVICE_PROJECT_B_ID/global/backendBuckets/backend-bucket-dogs" \
    --default-backend-bucket=projects/SERVICE_PROJECT_B_ID/global/backendBuckets/backend-bucket-cats \
    --project=SERVICE_PROJECT_A_ID
Replace the following:
SERVICE_PROJECT_B_ID: the Google Cloud project ID assigned to service project B
SERVICE_PROJECT_A_ID: the Google Cloud project ID assigned to service project A
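The precedence between the path rule and the default backend can be pictured with a toy shell function. This is only an illustration of the matching behavior described in the previous step, not how the URL map is implemented:

```shell
# Toy model of the URL map's routing decision; illustration only.
route() {
  case "$1" in
    /love-to-fetch/*) echo "backend-bucket-dogs" ;;  # specific path rule wins
    *)                echo "backend-bucket-cats" ;;  # default backend bucket
  esac
}
route /love-to-fetch/two-dogs.jpg   # prints backend-bucket-dogs
route /love-to-purr/three-cats.jpg  # prints backend-bucket-cats
```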
Create a target proxy with the gcloud compute target-http-proxies create command.
For HTTP traffic, create a target HTTP proxy, named http-proxy, to route requests to the URL map:
gcloud compute target-http-proxies create http-proxy \
    --url-map=lb-map \
    --global \
    --project=SERVICE_PROJECT_A_ID
Replace SERVICE_PROJECT_A_ID with the Google Cloud project ID assigned to service project A.
For HTTPS traffic, create a target HTTPS proxy, named https-proxy, to route requests to the URL map. The proxy is the part of the load balancer that holds the SSL certificate for an HTTPS load balancer. After you create the certificate, you can attach the certificate to the HTTPS target proxy.
gcloud compute target-https-proxies create https-proxy \
    --url-map=lb-map \
    --certificate-manager-certificates=CERTIFICATE_NAME \
    --global \
    --project=SERVICE_PROJECT_A_ID
Replace the following:
CERTIFICATE_NAME: the name of the SSL certificate that you created by using Certificate Manager
SERVICE_PROJECT_A_ID: the Google Cloud project ID assigned to service project A
Create two global forwarding rules, one with an IP address in the us-east1 region and another with an IP address in the asia-east1 region, with the gcloud compute forwarding-rules create command.
For HTTP traffic, create the global forwarding rules (http-fw-rule-1 and http-fw-rule-2) to route incoming requests to the HTTP target proxy:
gcloud compute forwarding-rules create http-fw-rule-1 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-east1/subnetworks/subnet-us \
    --subnet-region=us-east1 \
    --address=RESERVED_IP_ADDRESS \
    --ports=80 \
    --target-http-proxy=http-proxy \
    --global-target-http-proxy \
    --global \
    --project=SERVICE_PROJECT_A_ID
gcloud compute forwarding-rules create http-fw-rule-2 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/asia-east1/subnetworks/subnet-asia \
    --subnet-region=asia-east1 \
    --address=RESERVED_IP_ADDRESS \
    --ports=80 \
    --target-http-proxy=http-proxy \
    --global-target-http-proxy \
    --global \
    --project=SERVICE_PROJECT_A_ID
Replace the following:
HOST_PROJECT_ID: the Google Cloud project ID assigned to the host project
RESERVED_IP_ADDRESS: the IP address that you reserved
SERVICE_PROJECT_A_ID: the Google Cloud project ID assigned to service project A
For HTTPS traffic, create the global forwarding rules (https-fw-rule-1 and https-fw-rule-2) to route incoming requests to the HTTPS target proxy:
gcloud compute forwarding-rules create https-fw-rule-1 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-east1/subnetworks/subnet-us \
    --subnet-region=us-east1 \
    --address=RESERVED_IP_ADDRESS \
    --ports=443 \
    --target-https-proxy=https-proxy \
    --global-target-https-proxy \
    --global \
    --project=SERVICE_PROJECT_A_ID
gcloud compute forwarding-rules create https-fw-rule-2 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/asia-east1/subnetworks/subnet-asia \
    --subnet-region=asia-east1 \
    --address=RESERVED_IP_ADDRESS \
    --ports=443 \
    --target-https-proxy=https-proxy \
    --global-target-https-proxy \
    --global \
    --project=SERVICE_PROJECT_A_ID
Replace the following:
HOST_PROJECT_ID: the Google Cloud project ID assigned to the host project
RESERVED_IP_ADDRESS: the IP address that you reserved
SERVICE_PROJECT_A_ID: the Google Cloud project ID assigned to service project A
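The four forwarding-rule commands differ only in protocol, port, and region. As a sketch, they can be generated with a loop; the helper below only prints the commands for review (pipe the output to bash to run them), and all names are the placeholders used in this example.

```shell
#!/bin/bash
# Print the per-region forwarding-rule create commands for one protocol.
# To execute the commands, pipe the output to bash.
print_fw_rule_cmds() {
  local proto="$1" port="$2" proxy="$3" i=1 region subnet
  for region in us-east1 asia-east1; do
    case "$region" in
      us-east1)   subnet=subnet-us ;;
      asia-east1) subnet=subnet-asia ;;
    esac
    echo "gcloud compute forwarding-rules create ${proto}-fw-rule-${i}" \
         "--load-balancing-scheme=INTERNAL_MANAGED" \
         "--network=projects/HOST_PROJECT_ID/global/networks/lb-network" \
         "--subnet=projects/HOST_PROJECT_ID/regions/${region}/subnetworks/${subnet}" \
         "--subnet-region=${region}" \
         "--address=RESERVED_IP_ADDRESS" \
         "--ports=${port}" \
         "--target-${proto}-proxy=${proxy}" \
         "--global-target-${proto}-proxy" \
         "--global" \
         "--project=SERVICE_PROJECT_A_ID"
    i=$((i + 1))
  done
}
print_fw_rule_cmds http 80 http-proxy
print_fw_rule_cmds https 443 https-proxy
```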
Grant permissions to the Load Balancer Admin to use the backend bucket
If you want load balancers to reference backend buckets in other service
projects, the load balancer administrator must have the compute.backendBuckets.use
permission. To grant this permission, you can use the predefined
IAM role called
Compute Load Balancer Services User (roles/compute.loadBalancerServiceUser).
This role must be granted by the Service Project Admin and can be applied at
the service project level or at the individual backend bucket level.
In this example, a Service Project Admin from service project B must run one
of the following commands to grant the compute.backendBuckets.use permission
to a Load Balancer Admin from service project A. This can be done either at the
project level (for all backend buckets in the project) or per backend bucket.
Console
Project-level permissions
Use the following steps to grant permissions to all backend buckets in your project.
You require the compute.backendBuckets.setIamPolicy and the
resourcemanager.projects.setIamPolicy permissions to complete this step.
In the Google Cloud console, go to the IAM page.
Select your project.
Click Grant access.
In the New principals field, enter the principal's email address or other identifier.
In the Assign roles section, click Add roles.
In the Select roles dialog, in the Search for roles field, enter Compute Load Balancer Services User.
Select the Compute Load Balancer Services User checkbox.
Click Apply.
Optional: Add a condition to the role.
Click Save.
Resource-level permissions for individual backend buckets
Use the following steps to grant permissions to individual backend buckets in your project.
You require the compute.backendBuckets.setIamPolicy permission to
complete this step.
In the Google Cloud console, go to the Backends page.
From the backends list, select the backend bucket that you want to grant access to and click Permissions.
Click Add principal.
In the New principals field, enter the principal's email address or other identifier.
In the Select a role list, select Compute Load Balancer Services User.
Click Save.
gcloud
Project-level permissions
Use the following steps to grant permissions to all backend buckets in your project.
You require the compute.backendBuckets.setIamPolicy and the
resourcemanager.projects.setIamPolicy permissions to complete this step.
gcloud projects add-iam-policy-binding SERVICE_PROJECT_B_ID \
--member="user:LOAD_BALANCER_ADMIN" \
--role="roles/compute.loadBalancerServiceUser"
Replace the following:
SERVICE_PROJECT_B_ID: the Google Cloud project ID assigned to service project B
LOAD_BALANCER_ADMIN: the principal to add the binding for
Resource-level permissions for individual backend buckets
At the backend bucket level, Service Project Admins can use either of the
following commands to grant the Compute Load Balancer Services User role
(roles/compute.loadBalancerServiceUser):
- The gcloud projects add-iam-policy-binding command
- The gcloud compute backend-buckets add-iam-policy-binding command
Use the gcloud projects add-iam-policy-binding command to grant the
Compute Load Balancer Services User role.
You require the compute.backendBuckets.setIamPolicy
permission to complete this step.
gcloud projects add-iam-policy-binding SERVICE_PROJECT_B_ID \
--member="user:LOAD_BALANCER_ADMIN" \
--role="roles/compute.loadBalancerServiceUser" \
--condition='expression=resource.name=="projects/SERVICE_PROJECT_B_ID/global/backendBuckets/BACKEND_BUCKET_NAME",title=Shared VPC condition'
Replace the following:
SERVICE_PROJECT_B_ID: the Google Cloud project ID assigned to service project B
LOAD_BALANCER_ADMIN: the principal to add the binding for
BACKEND_BUCKET_NAME: the name of the backend bucket
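The --condition expression is easy to mistype because of its nested quoting. The following helper (a hypothetical string builder, not part of gcloud) shows the exact format of the expression for a given project and backend bucket:

```shell
# Build the IAM condition expression used above; string helper only.
bb_condition() {
  local project="$1" backend_bucket="$2"
  printf 'expression=resource.name=="projects/%s/global/backendBuckets/%s",title=Shared VPC condition\n' \
      "$project" "$backend_bucket"
}
bb_condition SERVICE_PROJECT_B_ID backend-bucket-dogs
```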
Use the gcloud compute backend-buckets add-iam-policy-binding command to grant the Compute Load Balancer Services User role.
gcloud compute backend-buckets add-iam-policy-binding BACKEND_BUCKET_NAME \
--member="user:LOAD_BALANCER_ADMIN" \
--role="roles/compute.loadBalancerServiceUser" \
--project=SERVICE_PROJECT_B_ID
Send an HTTP request to the load balancer
Send a request from an internal client VM to the forwarding rule of the load balancer.
Get the IP address of the load balancer's forwarding rule
To get the IP address of the load balancer's forwarding rule, complete the following steps:
Get the IP address of the load balancer's forwarding rule (http-fw-rule-1), which is in the us-east1 region.
gcloud compute forwarding-rules describe http-fw-rule-1 \
    --global \
    --project=SERVICE_PROJECT_A_ID
Get the IP address of the load balancer's forwarding rule (http-fw-rule-2), which is in the asia-east1 region.
gcloud compute forwarding-rules describe http-fw-rule-2 \
    --global \
    --project=SERVICE_PROJECT_A_ID
Replace SERVICE_PROJECT_A_ID with the Google Cloud project ID assigned to service project A.
Copy the returned IP address to use as FORWARDING_RULE_IP_ADDRESS in the subsequent steps.
Create a client VM to test connectivity
To create a client VM to test connectivity, complete the following steps:
Create a client VM, named client-a, in the us-east1 region.
gcloud compute instances create client-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-east1/subnetworks/subnet-us \
    --zone=us-east1-c \
    --tags=allow-ssh \
    --project=SERVICE_PROJECT_A_ID
Replace the following:
HOST_PROJECT_ID: the Google Cloud project ID assigned to the host project
SERVICE_PROJECT_A_ID: the Google Cloud project ID assigned to service project A
Establish an SSH connection to the client VM.
gcloud compute ssh client-a \
    --zone=us-east1-c \
    --project=SERVICE_PROJECT_A_ID
Replace SERVICE_PROJECT_A_ID with the Google Cloud project ID assigned to service project A.
In this example, the cross-region internal Application Load Balancer has frontend virtual IP addresses (VIPs) in both the us-east1 and asia-east1 regions in the VPC network. Make an HTTP request to the VIP in either region by using curl.
curl http://FORWARDING_RULE_IP_ADDRESS/love-to-fetch/two-dogs.jpg --output two-dogs.jpg
curl http://FORWARDING_RULE_IP_ADDRESS/love-to-purr/three-cats.jpg --output three-cats.jpg
Replace FORWARDING_RULE_IP_ADDRESS with the IP address of the load balancer's forwarding rule.
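To check both objects on both VIPs in one pass, a short loop that reports only the HTTP status code can help. This is a sketch: US_EAST1_VIP and ASIA_EAST1_VIP are placeholders for the forwarding-rule addresses you copied earlier, and the script must run from a client VM inside the VPC because the VIPs are internal.

```shell
#!/bin/bash
# Report the HTTP status for each test object on each VIP.
# VIP values are placeholders; run from a client VM inside the VPC.
check() {
  local url="$1" status
  status="$(curl --silent --output /dev/null --max-time 5 \
                 --write-out '%{http_code}' "$url")" || status=000
  echo "${url} -> ${status}"
}
if command -v curl >/dev/null; then
  for vip in US_EAST1_VIP ASIA_EAST1_VIP; do
    check "http://${vip}/love-to-fetch/two-dogs.jpg"
    check "http://${vip}/love-to-purr/three-cats.jpg"
  done
fi
```

A status of 200 on each line indicates that the object was served; 000 indicates that the VIP was unreachable from the client.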
To test high availability, see the Test high availability section in this document.
What's next
- Shared VPC overview
- Internal Application Load Balancer overview
- Proxy-only subnets for Envoy-based load balancers
- Manage certificates
- Clean up a load balancing setup