Configure a VPC network

Google Cloud Managed Lustre runs within a Virtual Private Cloud (VPC) network, which provides networking functionality to Compute Engine virtual machine (VM) instances, Google Kubernetes Engine (GKE) clusters, and serverless workloads.

The same VPC network must be specified when creating the Managed Lustre instance and client Compute Engine VMs or Google Kubernetes Engine clusters.

Required permissions

You must have the following IAM permissions:

  • serviceusage.services.enable
  • compute.networks.create
  • compute.addresses.create
  • compute.addresses.get
  • compute.firewalls.create
  • servicenetworking.services.addPeering

These permissions can be granted through predefined roles that include them, or by creating a custom role containing the specific permissions.

To grant a role to a user:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:EMAIL_ADDRESS" \
  --role=ROLE
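As an illustrative sketch, multiple predefined roles can be granted to one user in a single loop. The two roles shown (roles/serviceusage.serviceUsageAdmin and roles/compute.networkAdmin) are examples that cover most of the permissions listed above, not a definitive set; confirm the exact roles required for your project before granting.

```shell
# Illustrative only: grant a set of predefined roles to one user.
# PROJECT_ID and EMAIL_ADDRESS are placeholders; the role list is an
# example and may not cover every permission (for instance,
# servicenetworking.services.addPeering may need an additional role).
for role in roles/serviceusage.serviceUsageAdmin roles/compute.networkAdmin; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:EMAIL_ADDRESS" \
    --role="$role"
done
```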

Create and configure the VPC

  1. Enable service networking.

    gcloud services enable servicenetworking.googleapis.com
    
  2. Create a VPC Network in custom mode.

    gcloud compute networks create NETWORK_NAME \
      --subnet-mode=custom \
      --mtu=8896
    
  3. Create a primary subnet for your GKE or Compute Engine resources.

    gcloud compute networks subnets create SUBNET_NAME \
      --network=NETWORK_NAME \
      --range=10.128.0.0/20 \
      --region=REGION
    
  4. Allocate an IP range for private services access.

    This internal IP range is used for the private services access connection, which peers your VPC network with the Google-managed network where Managed Lustre resources are provisioned. This allocated range is used to provide IPs for Managed Lustre instances, and must not overlap with any subnets in your VPC network.

    Each Managed Lustre instance requires a contiguous CIDR block of at least /23 (512 addresses).

    We recommend creating a larger IP range of /20 to allow for the creation of multiple Managed Lustre instances or the use of other Google Cloud services.

    gcloud compute addresses create IP_RANGE_NAME \
      --global \
      --purpose=VPC_PEERING \
      --prefix-length=20 \
      --description="Managed Lustre VPC Peering" \
      --network=NETWORK_NAME
    
  5. Get the CIDR block associated with the range you created in the previous step.

    CIDR_BLOCK=$(
      gcloud compute addresses describe IP_RANGE_NAME \
        --global \
        --format="value[separator=/](address, prefixLength)"
    )
    
  6. Create a firewall rule to allow inbound TCP traffic on the Lustre ports (988 and 6988) from the IP range you created.

    gcloud compute firewall-rules create FIREWALL_NAME \
      --allow=tcp:988,tcp:6988 \
      --network=NETWORK_NAME \
      --source-ranges=$CIDR_BLOCK
    
  7. Connect the peering.

    gcloud services vpc-peerings connect \
      --network=NETWORK_NAME \
      --ranges=IP_RANGE_NAME \
      --service=servicenetworking.googleapis.com
    
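The sizing guidance in step 4 can be sanity-checked with quick shell arithmetic: a /20 allocation holds eight /23 blocks, enough for eight Managed Lustre instances at the minimum block size.

```shell
# How many /23 blocks (one per Managed Lustre instance) fit in the
# recommended /20 private services access allocation? 2^(23-20) = 8.
alloc_prefix=20
instance_prefix=23
blocks=$(( 1 << (instance_prefix - alloc_prefix) ))
echo "A /${alloc_prefix} allocation holds ${blocks} /${instance_prefix} blocks"

# Each /23 block provides 2^(32-23) = 512 addresses.
ips_per_instance=$(( 1 << (32 - instance_prefix) ))
echo "Each /${instance_prefix} block provides ${ips_per_instance} addresses"
```

A smaller allocation, such as a /22, would leave room for only two instances, which is why a /20 is recommended when multiple instances or other private-services consumers are expected.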

Create additional subnets for multi-NIC

If you plan to use multiple network interface cards (multi-NIC) to aggregate bandwidth, you must create a separate subnet within your VPC network for each NIC.

To benefit from multi-NIC, you must use Compute Engine machine types with multiple physical NICs that are attached to regular VPCs. NICs that attach to VPCs with RDMA network profiles cannot be used to increase general networking bandwidth. See Networking and GPU machines for additional details.

To create a subnet for an additional physical NIC:

gcloud compute networks subnets create SUBNET_NAME_2 \
  --network=NETWORK_NAME \
  --range=10.130.0.0/20 \
  --region=REGION

Repeat this step for each additional NIC, and make sure that the IP ranges of the subnets don't overlap with each other.
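The overlap requirement can be checked before creating the subnets. The helper below is a sketch (the function names are ours, IPv4 only): it converts each CIDR to an integer range and tests whether the ranges intersect.

```shell
# Sketch: check whether two IPv4 subnet CIDRs overlap before creating them.

# Convert a dotted-quad address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Return success (0) if the two CIDR ranges overlap.
cidrs_overlap() {
  local net1="${1%/*}" len1="${1#*/}" net2="${2%/*}" len2="${2#*/}"
  local start1 end1 start2 end2
  start1=$(ip_to_int "$net1"); end1=$(( start1 + (1 << (32 - len1)) - 1 ))
  start2=$(ip_to_int "$net2"); end2=$(( start2 + (1 << (32 - len2)) - 1 ))
  [ "$start1" -le "$end2" ] && [ "$start2" -le "$end1" ]
}

# The two example subnet ranges from this page don't overlap:
cidrs_overlap 10.128.0.0/20 10.130.0.0/20 && echo overlap || echo ok
```

This assumes each CIDR is given with its aligned network address, which is how subnet ranges are normally specified.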

VPC Service Controls

Managed Lustre supports VPC Service Controls (VPC-SC). See Secure instances with a service perimeter for details.

What's next