Create GPU VMs in bulk

You can create a group of virtual machines (VMs) that have attached graphical processing units (GPUs) by using the bulk creation process. The bulk creation process provides upfront validation, so the request fails fast if it can't be fulfilled. Also, if you use the region flag, the bulk creation API automatically chooses a zone that has enough capacity to fulfill the request.

To learn more about bulk creation, see About bulk creation of VMs. To learn more about creating VMs with attached GPUs, see Overview of creating an instance with attached GPUs.

Before you begin

  • To review limitations and additional prerequisite steps for creating instances with attached GPUs, such as selecting an OS image and checking GPU quota, see Overview of creating an instance with attached GPUs.
  • To review limitations for bulk creation, see About bulk creation of VMs.
  • If you haven't already, set up authentication. Authentication verifies your identity for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:

    Select the tab for how you plan to use the samples on this page:

    gcloud

    1. Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

      gcloud init

      If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    2. Set a default region and zone.
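
      For example, you can set defaults for the gcloud CLI with the following commands. The region and zone values shown here are placeholders; pick ones that support your GPU model:

      gcloud config set compute/region us-central1
      gcloud config set compute/zone us-central1-a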

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

      gcloud init

      If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Required roles

To get the permissions that you need to create VMs, ask your administrator to grant you the Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.
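
If you manage IAM policies yourself, you can make the grant with the gcloud CLI. The following command is a minimal sketch; the project ID and user email are placeholders:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.instanceAdmin.v1"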

This predefined role contains the permissions required to create VMs. The following Required permissions section lists the exact permissions that are required.

Required permissions

The following permissions are required to create VMs:

  • compute.instances.create on the project
  • To use a custom image to create the VM: compute.images.useReadOnly on the image
  • To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
  • To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
  • To specify a subnet for your VM: compute.subnetworks.use on the project or on the chosen subnet
  • To specify a static IP address for the VM: compute.addresses.use on the project
  • To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
  • To assign a legacy network to the VM: compute.networks.use on the project
  • To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
  • To set VM instance metadata for the VM: compute.instances.setMetadata on the project
  • To set tags for the VM: compute.instances.setTags on the VM
  • To set labels for the VM: compute.instances.setLabels on the VM
  • To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
  • To create a new disk for the VM: compute.disks.create on the project
  • To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
  • To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk

You might also be able to get these permissions with custom roles or other predefined roles.

Overview

When creating VMs with attached GPUs by using the bulk creation method, you can choose to create VMs in a region (such as us-central1) or in a specific zone (such as us-central1-a).

If you choose to specify a region, Compute Engine places the VMs in any zone within the region that supports GPUs.
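
Before you submit a bulk request for a region, you can optionally check which zones in that region offer the GPU model that you need by listing accelerator types. The following command is only an example; the GPU model and region shown are illustrative:

gcloud compute accelerator-types list \
    --filter="name=nvidia-l4 AND zone~us-central1"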

Machine types

The accelerator-optimized machine family contains multiple machine types.

Each accelerator-optimized machine type has a specific model of NVIDIA GPUs attached to support the recommended workload type.

AI and ML workloads

Accelerator-optimized A series machine types are designed for high performance computing (HPC), artificial intelligence (AI), and machine learning (ML) workloads. For these machine types, the GPU model is automatically attached to the instance.

  • A4X (NVIDIA GB200 Superchips): nvidia-gb200
  • A4 (NVIDIA B200): nvidia-b200
  • A3 Ultra (NVIDIA H200): nvidia-h200-141gb
  • A3 Mega (NVIDIA H100): nvidia-h100-mega-80gb
  • A3 High (NVIDIA H100): nvidia-h100-80gb
  • A3 Edge (NVIDIA H100): nvidia-h100-80gb
  • A2 Ultra (NVIDIA A100 80GB): nvidia-a100-80gb
  • A2 Standard (NVIDIA A100): nvidia-a100-40gb

Graphics and visualization

Accelerator-optimized G series machine types are designed for workloads such as NVIDIA Omniverse simulation workloads, graphics-intensive applications, video transcoding, and virtual desktops. These machine types support NVIDIA RTX Virtual Workstations (vWS). For these machine types, the GPU model is automatically attached to the instance.

  • G4 (NVIDIA RTX PRO 6000): nvidia-rtx-pro-6000, nvidia-rtx-pro-6000-vws
  • G2 (NVIDIA L4): nvidia-l4, nvidia-l4-vws
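
You can also list the accelerator-optimized machine types that are available in a given zone. The following command is a sketch; the zone and name pattern are illustrative:

gcloud compute machine-types list \
    --zones=us-central1-a \
    --filter="name~^a3"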

Create groups of A4X, A4, and A3 Ultra

To create instances in bulk for the A4X, A4, and A3 Ultra machine series, see the Deployment options overview in the AI Hypercomputer documentation.

Create groups of A3, A2, G4, and G2 VMs

This section explains how you can create instances in bulk for the A3 High, A3 Mega, A3 Edge, A2 Standard, A2 Ultra, G4, and G2 machine series by using the Google Cloud CLI or REST.

gcloud

To create a group of VMs, use the gcloud compute instances bulk create command. For more information about the parameters and how to use this command, see Create VMs in bulk.

Example

This example creates two VMs that have attached GPUs by using the following command:

gcloud compute instances bulk create \
    --name-pattern="my-test-vm-#" \
    --region=REGION \
    --count=2 \
    --machine-type=MACHINE_TYPE \
    --boot-disk-size=200 \
    --image=IMAGE \
    --image-project=IMAGE_PROJECT \
    --on-host-maintenance=TERMINATE

Replace the following:

  • REGION: the region for the VMs. This region must support your selected GPU model.
  • MACHINE_TYPE: the accelerator-optimized machine type that you selected.
  • IMAGE: the OS image or image family that you want to use.
  • IMAGE_PROJECT: the image project that the OS image belongs to.

If successful, the output is similar to the following:

NAME          ZONE
my-test-vm-1  us-central1-b
my-test-vm-2  us-central1-b
Bulk create request finished with status message: [VM instances created: 2, failed: 0.]
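
After the request completes, you can confirm that the instances exist by filtering on the name pattern prefix. The prefix shown here matches the preceding example:

gcloud compute instances list --filter="name~^my-test-vm"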

Optional flags

To further configure your instance to meet your workload or operating system needs, include one or more of the following flags when you run the gcloud compute instances bulk create command.

Provisioning model

Sets the provisioning model for the instance. Specify either SPOT or FLEX_START. FLEX_START isn't supported for G4 instances. If you don't specify a model, then the standard model is used. For more information, see Compute Engine instances provisioning models.

    --provisioning-model=PROVISIONING_MODEL

Virtual workstation

Specifies an NVIDIA RTX Virtual Workstation (vWS) for graphics workloads. This feature is supported only for G4 and G2 instances.

    --accelerator=type=VWS_ACCELERATOR_TYPE,count=VWS_ACCELERATOR_COUNT

Replace the following:

  • VWS_ACCELERATOR_TYPE: choose one of the following:
    • For G4 instances, specify nvidia-rtx-pro-6000-vws
    • For G2 instances, specify nvidia-l4-vws
  • VWS_ACCELERATOR_COUNT: the number of virtual GPUs that you need.

Local SSD

Attaches one or more Local SSD disks to your instance. Local SSD disks can be used as fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks.

    --local-ssd=interface=nvme \
    --local-ssd=interface=nvme \
    --local-ssd=interface=nvme ...

For the maximum number of Local SSD disks that you can attach per VM instance, see Local SSD limits.

Network interface

Attaches multiple network interfaces to your instance. For g4-standard-384 instances, you can attach up to two network interfaces. You can use this flag to create an instance with dual network interfaces (2x 200 Gbps). Each network interface must be in a unique VPC network. Dual network interfaces are supported only on the g4-standard-384 machine type.

    --network-interface=network=VPC_NAME_1,subnet=SUBNET_NAME_1,nic-type=GVNIC \
    --network-interface=network=VPC_NAME_2,subnet=SUBNET_NAME_2,nic-type=GVNIC

Replace the following:

  • VPC_NAME: the name of your VPC network.
  • SUBNET_NAME: the name of the subnet that is part of the specified VPC network.
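
As an illustration of how these flags combine, the following sketch creates two G2 Spot instances that each have an NVIDIA RTX Virtual Workstation attached. The machine type, image values, and provisioning model are example choices, not requirements:

gcloud compute instances bulk create \
    --name-pattern="my-g2-vm-#" \
    --region=REGION \
    --count=2 \
    --machine-type=g2-standard-8 \
    --boot-disk-size=200 \
    --image=IMAGE \
    --image-project=IMAGE_PROJECT \
    --provisioning-model=SPOT \
    --accelerator=type=nvidia-l4-vws,count=1 \
    --on-host-maintenance=TERMINATE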

REST

Use the instances.bulkInsert method with the required parameters to create multiple VMs in a zone. For more information about the parameters and how to use this command, see Create VMs in bulk.

Example

This example creates two VMs that have attached GPUs by using the following specifications:

  • VM names: my-test-vm-1, my-test-vm-2
  • Each VM has two GPUs attached, specified by using the appropriate accelerator-optimized machine type

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instances/bulkInsert
    {
    "namePattern":"my-test-vm-#",
    "count":"2",
    "instanceProperties": {
      "machineType":MACHINE_TYPE,
      "disks":[
        {
          "type":"PERSISTENT",
          "initializeParams":{
            "diskSizeGb":"200",
            "sourceImage":SOURCE_IMAGE_URI
          },
          "boot":true
        }
      ],
      "name": "default",
      "networkInterfaces":
      [
        {
          "network": "projects/PROJECT_ID/global/networks/default"
        }
      ],
      "scheduling":{
        "onHostMaintenance":"TERMINATE",
        ["automaticRestart":true]
      }
    }
    }
    

Replace the following:

  • PROJECT_ID: your project ID
  • REGION: the region for the VMs. This region must support your selected GPU model.
  • MACHINE_TYPE: the accelerator-optimized machine type that you selected.

  • SOURCE_IMAGE_URI: the URI for the specific image or image family that you want to use.

    For example:

    • Specific image: "sourceImage": "projects/rocky-linux-cloud/global/images/rocky-linux-8-optimized-gcp-v20220719"
    • Image family: "sourceImage": "projects/rocky-linux-cloud/global/images/family/rocky-linux-8-optimized-gcp".

    When you specify an image family, Compute Engine creates a VM from the most recent, non-deprecated OS image in that family. For more information about when to use image families, see Image family best practices.
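
To send the request from the command line, one common approach is to save the request body to a file (for example, request.json; the file name is only an illustration) and call the endpoint with curl, using an access token from the gcloud CLI:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @request.json \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instances/bulkInsert"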

Optional flags

To further configure your instance to meet your workload or operating system needs, include one or more of the following fields in the request body that you send to the instances.bulkInsert method.

Provisioning model

To lower your costs, you can specify a different provisioning model by adding the "provisioningModel": "PROVISIONING_MODEL" field to the scheduling object in your request. If you create Spot VMs, then the onHostMaintenance and automaticRestart fields are ignored. For more information, see Compute Engine instances provisioning models.

    "scheduling": {
      "onHostMaintenance": "TERMINATE",
      "provisioningModel": "PROVISIONING_MODEL"
    }

Replace PROVISIONING_MODEL with one of the following:

  • STANDARD: (Default) A standard instance.
  • SPOT: A Spot VM.
  • FLEX_START: A flex-start VM. Flex-start VMs run for up to seven days and can help you acquire high-demand resources like GPUs at a discounted price. This provisioning model isn't supported for G4 instances.

Virtual workstation

Specifies an NVIDIA RTX Virtual Workstation (vWS) for graphics workloads. This feature is supported only for G4 and G2 instances.

    "guestAccelerators": [
      {
        "acceleratorCount": VWS_ACCELERATOR_COUNT,
        "acceleratorType": "projects/PROJECT_ID/zones/ZONE/acceleratorTypes/VWS_ACCELERATOR_TYPE"
      }
    ]

Replace the following:

  • VWS_ACCELERATOR_TYPE: choose one of the following:
    • For G4 instances, specify nvidia-rtx-pro-6000-vws
    • For G2 instances, specify nvidia-l4-vws
  • VWS_ACCELERATOR_COUNT: the number of virtual GPUs that you need.

Local SSD

Attaches one or more Local SSD disks to your instance. Local SSD disks can be used as fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks. Add the following entry to the disks array in your request:

    {
      "type": "SCRATCH",
      "autoDelete": true,
      "initializeParams": {
        "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/local-nvme-ssd"
      }
    }

For the maximum number of Local SSD disks that you can attach per VM instance, see Local SSD limits.

Network interface

Attaches multiple network interfaces to your instance. For g4-standard-384 instances, you can attach up to two network interfaces. This creates an instance with dual network interfaces (2x 200 Gbps). Each network interface must be in a unique VPC network. Dual network interfaces are supported only on the g4-standard-384 machine type.

    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/VPC_NAME_1",
        "subnetwork": "projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME_1",
        "nicType": "GVNIC"
      },
      {
        "network": "projects/PROJECT_ID/global/networks/VPC_NAME_2",
        "subnetwork": "projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME_2",
        "nicType": "GVNIC"
      }
    ]

Replace the following:

  • VPC_NAME: the name of your VPC network.
  • SUBNET_NAME: the name of the subnet that is part of the specified VPC network.
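
As a sketch of how these fields fit together, the following fragment shows instanceProperties for a G2 Spot instance with a vWS GPU. The machine type, provisioning model, and accelerator values are illustrative, and the fragment omits required fields such as disks and networkInterfaces:

    "instanceProperties": {
      "machineType": "g2-standard-8",
      "scheduling": {
        "onHostMaintenance": "TERMINATE",
        "provisioningModel": "SPOT"
      },
      "guestAccelerators": [
        {
          "acceleratorCount": 1,
          "acceleratorType": "nvidia-l4-vws"
        }
      ]
    }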

Create groups of N1 general-purpose VMs

You can create a group of VMs with attached GPUs by using either the Google Cloud CLI or REST.

This section describes how to create multiple VMs using the following GPU types:

NVIDIA GPUs:

  • NVIDIA T4: nvidia-tesla-t4
  • NVIDIA P4: nvidia-tesla-p4
  • NVIDIA P100: nvidia-tesla-p100
  • NVIDIA V100: nvidia-tesla-v100

NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):

  • NVIDIA T4 Virtual Workstation: nvidia-tesla-t4-vws
  • NVIDIA P4 Virtual Workstation: nvidia-tesla-p4-vws
  • NVIDIA P100 Virtual Workstation: nvidia-tesla-p100-vws

    For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your instance.

gcloud

To create a group of VMs, use the gcloud compute instances bulk create command. For more information about the parameters and how to use this command, see Create VMs in bulk.

Example

The following example creates two VMs with attached GPUs using the following specifications:

  • VM names: my-test-vm-1, my-test-vm-2
  • VMs created in any zone in us-central1 that supports GPUs
  • Each VM has two T4 GPUs attached, specified by using the accelerator type and accelerator count flags
  • Each VM has GPU drivers installed
  • Each VM uses the Deep Learning VM image pytorch-latest-gpu-v20211028-debian-10

gcloud compute instances bulk create \
    --name-pattern="my-test-vm-#" \
    --count=2 \
    --region=us-central1 \
    --machine-type=n1-standard-2 \
    --accelerator=type=nvidia-tesla-t4,count=2 \
    --boot-disk-size=200 \
    --metadata="install-nvidia-driver=True" \
    --scopes="https://www.googleapis.com/auth/cloud-platform" \
    --image=pytorch-latest-gpu-v20211028-debian-10 \
    --image-project=deeplearning-platform-release \
    --on-host-maintenance=TERMINATE --restart-on-failure

If successful, the output is similar to the following:

NAME          ZONE
my-test-vm-1  us-central1-b
my-test-vm-2  us-central1-b
Bulk create request finished with status message: [VM instances created: 2, failed: 0.]
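
After the instances are running and the driver installation has finished (this can take a few minutes), you can optionally confirm that the GPUs are visible by running nvidia-smi on one of the VMs. The zone in this example must match the zone shown in the output:

gcloud compute ssh my-test-vm-1 \
    --zone=us-central1-b \
    --command="nvidia-smi"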

REST

Use the instances.bulkInsert method with the required parameters to create multiple VMs in a zone. For more information about the parameters and how to use this command, see Create VMs in bulk.

Example

The following example creates two VMs with attached GPUs using the following specifications:

  • VM names: my-test-vm-1, my-test-vm-2
  • VMs created in any zone in us-central1 that supports GPUs
  • Each VM has two T4 GPUs attached, specified by using the acceleratorType and acceleratorCount fields
  • Each VM has GPU drivers installed
  • Each VM uses the Deep Learning VM image pytorch-latest-gpu-v20211028-debian-10

Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-central1/instances/bulkInsert

{
    "namePattern":"my-test-vm-#",
    "count":"2",
    "instanceProperties": {
      "machineType":"n1-standard-2",
      "disks":[
        {
          "type":"PERSISTENT",
          "initializeParams":{
            "diskSizeGb":"200",
            "sourceImage":"projects/deeplearning-platform-release/global/images/pytorch-latest-gpu-v20211028-debian-10"
          },
          "boot":true
        }
      ],
      "name": "default",
      "networkInterfaces":
      [
        {
          "network": "projects/PROJECT_ID/global/networks/default"
        }
      ],
      "guestAccelerators":
      [
        {
          "acceleratorCount": 2,
          "acceleratorType": "nvidia-tesla-t4"
        }
      ],
      "scheduling":{
        "onHostMaintenance":"TERMINATE",
        "automaticRestart":true
      },
      "metadata":{
        "items":[
          {
            "key":"install-nvidia-driver",
            "value":"True"
          }
        ]
      }
    }
}

What's next?