Placement policies overview

This document explains the behavior, restrictions, and billing of placement policies.

By default, you manage the location of your Compute Engine instances only by specifying their zones. Placement policies let you further specify the relative placement of your compute instances in a zone. Based on the policy that you apply to your compute instances, you can reduce network latency across compute instances (compact policy) or improve resiliency against location-specific disruptions (spread policy).

To learn how to create and apply placement policies, see the documentation for using compact placement policies and using spread placement policies.

To learn about other ways of controlling instance placement, see the documentation for sole-tenancy and regional managed instance groups (MIGs).

About placement policies

Each compute instance runs on a physical server, called a host, that sits on a server rack. Each server rack is part of a cluster located in a data center for a zone. When you have multiple compute instances in the same zone, Compute Engine places those compute instances on different hosts by default. This placement minimizes the impact of potential power failures. However, when you apply a placement policy to compute instances in the same zone, you can further control the relative locations of those compute instances in the zone based on the needs of your workload.

You can create the following types of placement policies:

  • Compact placement policy. This policy places compute instances close to each other in a zone, which reduces network latency among the compute instances. A compact placement policy is helpful when your compute instances need to communicate often with each other—for example, when running high performance computing (HPC), machine learning (ML), or database server workloads.

    To learn more, see About compact placement policies in this document.

  • Spread placement policy. This policy places compute instances on separate, distinct hardware, which you can use to increase your workload's reliability. Specifically, spreading compute instances helps to reduce the number of compute instances that are simultaneously impacted by location-specific disruptions, such as hardware errors. Additionally, if you use a spread placement policy to overprovision capacity in multiple locations, you can help ensure that you still have sufficient capacity even when one location is disrupted. For this reason, spread placement policies can also be helpful for large-scale, distributed, and replicated workloads, such as Hadoop Distributed File System (HDFS), Cassandra, or Kafka.

    To learn more, see About spread placement policies in this document.

About compact placement policies

When you apply a compact placement policy to compute instances, Compute Engine tries to place the compute instances as close to each other as possible. This placement is subject to the machine type and zone availability of the compute instances, and instance compactness is achieved only on a best-effort basis. If your application is latency-sensitive and requires compute instances to be as close together as possible (maximum compactness) in a zone, then specify a maximum distance value (Preview). Lower maximum distance values ensure closer instance placement, but can result in fewer available machines for compute instance placement.
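As a sketch of the gcloud syntax, the following command creates a compact placement policy that requests a maximum distance of 2; because maximum distance is in Preview, the beta command group is used. The policy name and region are placeholders.

```shell
# Create a compact placement policy that requests placement within
# adjacent racks (maximum distance of 2). The --max-distance flag is in
# Preview, so the beta command group is used here.
gcloud beta compute resource-policies create group-placement my-compact-policy \
    --collocation=collocated \
    --max-distance=2 \
    --region=us-central1
```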

For each maximum distance value, the following list outlines the supported machine series, maximum number of compute instances, and supported host maintenance policy:

Maximum distance: Unspecified (Not recommended)

  • Description: Compute Engine makes best-effort attempts to place the compute instances as close to each other as possible, but with no maximum distance between compute instances in the zone.
  • Supported machine series:
    • Accelerator-optimized machines: A4, A3 Ultra, A3 Mega¹, A3 High¹, A3 Edge¹, A2, and G2
    • Compute-optimized machines: H4D, H3, C2D, and C2
    • General-purpose machines: C4D, C4, C3D, C3, N2D, and N2
    • Memory-optimized machines: M4, M3, M2, and M1
    • Storage-optimized machines: Z3-metal
  • Maximum number of compute instances: 1,500
  • Supported host maintenance policy:
    • For Z3-metal: Terminate
    • For all other supported machine series: Migrate or Terminate

Maximum distance: 3

  • Description: The compute instances are placed in adjacent clusters for low latency.
  • Supported machine series:
    • Accelerator-optimized machines: A4, A3 Mega¹, A3 High¹, A3 Edge¹, A2, and G2
    • Compute-optimized machines: H4D, H3, C2D, and C2
    • General-purpose machines: C4D, C4, C3D, and C3
    • Memory-optimized machines: M4
    • Storage-optimized machines: Z3-metal
  • Maximum number of compute instances: 1,500
  • Supported host maintenance policy:
    • For Z3-metal: Terminate
    • For all other supported machine series: Migrate or Terminate

Maximum distance: 2

  • Description: The compute instances are placed in adjacent racks and experience lower network latency than compute instances placed in adjacent clusters.
  • Supported machine series:
    • Accelerator-optimized machines: A4, A3 Ultra, A3 Mega¹, A3 High¹, A3 Edge¹, A2, and G2
    • Compute-optimized machines: H4D, H3, C2D, and C2
    • General-purpose machines: C4D, C4, C3D, and C3
    • Memory-optimized machines: M4
    • Storage-optimized machines: Z3-metal
  • Maximum number of compute instances:
    • For A3 Ultra, A3 Mega, A3 High, and A3 Edge: 256
    • For all other supported machine series: 150
  • Supported host maintenance policy: Terminate

Maximum distance: 1

  • Description: The compute instances are placed in the same rack and minimize network latency as much as possible.
  • Supported machine series:
    • Accelerator-optimized machines: A3 Edge¹, A2, and G2
    • Compute-optimized machines: H4D, H3, C2D, and C2
    • General-purpose machines: C4D, C4, C3D, and C3
    • Memory-optimized machines: M4
    • Storage-optimized machines: Z3-metal
  • Maximum number of compute instances: 22
  • Supported host maintenance policy: Terminate

¹ If you want to apply a compact placement policy to an A3 Mega, A3 High, or A3 Edge instance that was created before October 1, 2025, then contact your account team or the sales team.

After you create a compact placement policy and apply it to compute instances, you can verify the physical location of the compute instances in relation to other compute instances that specify the same compact placement policy. For more information, see Verify the physical location of an instance.
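As an illustrative sketch (the instance name, policy name, zone, and machine type are placeholders), you can apply a compact placement policy when creating an instance and then inspect the instance's physical location:

```shell
# Apply an existing compact placement policy when creating an instance.
# A maximum distance of 2 or 1 requires the Terminate host maintenance policy.
gcloud compute instances create my-instance \
    --zone=us-central1-a \
    --machine-type=c3-standard-8 \
    --resource-policies=my-compact-policy \
    --maintenance-policy=TERMINATE

# Inspect the instance's physical location. Instances that share more
# leading segments of this value are physically closer to each other.
gcloud compute instances describe my-instance \
    --zone=us-central1-a \
    --format="value(resourceStatus.physicalHost)"
```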

About spread placement policies

When creating a spread placement policy, you can specify the number of availability domains—up to eight—to spread its compute instances across. Availability domains provide isolated, distinct hardware to minimize the impact of localized disruptions. However, they're still impacted by shared infrastructure failures, such as data center power outages.
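As a sketch of the gcloud syntax (the policy name and region are placeholders), the following command creates a spread placement policy with two availability domains:

```shell
# Create a spread placement policy that spreads instances across
# two availability domains in the region.
gcloud compute resource-policies create group-placement my-spread-policy \
    --availability-domain-count=2 \
    --region=us-central1
```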

To reduce the proportion of your compute instances that are impacted when an availability domain is disrupted, spread your compute instances across at least two availability domains; each additional availability domain further reduces the proportion of impacted compute instances. Alternatively, you might spread your compute instances across only a small number of availability domains, either to limit network latency between those compute instances or to comply with zonal restrictions.
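To make the trade-off concrete, this minimal Python sketch (illustrative only, not part of any Google API) computes the worst-case fraction of instances affected when a single availability domain is disrupted, assuming instances are spread evenly:

```python
# Worst-case share of instances lost when one availability domain fails,
# assuming instances are distributed evenly across domains.
def impacted_fraction(num_domains: int) -> float:
    """Fraction of instances that reside in a single availability domain."""
    if num_domains < 1:
        raise ValueError("need at least one availability domain")
    return 1.0 / num_domains

for n in (1, 2, 4, 8):
    print(f"{n} domain(s): {impacted_fraction(n):.0%} of instances impacted")
```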

When you apply a spread placement policy to an instance, Compute Engine places the instance in a specific availability domain based on one of the following:

  • Automatic placement. By default, Compute Engine automatically places the instance in a domain based on the number of compute instances the placement policy is already applied to:

    • Eight or fewer compute instances: If a spread placement policy is already applied to eight or fewer compute instances, then Compute Engine places your instance in the domain with the fewest compute instances.

    • More than eight compute instances: If a spread placement policy is already applied to more than eight compute instances, then Compute Engine places your instance in a random domain.

  • Specific placement. When creating an instance, updating the properties of an instance, or creating an instance template, you can optionally specify the availability domain in which to place your compute instances. Distributing compute instances across domains helps to increase the resiliency of your workload, while placing compute instances in the same domain might help reduce network latency among them.
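The automatic placement rule above can be modeled with a short Python sketch (illustrative only, not Compute Engine code): with eight or fewer instances already placed, pick the least-loaded availability domain; otherwise pick a domain at random.

```python
import random
from collections import Counter

# Illustrative model of the automatic-placement rule: with eight or fewer
# instances already using the policy, choose the availability domain with
# the fewest instances; with more than eight, choose a domain at random.
def choose_domain(counts: Counter, num_domains: int, rng=random) -> int:
    total = sum(counts.values())
    if total <= 8:
        # Least-loaded domain; ties broken by the lowest domain index.
        return min(range(1, num_domains + 1), key=lambda d: (counts[d], d))
    return rng.randint(1, num_domains)

counts = Counter()
for _ in range(8):
    domain = choose_domain(counts, num_domains=4)
    counts[domain] += 1
print(dict(counts))  # eight instances spread evenly: {1: 2, 2: 2, 3: 2, 4: 2}
```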

When you apply a spread placement policy to an existing instance, the instance might need to be relocated to a different availability domain. During this process, Compute Engine stops or live migrates the instance based on its host maintenance policy.

Restrictions

The following sections outline the restrictions for placement policies.

Restrictions for all placement policies

For all placement policies, the following restrictions apply:

  • Placement policies are regional resources, and they only work in the region where they are located. For example, if you create a placement policy in region us-central1, then you can only apply it to Compute Engine resources located in us-central1 or in a zone in us-central1.

  • You can only apply one placement policy per Compute Engine resource.

  • You can replace or remove placement policies only from compute instances. Replacing or removing placement policies from other Compute Engine resources isn't supported.

  • You can only delete a placement policy if it's not applied to any Compute Engine resource.

  • You can't apply placement policies to the following resources:

    • Reservations that Compute Engine creates to fulfill an approved future reservation.

    • Sole-tenant instances.

    • Flex-start VMs.

Restrictions for compact placement policies

In addition to the restrictions for all placement policies, compact placement policies have the following restrictions:

  • If a compact placement policy specifies a maximum distance value, then this value affects the maximum number of compute instances that you can apply the placement policy to, as well as the machine series and host maintenance policy that the compute instances can use.

  • If you want to apply a compact placement policy to on-demand reservations, then make sure of the following:

    • You can only apply compact placement policies to on-demand, single-project, standalone reservations. Shared reservations and reservations attached to commitments aren't supported.

    • You can't apply compact placement policies that specify a maximum distance value of 1.

    • You can only apply a compact placement policy to one reservation at a time.

Restrictions for spread placement policies

In addition to the restrictions for all placement policies, spread placement policies have the following restrictions:

  • You can apply a spread placement policy to a maximum of 256 compute instances.

  • You can't apply spread placement policies to reservations.

Billing

There are no additional costs associated with creating, deleting, or applying placement policies to a compute instance.

What's next