Use the resource-policy module to create a Compute Engine resource
policy.
This module lets you define placement policies, such as specifying that
instances should be physically located close together (compact) or spread apart
(spread) to meet your workload's latency or availability requirements.
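For reference, a compact group placement policy like the one this module manages can also be created directly with the gcloud CLI. The following command is an illustrative sketch; the policy name and region are examples, not values from this document:

```shell
# Create a compact (collocated) group placement policy.
# "gp-np-1" and "us-central1" are example values.
gcloud compute resource-policies create group-placement gp-np-1 \
    --region=us-central1 \
    --collocation=collocated
```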
You can apply this policy to other modules, such as the gke-node-pool
module, to control the placement
of the provisioned nodes.
For the complete list of inputs and outputs, see the resource-policy
module
in the Cluster Toolkit GitHub repository.
Before you begin
Before you begin, verify that you meet the following requirements:
- You have installed and configured Cluster Toolkit. For installation instructions, see Set up Cluster Toolkit.
- You have an existing cluster blueprint. You can use and modify an existing
blueprint or create one from scratch. To view a working example of a blueprint
configured for the resource-policy module, go to the Cluster blueprint catalog
page, click the Select software or resource menu, and then select Compute
Engine resource policy. For more information about creating and customizing
blueprints, see Cluster blueprint.
- The resource-policy module does not create a long-running workload or a full
cluster. It creates Compute Engine resource policies that define placement
constraints for VM instances or GKE nodes.
Required roles
To get the permissions that
you need to create and manage Compute Engine resource policies,
ask your administrator to grant you the
Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) IAM role on your project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Create a placement policy
To create a placement policy, add the resource-policy module to your
blueprint. You can then reference this policy in other modules by using the
use keyword.
The following example creates a compact group placement policy with a maximum distance of 2. It then applies this policy to a GKE node pool.
  - id: group_placement_1
    source: modules/compute/resource-policy
    settings:
      name: gp-np-1
      group_placement_max_distance: 2

  - id: node_pool_1
    source: modules/compute/gke-node-pool
    use: [group_placement_1]
    settings:
      machine_type: e2-standard-8
    outputs: [instructions]
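After you deploy the blueprint, you can confirm that the policy exists with the gcloud CLI. This is a sketch; the policy name comes from the example above, and the region is an assumption, so substitute the region from your own blueprint:

```shell
# Describe the placement policy created by the example blueprint.
# "us-central1" is an assumed region; use your deployment's region.
gcloud compute resource-policies describe gp-np-1 --region=us-central1
```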
What's next
- To learn how to use this policy with a GKE node pool, see Create a Google Kubernetes Engine node pool.
- For a complete list of all available input fields and output values, see the
resource-policy module on GitHub.