The pre-existing-gke-cluster module lets you discover a
Google Kubernetes Engine (GKE) cluster that already exists in
Google Cloud. By using this module, you can extract cluster
attributes to uniquely identify the cluster for use by other modules. The module
outputs align with the
gke-cluster module, which lets
you use the pre-existing-gke-cluster module as a direct substitute.
For the complete list of inputs and outputs for this module, see the
pre-existing-gke-cluster
module
page in the Cluster Toolkit GitHub repository.
Before you begin
Before you begin, verify that you meet the following requirements:
- You have installed and configured Cluster Toolkit. For installation instructions, see Set up Cluster Toolkit.
- You have an existing cluster blueprint. You can use and modify an existing
blueprint or create one from scratch. To view a working example of a blueprint
configured for the pre-existing-gke-cluster module, go to the Cluster blueprint
catalog page, click the Select scheduler menu, and then select GKE. For more
information about creating and customizing blueprints, see Cluster blueprint.
Required roles
To get the permissions that
you need to discover the existing GKE cluster,
ask your administrator to grant you the
Kubernetes Engine Cluster Viewer (roles/container.clusterViewer) IAM role on your project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
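If you have project-level access yourself, a project administrator can grant the role from the command line. A minimal sketch; `PROJECT_ID` and `USER_EMAIL` are placeholders for your project ID and the user's email address:

```shell
# Grant the Kubernetes Engine Cluster Viewer role on the project.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/container.clusterViewer"
```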
Use an existing GKE cluster
The following example demonstrates how to discover an existing GKE cluster
named my-gke-cluster in the us-central1 region.
  - id: existing-gke-cluster
    source: modules/scheduler/pre-existing-gke-cluster
    settings:
      project_id: $(vars.project_id)
      cluster_name: my-gke-cluster
      region: us-central1

  - id: compute_pool
    source: modules/compute/gke-node-pool
    use: [existing-gke-cluster]
The use keyword passes the cluster_id output of the
pre-existing-gke-cluster module to the gke-node-pool module as an
input variable. This variable identifies the existing GKE cluster
that hosts the new GKE node pool.
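The two modules above must sit inside a complete blueprint before you can deploy them. A minimal sketch under the standard blueprint schema; the blueprint name, deployment name, and group name shown here are illustrative, not required values:

```yaml
blueprint_name: use-existing-gke        # illustrative name
vars:
  project_id: my-project-id             # replace with your project ID
  deployment_name: existing-gke-deploy  # illustrative name
  region: us-central1

deployment_groups:
  - group: primary                      # illustrative group name
    modules:
      - id: existing-gke-cluster
        source: modules/scheduler/pre-existing-gke-cluster
        settings:
          project_id: $(vars.project_id)
          cluster_name: my-gke-cluster
          region: us-central1

      - id: compute_pool
        source: modules/compute/gke-node-pool
        use: [existing-gke-cluster]
```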
Configure multi-networking
To create network objects in your GKE cluster, you can pass the outputs of a
multivpc module to the
pre-existing-gke-cluster module instead of applying a manifest manually.
  - id: network
    source: modules/network/vpc

  - id: multinetwork
    source: modules/network/multivpc
    settings:
      network_name_prefix: multivpc-net
      network_count: 8
      global_ip_address_range: 172.16.0.0/12
      subnetwork_cidr_suffix: 16

  - id: existing-gke-cluster
    source: modules/scheduler/pre-existing-gke-cluster
    use: [multinetwork]
    settings:
      # Multi-networking must be enabled in advance during cluster creation.
      cluster_name: $(vars.deployment_name)
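A node pool can also consume the additional network interfaces that multivpc creates. A sketch, assuming the gke-node-pool module accepts the multivpc outputs through the use keyword, as in the Toolkit's multi-network GPU blueprints:

```yaml
  - id: compute_pool
    source: modules/compute/gke-node-pool
    # Receives cluster_id from the cluster module and the additional
    # network definitions from the multivpc module.
    use: [existing-gke-cluster, multinetwork]
```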
What's next
- For the complete list of inputs and outputs for this module, see the
pre-existing-gke-cluster module page in the Cluster Toolkit GitHub repository.