Create a Google Kubernetes Engine storage class

Use the gke-storage module to create a Kubernetes StorageClass resource that a PersistentVolumeClaim resource can use to dynamically provision Google Cloud storage resources, such as Hyperdisk Balanced.
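For orientation, the following is a minimal sketch of the kind of Kubernetes resources this workflow produces: a StorageClass that points at a CSI driver, and a PersistentVolumeClaim that requests storage from it. The names and parameter values shown here are illustrative assumptions, not output of the module; the blueprint example later in this page creates the actual resources for you.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperdisk-balanced-example        # hypothetical name
provisioner: pd.csi.storage.gke.io        # Compute Engine Persistent Disk CSI driver
parameters:
  type: hyperdisk-balanced
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hyperdisk-balanced-pvc-example    # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: hyperdisk-balanced-example
  resources:
    requests:
      storage: 100Gi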

For the complete list of inputs and outputs for this module, see the gke-storage module page in the Cluster Toolkit GitHub repository.

Before you begin

Verify that you meet the following requirements:

  • You have installed and configured Cluster Toolkit. For installation instructions, see Set up Cluster Toolkit.
  • You have an existing cluster blueprint. You can use and modify an existing blueprint or create one from scratch. For a working example of a blueprint configured for the gke-storage module, see the examples/gke-managed-hyperdisk.yaml file. For more information about creating and customizing blueprints, see Cluster blueprint. A minimal blueprint skeleton is sketched after this list.
  • To view a complete list of blueprints that support the gke-storage module, go to the Cluster blueprint catalog page, click the Select scheduler menu and then select GKE.
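If you're starting from scratch, the top level of a Cluster Toolkit blueprint looks roughly like the following sketch. The names and values are placeholders, not a complete working blueprint; see the linked example files for complete, tested blueprints.

blueprint_name: gke-storage-example        # placeholder name
vars:
  project_id: my-project-id                # replace with your project ID
  deployment_name: gke-storage-demo        # placeholder name
  region: us-central1
  zone: us-central1-a

deployment_groups:
- group: primary
  modules:
  - id: network
    source: modules/network/vpc
  # Add the gke-cluster, gke-storage, and related modules here,
  # as shown in the example later in this page.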

Required roles

To get the permissions that you need to create GKE storage classes and provision storage, ask your administrator to grant you the following IAM roles on your project:

For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.

Add storage to your blueprint

To dynamically provision storage, add the gke-storage module to your blueprint.

The following example uses the gke-storage module to create a Google Cloud Managed Lustre StorageClass resource and two Kubernetes PersistentVolumeClaim resources. The example then passes this storage to the gke-job-template module so that the generated job mounts the dynamically provisioned volumes through those PVCs.

  - id: gke_cluster
    source: modules/scheduler/gke-cluster
    use: [network]
    settings:
      # Enable the Managed Lustre CSI driver on the cluster.
      enable_managed_lustre_csi: true

  # Reserve a private service access IP range; Managed Lustre instances
  # require private service access to connect to the VPC network.
  - id: private_service_access
    source: modules/network/private-service-access
    use: [network]
    settings:
      prefix_length: 24

  - id: gke_storage
    source: modules/file-system/gke-storage
    use: [gke_cluster, private_service_access]
    settings:
      storage_type: ManagedLustre
      access_mode: ReadWriteMany
      # Provision volumes as soon as the PVCs are created.
      sc_volume_binding_mode: Immediate
      # Delete the underlying storage when a PVC is deleted.
      sc_reclaim_policy: Delete
      sc_topology_zones: [$(vars.zone)]
      # Number of PersistentVolumeClaims to create.
      pvc_count: 2
      # Requested capacity for the provisioned storage, in GB.
      capacity_gb: 12000

  - id: job_template
    source: modules/compute/gke-job-template
    # compute_pool refers to a gke-node-pool module defined elsewhere in the blueprint.
    use: [gke_storage, compute_pool]
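If you don't use the gke-job-template module, any workload in the cluster can mount the PersistentVolumeClaims that the gke-storage module creates. The following Pod spec is a minimal sketch; the claim name is a placeholder, so substitute the PVC names that exist in your deployment.

apiVersion: v1
kind: Pod
metadata:
  name: lustre-consumer-example            # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /data && sleep 3600"]
    volumeMounts:
    - name: lustre-volume
      mountPath: /data
  volumes:
  - name: lustre-volume
    persistentVolumeClaim:
      claimName: lustre-pvc-0              # placeholder; use a PVC name from your deployment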

Configure authorized networks

The gke-storage module calls the Kubernetes API to create Kubernetes entities. Therefore, you must authorize the deployment machine to connect to the Kubernetes API.

To authorize the deployment machine, add the master_authorized_networks setting to your gke-cluster module configuration and include the IP address of the deployment machine in that list. This configuration lets the deployment machine connect to the cluster's control plane.
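The following snippet is a minimal sketch of that setting, assuming the deployment machine's public IP address is 203.0.113.10; replace the CIDR block and display name with values for your own environment.

  - id: gke_cluster
    source: modules/scheduler/gke-cluster
    use: [network]
    settings:
      enable_managed_lustre_csi: true
      master_authorized_networks:
      - cidr_block: 203.0.113.10/32        # IP address of the deployment machine
        display_name: deployment-machine   # placeholder name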

What's next

  • For the complete list of inputs and outputs for this module, see the gke-storage module page in the Cluster Toolkit GitHub repository.
  • For a complete list of supported modules, see the compatibility matrix on GitHub.