Use the netapp-volume module to create a
Google Cloud NetApp Volumes volume.
NetApp Volumes is a managed Google Cloud service that provides Network File System (NFS) and Server Message Block (SMB) shared file systems to virtual machine (VM) instances. It offers advanced data management capabilities and highly scalable capacity and performance. For more information, see NetApp Volumes overview.
To support NetApp Volumes, Cluster Toolkit uses two modules:
- netapp-storage-pool: Provisions a storage pool. Storage pools are pre-provisioned storage capacity containers that host volumes. A pool also defines fundamental properties for all of its volumes, such as the region, attached network, service level, Customer-Managed Encryption Key (CMEK) encryption, and Active Directory or Lightweight Directory Access Protocol (LDAP) settings.
- netapp-volume: Provisions a volume inside an existing storage pool. A volume is a file-system container that you share by using NFS or SMB.
For the complete list of inputs and outputs for this module, see the
netapp-volume
module
page in the Cluster Toolkit GitHub repository.
Before you begin
Before you begin, verify that you meet the following requirements:
- You have installed and configured Cluster Toolkit. For installation instructions, see Set up Cluster Toolkit.
- You have an existing cluster blueprint. You can use and modify an existing blueprint or create one from scratch.
- You have created a NetApp Volumes storage pool by using the netapp-storage-pool module. You must create volumes inside a storage pool; volumes consume capacity from that pool.
- For a detailed example of this module, see examples/netapp-volumes.yaml.
Required roles
To get the permissions that you need to create and mount NetApp Volumes volumes, ask your administrator to grant you the following IAM roles on your project:
- NetApp Volumes Admin (roles/netapp.admin)
- Compute Network Admin (roles/compute.networkAdmin)
- Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)
- Service Account User (roles/iam.serviceAccountUser)
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Create a volume
The following examples show how to configure the netapp-volume module. Both
examples require an existing storage pool.
Minimal configuration
The following example provisions a 1,024 GiB volume by using a minimal set
of parameters. By default, the module exports to the 10.0.0.0/8,
172.16.0.0/12, and 192.168.0.0/16 IP address ranges with the
no_root_squash option.
```yaml
  - id: home_volume
    source: modules/file-system/netapp-volume
    use: [netapp_pool]
    settings:
      volume_name: "eda-home"
      capacity_gib: 1024
      local_mount: "/eda-home"
      protocols: ["NFSV3"]
      region: REGION
```
Replace REGION with the Google Cloud region where
your storage pool is located.
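The netapp_pool identifier in the use list refers to a netapp-storage-pool module defined elsewhere in the same blueprint. As a minimal sketch of that dependency (the pool name, capacity, service level, and the network module id are illustrative assumptions, not required values), the pool definition might look like the following:

```yaml
  # Hypothetical pool definition that the volume above depends on.
  - id: netapp_pool
    source: modules/file-system/netapp-storage-pool
    use: [network]  # assumes a VPC module with id `network` exists in the blueprint
    settings:
      pool_name: "eda-pool"     # illustrative name
      capacity_gib: 2048        # must be large enough to host all of its volumes
      service_level: "PREMIUM"  # illustrative; see the module page for supported levels
      region: REGION
```

Because the volume declares `use: [netapp_pool]`, Cluster Toolkit creates the volume inside this pool and draws its capacity from it.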
Advanced configuration
The following example defines all available parameters, including custom mount options, export policies, and auto-tiering.
```yaml
  - id: shared_volume
    source: modules/file-system/netapp-volume
    use: [netapp_pool]
    settings:
      volume_name: "eda-shared"
      capacity_gib: 25000
      large_capacity: true
      local_mount: "/shared"
      mount_options: "rw"
      protocols: ["NFSV3", "NFSV4"]
      region: REGION
      unix_permissions: "0777"
      export_policy:
        - allowed_clients: "10.10.20.8,10.10.20.9"
          has_root_access: true
          access_type: "READ_WRITE"
          nfsv3: false
          nfsv4: true
        - allowed_clients: "10.0.0.0/8"
          has_root_access: false
          access_type: "READ_WRITE"
          nfsv3: true
          nfsv4: false
      tiering_policy:
        tier_action: "ENABLED"
        cooling_threshold_days: 31
      description: "Shared volume for EDA job"
      labels:
        owner: bob
```
Protocol support
Because Cluster Toolkit provisions Linux-based compute clusters, the
netapp-volume module supports only NFSv3 and NFSv4.1. Server Message Block
(SMB) is not supported.
Large volumes
You can create volumes larger than 15 TiB as Large Volumes. Large volumes can grow up to 3 PiB and scale read performance up to 29 GiBps.
Large volumes provide six IP addresses, which the module exports through the
server_ips output. When you connect a large volume to a client by using the
use keyword, Cluster Toolkit uses only the first IP address to mount the
volume on clients.
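Based on the parameters shown in the advanced example, a large volume only needs a capacity above 15 TiB (15,360 GiB) and the large_capacity flag. The volume and mount names below are illustrative:

```yaml
  - id: large_volume
    source: modules/file-system/netapp-volume
    use: [netapp_pool]
    settings:
      volume_name: "eda-large"   # illustrative name
      capacity_gib: 25000        # larger than 15 TiB (15,360 GiB)
      large_capacity: true       # required for volumes larger than 15 TiB
      local_mount: "/large"
      protocols: ["NFSV3"]
      region: REGION
```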
Auto-tiering support
If your storage pool has auto-tiering enabled, then you can enable auto-tiering
on the individual volume by configuring the tiering_policy.
```yaml
      tiering_policy:
        tier_action: "ENABLED"
        cooling_threshold_days: 31
```
For more information, see Manage auto-tiering.
Use existing volumes
NetApp Volumes volumes are standard NFS exports. If you want to
use an existing volume that you did not create with Cluster Toolkit, then use the
pre-existing-network-storage module.
```yaml
  - id: homefs
    source: modules/file-system/pre-existing-network-storage
    settings:
      server_ip: SERVER_IP
      remote_mount: nfsshare
      local_mount: /home
      fs_type: nfs
```
Replace SERVER_IP with the IP address of your NFS server.
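Like volumes created by the netapp-volume module, a pre-existing volume is mounted on cluster nodes by referencing its module id with the use keyword. The following sketch assumes a Slurm partition consumer; the partition module source path and its other required settings vary by Cluster Toolkit version, so treat this only as an illustration of the use wiring:

```yaml
  # Hypothetical consumer: any module that accepts network storage,
  # such as a Slurm partition, can mount homefs through `use`.
  - id: compute_partition
    source: community/modules/compute/schedmd-slurm-gcp-v6-partition
    use: [homefs]  # mounts the pre-existing volume at /home on partition nodes
    settings:
      partition_name: compute
```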
FlexCache support
FlexCache technology accelerates data access, reduces WAN latency, and lowers WAN bandwidth costs for read-intensive workloads. When you create a FlexCache volume, you create a remote cache of an existing origin volume that contains only the actively accessed data. For more information, see About FlexCache.
Deploying FlexCache volumes requires manual steps on the ONTAP origin side,
which Cluster Toolkit does not automate. Therefore, the netapp-volume module
does not support deploying FlexCache volumes directly. To use FlexCache, deploy
the volumes manually and then integrate them by using the
pre-existing-network-storage module.
What's next
- For the complete list of inputs and outputs for this module, see the
netapp-volume module page in the Cluster Toolkit GitHub repository.