Use NFS volumes as vSphere Datastores in VMware Engine
This document describes how to use NFS volumes as vSphere Datastores in
VMware Engine. You can create and manage NFS Datastores backed by
Filestore instances, Google Cloud NetApp Volumes volumes, or third-party
NFS shares by using the VMware Engine API or the Google Cloud CLI. The API
endpoint is vmwareengine.googleapis.com. API and
gcloud CLI operations for creating, updating, deleting, mounting, and
unmounting Datastores are asynchronous. When you initiate one of these
operations, VMware Engine returns an operation object that you can use
to track the status of your request.
Poll an operation
To track an operation's status, make a GET request or use the gcloud CLI.
API
curl -X GET \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/operations/OPERATION_ID
Replace the following:
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION: The location of the operation.
- OPERATION_ID: The ID of the operation being tracked.
gcloud
gcloud vmware operations describe OPERATION_ID --location=LOCATION --project=PROJECT_ID
Replace the following:
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION: The location of the operation.
- OPERATION_ID: The ID of the operation being tracked.
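If you want to wait for an operation to finish in a script, you can poll it in a loop. The following shell snippet is a minimal sketch: it assumes the operation resource exposes a boolean done field that the gcloud CLI can print with the --format flag.

# Poll the operation every 10 seconds until it reports done.
# Assumes the operation resource has a boolean "done" field.
while [[ "$(gcloud vmware operations describe OPERATION_ID \
    --location=LOCATION --project=PROJECT_ID \
    --format='value(done)')" != "True" ]]; do
  echo "Operation still running..."
  sleep 10
done
echo "Operation complete."

Replace OPERATION_ID, LOCATION, and PROJECT_ID as described above.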
Create an NFS Datastore
To create a Datastore backed by a Filestore instance, a
Google Cloud NetApp Volumes volume, or a third-party NFS share, use the
gcloud CLI or make the following POST request:
POST https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datastores?datastoreId=DATASTORE_ID
Replace the following:
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION: The location for the Datastore.
- DATASTORE_ID: The name of your Datastore.
The request body must be a JSON object containing the details of the NFS volume that will back the Datastore.
- description: (Optional) A brief description of your Datastore.
- nfs_datastore: (Required) A container for the NFS Datastore configuration.
Filestore
The following sections describe how to create a Datastore backed by Filestore using the API or gcloud CLI.
API
For a Datastore backed by Filestore, provide the following in google_file_service:
- filestore_instance: (Required) The full resource name of the Filestore instance, in the format projects/{project}/locations/{location}/instances/{instance}.
Example request body:
{
"description": "Filestore Datastore example",
"nfs_datastore": {
"google_file_service": {
"filestore_instance": "projects/FILESTORE_PROJECT_ID/locations/LOCATION/instances/INSTANCE_NAME"
}
}
}
Replace the following:
- FILESTORE_PROJECT_ID: The project ID where your Filestore instance resides.
- LOCATION: The location of the Filestore instance. This must be the same as the Datastore location specified in the request URL.
- INSTANCE_NAME: The name of your Filestore instance.
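For reference, a complete create request that combines the endpoint and the example body above resembles the following curl command. This is a sketch; replace the placeholders as described in this section and in the request URL.

curl -X POST \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
  "description": "Filestore Datastore example",
  "nfs_datastore": {
    "google_file_service": {
      "filestore_instance": "projects/FILESTORE_PROJECT_ID/locations/LOCATION/instances/INSTANCE_NAME"
    }
  }
}' \
"https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datastores?datastoreId=DATASTORE_ID"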
gcloud
gcloud vmware datastores create DATASTORE_ID \
--location=LOCATION --project=PROJECT_ID \
--filestore=projects/FILESTORE_PROJECT_ID/locations/LOCATION/instances/INSTANCE_NAME
Replace the following:
- DATASTORE_ID: The name of your Datastore.
- LOCATION: The location for the Datastore and the Filestore instance.
- PROJECT_ID: Your Google Cloud project ID.
- FILESTORE_PROJECT_ID: The project ID where your Filestore instance resides.
- INSTANCE_NAME: The name of your Filestore instance.
Google Cloud NetApp Volumes
The following sections describe how to create a Datastore backed by Google Cloud NetApp Volumes using the API or gcloud CLI.
API
For a Datastore backed by Google Cloud NetApp Volumes, provide the following in google_file_service:
- netapp_volume: (Required) The full resource name of the Google Cloud NetApp Volumes volume, in the format projects/{project}/locations/{location}/volumes/{volume}.
Example request body:
{
"description": "NetApp Volumes Datastore example",
"nfs_datastore": {
"google_file_service": {
"netapp_volume": "projects/NETAPP_PROJECT_ID/locations/LOCATION/volumes/VOLUME_NAME"
}
}
}
Replace the following:
- NETAPP_PROJECT_ID: The project ID where your Google Cloud NetApp Volumes volume resides.
- LOCATION: The location of the Google Cloud NetApp Volumes volume. This must be the same as the Datastore location specified in the request URL.
- VOLUME_NAME: The name of your Google Cloud NetApp Volumes volume.
gcloud
gcloud vmware datastores create DATASTORE_ID \
--location=LOCATION --project=PROJECT_ID \
--netapp=projects/NETAPP_PROJECT_ID/locations/LOCATION/volumes/VOLUME_NAME
Replace the following:
- DATASTORE_ID: The name of your Datastore.
- LOCATION: The location for the Datastore and the Google Cloud NetApp Volumes volume.
- PROJECT_ID: Your Google Cloud project ID.
- NETAPP_PROJECT_ID: The project ID where your Google Cloud NetApp Volumes volume resides.
- VOLUME_NAME: The name of your Google Cloud NetApp Volumes volume.
Third-party NFS
The following sections describe how to create a Datastore backed by a third-party NFS share using the API or gcloud CLI.
API
For a Datastore backed by a third-party NFS share, provide the following in nfs_datastore:
- third_party_nfs: (Required) The configuration for the third-party NFS share, which contains the following fields:
  - network: The VPC network name, in the format projects/{project}/global/networks/{network}.
  - file_share: The file share name.
  - servers: A list of server IP addresses.
The request body resembles the following:
{
"description": "Third-party NFS Datastore example",
"nfs_datastore": {
"third_party_nfs": {
"network": "projects/PROJECT_ID/global/networks/NETWORK_NAME",
"file_share": "FILE_SHARE_NAME",
"servers": ["SERVER_ADDRESS_1"]
}
}
}
Replace the following:
- PROJECT_ID: Your Google Cloud project ID.
- NETWORK_NAME: The name of the VPC network for the third-party NFS Datastore.
- FILE_SHARE_NAME: The file share name for the third-party NFS Datastore.
- SERVER_ADDRESS_1: A server IP address for the third-party NFS Datastore. Add more addresses to the list if needed.
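As with the other backends, you send this body to the create endpoint shown earlier. The following curl command is a sketch that combines that endpoint with the third-party NFS example body; replace the placeholders as described above.

curl -X POST \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
  "description": "Third-party NFS Datastore example",
  "nfs_datastore": {
    "third_party_nfs": {
      "network": "projects/PROJECT_ID/global/networks/NETWORK_NAME",
      "file_share": "FILE_SHARE_NAME",
      "servers": ["SERVER_ADDRESS_1"]
    }
  }
}' \
"https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datastores?datastoreId=DATASTORE_ID"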
gcloud
gcloud vmware datastores create DATASTORE_ID \
--third-party-nfs-network=NETWORK_NAME \
--third-party-nfs-file-share=FILE_SHARE_NAME \
--third-party-nfs-servers=SERVER_ADDRESSES \
--location=LOCATION --project=PROJECT_ID
Replace the following:
- DATASTORE_ID: The name of your Datastore.
- NETWORK_NAME: The VPC network name for the third-party NFS Datastore.
- FILE_SHARE_NAME: The file share name for the third-party NFS Datastore.
- SERVER_ADDRESSES: A comma-separated list of server IP addresses for the third-party NFS Datastore.
- LOCATION: The location for the Datastore.
- PROJECT_ID: Your Google Cloud project ID.
List or get Datastores
To list all Datastores for a given project and location, or to retrieve details
about a specific Datastore, use the gcloud CLI or make a GET request:
API
To list all Datastores for a given project and location, make a GET request:
GET https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datastores
To retrieve details about a specific Datastore, make a GET request:
GET https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID
Replace the following:
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION: The location of the Datastore.
- DATASTORE_ID: The name of the Datastore.
gcloud
To list all Datastores for a given project and location, use the gcloud vmware datastores list command:
gcloud vmware datastores list \
--location=LOCATION --project=PROJECT_ID
To retrieve details about a specific Datastore, use the gcloud vmware datastores describe command:
gcloud vmware datastores describe DATASTORE_ID \
--location=LOCATION --project=PROJECT_ID
Replace the following:
- LOCATION: The location of the Datastore.
- PROJECT_ID: Your Google Cloud project ID.
- DATASTORE_ID: The name of the Datastore.
Mount a Datastore
After you create a Datastore resource, you must mount it to a vSphere
cluster to make it available to ESXi hosts. To mount an NFS Datastore,
use the gcloud CLI or make a POST request to the target cluster:
API
POST https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/privateClouds/PRIVATE_CLOUD_ID/clusters/CLUSTER_ID:mountDatastore
Example request body:
{
"datastore_mount_config": {
"datastore": "projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID",
"datastore_network": {
"subnet": "projects/PROJECT_ID/locations/LOCATION/privateClouds/PRIVATE_CLOUD_ID/subnets/SERVICE_SUBNET_NAME",
"connection_count": 4
},
"access_mode": "READ_WRITE",
"nfs_version": "NFS_V3"
}
}
- datastore: The resource name of the Datastore to mount.
- subnet: The resource name of the service subnet to use for NFS traffic.
- connection_count: (Optional) The number of connections. Default is 4.
- access_mode: (Optional) The access mode, READ_WRITE or READ_ONLY. Default is READ_WRITE.
- nfs_version: (Optional) The NFS version. Default is NFS_V3.
Replace the following:
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION: The location of the resources.
- PRIVATE_CLOUD_ID: The name of the private cloud.
- CLUSTER_ID: The name of the cluster.
- DATASTORE_ID: The name of the Datastore to mount.
- SERVICE_SUBNET_NAME: The name of the service subnet to use for NFS traffic.
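Combining the mount endpoint and the example body, a complete request resembles the following curl command. This is a sketch; adjust the optional fields as needed and replace the placeholders as described above.

curl -X POST \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
  "datastore_mount_config": {
    "datastore": "projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID",
    "datastore_network": {
      "subnet": "projects/PROJECT_ID/locations/LOCATION/privateClouds/PRIVATE_CLOUD_ID/subnets/SERVICE_SUBNET_NAME",
      "connection_count": 4
    },
    "access_mode": "READ_WRITE",
    "nfs_version": "NFS_V3"
  }
}' \
https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/privateClouds/PRIVATE_CLOUD_ID/clusters/CLUSTER_ID:mountDatastore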
gcloud
gcloud vmware private-clouds clusters mount-datastore CLUSTER_ID \
--location=LOCATION --project=PROJECT_ID \
--private-cloud=PRIVATE_CLOUD_ID \
--datastore=projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID \
--subnet=SERVICE_SUBNET_NAME
Alternatively, you can provide network configuration details using a JSON file with the --datastore-network flag:
gcloud vmware private-clouds clusters mount-datastore CLUSTER_ID \
--location=LOCATION --project=PROJECT_ID \
--private-cloud=PRIVATE_CLOUD_ID \
--datastore=projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID \
--datastore-network=network-config.json
Where network-config.json contains:
{
"subnet": "SERVICE_SUBNET_NAME",
"mtu": 1500,
"connection-count": 4
}
Replace the following:
- CLUSTER_ID: The name of the cluster.
- LOCATION: The location of the resources.
- PROJECT_ID: Your Google Cloud project ID.
- PRIVATE_CLOUD_ID: The name of the private cloud.
- DATASTORE_ID: The name of the Datastore to mount.
- SERVICE_SUBNET_NAME: The name of the service subnet to use for NFS traffic.
After a successful mount operation, you can view the mounted Datastore
configuration in the cluster resource. The cluster resource includes a
DatastoreMountConfig entry that corresponds to the mount. For example:
...
datastoreMountConfig:
- accessMode: READ_WRITE
datastore: projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID
datastoreNetwork:
connectionCount: 4
mtu: 1500
networkPeering: projects/PROJECT_ID/locations/global/networkPeerings/PEERING_NAME
subnet: projects/PROJECT_ID/locations/LOCATION/privateClouds/PRIVATE_CLOUD_ID/subnets/SUBNET_NAME
fileShare: FILE_SHARE_NAME
nfsVersion: NFS_V3
servers:
- SERVER_IP
...
After a successful mount operation, the Datastore resource's clusters list is
updated. You can describe a Datastore to see which clusters it is mounted on.
API
GET https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID
gcloud
gcloud vmware datastores describe DATASTORE_ID --location=LOCATION --project=PROJECT_ID
After describing a Datastore, look for the clusters field in the response to
see which clusters the Datastore is mounted on. The following example output
shows a Datastore mounted on one cluster:
{
"name": "projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID",
...
"clusters": [
"projects/PROJECT_ID/locations/LOCATION/privateClouds/PRIVATE_CLOUD_ID/clusters/CLUSTER_ID"
],
...
}
Update a Datastore
Only the description field of a Datastore can be updated. To update a
Datastore, use the gcloud CLI or make a PATCH request:
API
PATCH https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID
Example request body:
{
"description": "New datastore description"
}
Replace the following:
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION: The location of the Datastore.
- DATASTORE_ID: The ID of the Datastore.
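A complete update request resembles the following curl command. This is a sketch; it includes an updateMask query parameter that restricts the update to the description field, which may or may not be required depending on the API's defaults.

curl -X PATCH \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
  "description": "New datastore description"
}' \
"https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID?updateMask=description"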
gcloud
gcloud vmware datastores update DATASTORE_ID \
--location=LOCATION --project=PROJECT_ID \
--description="DESCRIPTION"
Replace the following:
- DATASTORE_ID: The name of the Datastore.
- LOCATION: The location of the Datastore.
- PROJECT_ID: Your Google Cloud project ID.
- DESCRIPTION: A description for the Datastore.
Unmount a Datastore
To unmount an NFS Datastore from a cluster, use the gcloud CLI or
make a POST request:
API
POST https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/privateClouds/PRIVATE_CLOUD_ID/clusters/CLUSTER_ID:unmountDatastore
Example request body:
{
"datastore": "projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID"
}
Replace the following:
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION: The location of the resources.
- PRIVATE_CLOUD_ID: The name of the private cloud.
- CLUSTER_ID: The name of the cluster.
- DATASTORE_ID: The name of the Datastore to unmount.
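Combining the unmount endpoint and the example body, the full request resembles the following curl command (a sketch; replace the placeholders as described above):

curl -X POST \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
  "datastore": "projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID"
}' \
https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/privateClouds/PRIVATE_CLOUD_ID/clusters/CLUSTER_ID:unmountDatastore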
gcloud
gcloud vmware private-clouds clusters unmount-datastore CLUSTER_ID \
--location=LOCATION --project=PROJECT_ID \
--private-cloud=PRIVATE_CLOUD_ID \
--datastore=projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID
Replace the following:
- CLUSTER_ID: The name of the cluster.
- LOCATION: The location of the resources.
- PROJECT_ID: Your Google Cloud project ID.
- PRIVATE_CLOUD_ID: The name of the private cloud.
- DATASTORE_ID: The name of the Datastore to unmount.
Delete a Datastore
To delete a Datastore resource, use the gcloud CLI or make a DELETE
request. The Datastore must not be mounted to any cluster.
API
DELETE https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID
Replace the following:
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION: The location of the Datastore.
- DATASTORE_ID: The name of the Datastore to delete.
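For example, you can send the delete request with a curl command like the following. This sketch reuses the authorization header shown earlier in this document.

curl -X DELETE \
-H "Authorization: Bearer $TOKEN" \
https://vmwareengine.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datastores/DATASTORE_ID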
gcloud
gcloud vmware datastores delete DATASTORE_ID \
--location=LOCATION --project=PROJECT_ID
Replace the following:
- DATASTORE_ID: The name of the Datastore to delete.
- LOCATION: The location of the Datastore.
- PROJECT_ID: Your Google Cloud project ID.
Troubleshooting
The following tables describe common errors that you might encounter when creating, mounting, or unmounting Datastores:
Create Datastore errors
The following table describes errors that you might encounter when creating Datastores:
| Error message | Cause | Solution |
|---|---|---|
| The filestore NFS file-server instance cannot be empty. | The filestore_instance field in the request body is empty. | Provide the full resource name of your Filestore instance. |
| The netapp NFS file-server volume cannot be empty. | The netapp_volume field in the request body is empty. | Provide the full resource name of your Google Cloud NetApp Volumes volume. |
| Invalid Field format for field type filestore_instance | The filestore_instance field does not follow the required format. | Ensure the resource name is in the format projects/{project}/locations/{location}/instances/{instance}. |
| Invalid Field format for field type netapp_volume… | The netapp_volume field does not follow the required format. | Ensure the resource name is in the format projects/{project}/locations/{location}/volumes/{volume}. |
| Datastore and NFS volume are in different locations. | The Filestore instance or Google Cloud NetApp Volumes volume is in a different location than the Datastore you are trying to create. | Ensure both the NFS volume and the Datastore are in the same location. |
| User missing required permissions "file.instances.get" | The service account does not have the necessary IAM permissions to access the Filestore instance. | Grant the roles/file.viewer role to the VMware Engine service agent. |
| Permission 'netapp.volumes.get' denied on resource… | The service account does not have the necessary IAM permissions to access the Google Cloud NetApp Volumes volume. | Grant the roles/netapp.viewer role to the VMware Engine service agent. |
| The Filestore instance ... does not exist. | The specified Filestore instance couldn't be found. | Verify that the Filestore instance exists and that the resource name is correct. |
| The Netapp volume ... does not exist. | The specified Google Cloud NetApp Volumes volume couldn't be found. | Verify that the Google Cloud NetApp Volumes volume exists and that the resource name is correct. |
| The Filestore instance has an unsupported tier | The Filestore instance uses a tier that this feature doesn't support. | Create a new Filestore instance with a supported tier: Zonal or Regional. |
| The Filestore instance has an unsupported NFS version | The Filestore instance uses an unsupported NFS version. | Create a new Filestore instance with NFS version 3. |
| The Netapp volume ... has an unsupported NFS version … | The Google Cloud NetApp Volumes volume is using an unsupported NFS version. | Create a new Google Cloud NetApp Volumes volume with NFS version 3. |
| The Netapp volume ... has delete protection disabled. | The Google Cloud NetApp Volumes volume has delete protection disabled. | Enable delete protection on the Google Cloud NetApp Volumes volume. |
| Cannot create Datastore. Resource ... with the same configuration already exists. | A Datastore with the same name and configuration already exists. | Choose a different name for your Datastore or modify the configuration. |
Mount and unmount Datastore errors
The following table describes errors that you might encounter when mounting or unmounting Datastores:
| Error message | Cause | Solution |
|---|---|---|
| DatastoreFormat validation failed. | The specified Datastore format is not supported or is invalid. | Ensure the Datastore format is compatible with VMware Engine (for example, NFSv3). |
| Invalid MTU range, should be 1300 to 9000 | The MTU (Maximum Transmission Unit) value provided for the Datastore network is outside the acceptable range of 1300 to 9000. | Specify an MTU value between 1300 and 9000. |
| Datastore project is not equal to cluster project | The Google Cloud project ID of the Datastore does not match the Google Cloud project ID of the vSphere cluster. | Ensure the Datastore and the cluster belong to the same Google Cloud project. |
| Invalid MTU, MTU should be consistent with the MTU of existing mounted Datastore in the cluster | The MTU of the new Datastore network is inconsistent with the MTU of other NFS Datastores already mounted on the same cluster. | Align the MTU of the new Datastore with the MTU of existing mounted Datastores in the cluster. |
| Datastore should be present and should be in Ready state | The specified Datastore resource does not exist or is not in the READY state. | Verify that the Datastore has been created successfully and its status is READY using the Get or List Datastore API. |
| For First party, referenced filestore or netapp should be present and should be in ready state | The underlying Filestore instance or Google Cloud NetApp Volumes volume is either missing or not in a READY state. | Ensure the referenced NFS volume exists and is in a READY state in its Google Cloud project. |
| Network peering should exist in active state between file share VPC and VMware Engine network of cluster's private cloud | A VPC Network Peering connection is required between the VPC network where the NFS volume resides and the VMware Engine network of the private cloud, and this connection is either missing or not in an ACTIVE state. | Confirm that an active VPC Network Peering connection exists between the file share's VPC and the VMware Engine network of your private cloud. |
| Mount operation fails on legacy networks | For legacy networks, the private connection to the NFS volume's tenant project is missing or inactive. | Ensure that an active private connection to the tenant project exists before you attempt to mount the Datastore. Don't delete a private connection that a mounted Datastore is using. |
| For First party, export option should added to allow pc subnet used for mount | The export policy on the NFS volume does not include the private cloud's service subnet for access. | Modify the export policy of your NFS volume to allow access from the private cloud's service subnet that will be used for mounting. |
| Subnet should be present with valid ip CIDR configured to it | The service subnet specified for the Datastore network is either missing or does not have a valid IP CIDR range configured. | Ensure the designated service subnet exists and has a properly configured IP CIDR range, sufficient to allocate IPs to all ESXi hosts in the cluster. |
| Invalid Datastore format | The specified Datastore resource name is not in a recognized or correct format, preventing the unmount operation. | Verify that the Datastore resource name provided in the unmount request is accurate and follows the format projects/{project}/locations/{location}/datastores/{datastore_id}. |
| Datastore not mounted on cluster | The Datastore you are attempting to unmount is not mounted on the specified cluster. | Before attempting to unmount, confirm that the Datastore is mounted on the target vSphere cluster. |