Connect NFS clients

This page provides instructions on how to connect NFS clients to your volumes.

Before you begin

Install NFS client tools based on your Linux distribution type to prepare your client:

RedHat

Run the following command:

sudo yum install -y nfs-utils

SUSE

Run the following command:

sudo zypper install -y nfs-client

Debian

Run the following command:

sudo apt-get install nfs-common

Ubuntu

Run the following command:

sudo apt-get install nfs-common
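
Optionally, you can confirm that the NFS mount helper shipped by these packages is available; the exact path and version string vary by distribution:

/sbin/mount.nfs -V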

Volume access control using export policies

Volume access control in NFSv3 and NFSv4.1 is based on the client's IP address. A volume's export policy contains up to 20 export rules. Each rule contains a comma-separated list of IP addresses or network CIDRs that defines the allowed clients enabled to mount the volume. Each rule also defines the type of access those clients have, such as Read & Write or Read Only.

Use the following tabs to review policies based on NFS versions:

NFS without Kerberos

All NFS versions without Kerberos use the AUTH_SYS security flavor. In this mode, you must tightly manage the export rules to allow only clients you trust and which can ensure user ID and group ID integrity.

As a security measure, NFS servers automatically map NFS calls with UID=0 (root) to UID=65534 (anonymous), which has limited permissions on the file system. During volume creation, you can enable the root access option to control this behavior. If you enable root access, user ID 0 stays 0. As a best practice, create a dedicated export rule which enables root access for your trusted administrator hosts and disable root access for all other clients.

NFSv4.1 with Kerberos

NFSv4.1 with Kerberos uses export policies plus additional Kerberos authentication to access volumes. You can configure export rules to apply to the following security flavors (see the example mount command after the list):

  • Kerberos only (krb5)

  • Kerberos signing (krb5i)

  • Kerberos privacy (krb5p)
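
For example, after an export rule allows the krb5i flavor, a client that holds a valid Kerberos ticket can mount the volume with the matching sec option. The following is a sketch; the server address and export path are placeholders taken from your volume's mount instructions:

sudo mount -t nfs -o vers=4.1,sec=krb5i SERVER_IP:/VOLUME_SHARE /mnt/kerberized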

Best practices for export policies

We recommend the following best practices for export policies:

  • Order the export rules from most specific to least specific.

  • Export only to the trusted clients, such as specific clients or CIDRs with the trusted clients.

  • Limit the root access to a small group of trusted administration clients.

The following example shows an export policy that applies these practices:

Rule | Allowed clients      | Access       | Root access | Description
-----|----------------------|--------------|-------------|------------
1    | 10.10.5.3, 10.10.5.9 | Read & Write | On          | Administration clients. Root user stays root and can manage all the file permissions.
2    | 10.10.5.0/24         | Read & Write | Off         | All other clients from the 10.10.5.0/24 network are allowed to mount, but root access gets mapped to nobody.
3    | 10.10.6.0/24         | Read-Only    | Off         | Another network is allowed to read data from the volume, but no writes.
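
A policy like this one can be expressed with repeated --export-policy flags on the gcloud CLI, as described later on this page. The following is a sketch using the has-root-access parameter; because the allowed-clients value of the first rule itself contains a comma, that rule uses gcloud's alternative delimiter syntax (^:^) from gcloud topic escaping:

gcloud netapp volumes update my_volume --location=us-east4 \
  --export-policy=^:^allowed-clients=10.10.5.3,10.10.5.9:nfsv3=true:access-type=READ_WRITE:has-root-access=true \
  --export-policy=allowed-clients=10.10.5.0/24,nfsv3=true,access-type=READ_WRITE,has-root-access=false \
  --export-policy=allowed-clients=10.10.6.0/24,nfsv3=true,access-type=READ_ONLY,has-root-access=false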

After a client mounts a volume, the file level access determines what a user is allowed to do. For more information, see NFS file-level access control for UNIX-style volumes.

User ID squashing

NFS export policies provide controls for user and group ID squashing, which lets you remap user and group IDs to an anonymous user ID for security purposes.

Root squashing

NFS servers improve security by remapping the root user (UID=0) to nobody (UID=65534), which makes root an unprivileged user for file access on the volume. This feature is known as root squashing. The option to disable it and retain root's privileges is called no_root_squash on NFS servers.
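
To observe root squashing, create a file as root from a client that matches a squashing rule; the file is owned by the anonymous user instead of root. The server address and mount point below are placeholders:

sudo mount -t nfs SERVER_IP:/VOLUME_SHARE /mnt/my_volume
sudo touch /mnt/my_volume/squash-test
ls -ln /mnt/my_volume/squash-test   # owner is UID 65534 (nobody), not 0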

By default, volumes without a defined export policy are inaccessible to all client IP addresses. When you create an export policy rule in the Google Cloud console, the default settings include Read & Write access and root squashing. The Google Cloud API, Google Cloud CLI, and Terraform previously supported control over root squashing using the has-root-access parameter. While has-root-access is still accepted, it has been replaced by the squash-mode parameter.

User and group ID squashing

The squash-mode parameter provides control over squashing both user and group IDs to an anonymous UID, which can be useful for public SFTP dropbox directories. This parameter also replaces the has-root-access parameter and is supported across the API, Google Cloud CLI, and Terraform.

The squash-mode parameter accepts the following values:

  • no-root-squash: in this mode, the root user remains root and doesn't get remapped to nobody (UID=65534).

  • root-squash: this setting remaps the root user to nobody.

  • all-squash: this option provides anonymous access for all users, including root. All users are remapped to the UID and GID specified by the anon-uid parameter. When using all-squash, you must also specify anon-uid, and set access-type to READ_WRITE.

Considerations

Consider the following for export policy rules with squash mode:

  • An export policy supports only one all-squash rule.

  • When all-squash is enabled, the root user is squashed to anonymous. This can be overridden by a higher priority rule that uses no-root-squash.

  • Volume replication isn't supported for volumes with a squash-mode style export policy rule.

  • For the Flex service level, all-squash doesn't change ownership of the volume's root inode automatically. To achieve this, add a no-root-squash export rule, which allows the root user to use chown to change ownership of the root inode to the required UID, as shown in the sketch after this list.

  • If both has-root-access and squash-mode are specified for an export rule, squash-mode takes precedence and the value of has-root-access is ignored.
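
The following sketch shows that workaround from an administration host that matches the no-root-squash rule; the server address, mount point, and UID and GID values are placeholders:

sudo mount -t nfs SERVER_IP:/VOLUME_SHARE /mnt/my_volume
sudo chown 2000:2000 /mnt/my_volume    # set the root inode to the anonymous UID and GID
sudo umount /mnt/my_volume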

Edit a volume

Use the following instructions to update a volume's export policy with squash-mode using the Google Cloud CLI:

gcloud

Update a volume with an export policy using squash-mode:

gcloud netapp volumes update VOLUME_ID \
  --project=PROJECT_ID \
  --location=LOCATION \
  --export-policy=access-type=ACCESS_TYPE,squash-mode=SQUASH_MODE,anon-uid=ANON_UID,allowed-clients=ALLOWED_CLIENTS_IP_ADDRESSES

Replace the following information:

  • VOLUME_ID: the ID of the volume.

  • PROJECT_ID: the name of the project the volume is in.

  • LOCATION: the location of the volume.

  • ACCESS_TYPE: the access type. Must be one of READ_WRITE, READ_ONLY, or READ_NONE.

  • SQUASH_MODE: the squash mode for the export rule. Must be one of NO_ROOT_SQUASH, ROOT_SQUASH, or ALL_SQUASH.

  • ANON_UID: the anonymous UID that squashed users are remapped to.

  • ALLOWED_CLIENTS_IP_ADDRESSES: a comma-separated list of allowed client IP addresses or CIDR ranges.

Export policy parameters can be repeated to include multiple rules.

The following example shows an export policy that has both a no-root-squash rule and an all-squash rule:

gcloud netapp volumes update my_volume --location=us-east4 \
  --export-policy=allowed-clients=10.0.1.18,nfsv3=true,access-type=READ_WRITE,squash-mode=NO_ROOT_SQUASH \
  --export-policy=allowed-clients=10.0.2.0/24,nfsv3=true,access-type=READ_WRITE,squash-mode=ALL_SQUASH,anon-uid=2000
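
To verify the applied rules, you can read the export policy back from the volume. The exportPolicy field name in the following projection reflects the API resource layout; treat it as a sketch:

gcloud netapp volumes describe my_volume --location=us-east4 \
  --format="yaml(exportPolicy)"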

For more information about additional optional flags, see Google Cloud SDK documentation on volumes export policy.

Mount instructions for NFS clients

Use either the Google Cloud console or the Google Cloud CLI to retrieve mount instructions for NFS clients:

Console

  1. Go to the NetApp Volumes page in the Google Cloud console.

    Go to NetApp Volumes

  2. Click Volumes.

  3. Click Show more.

  4. Select Mount instructions.

  5. Follow the mount instructions shown in the Google Cloud console.

  6. Identify the mount command and use the mount options unless your workload has specific mount option requirements.

    NFSv3 only: if your application doesn't use locks or you didn't configure your clients to enable NSM communication, we recommend that you add the nolock mount option.
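
For reference, a typical NFSv3 mount command with the nolock option looks like the following sketch; the server address and export path come from your volume's mount instructions:

sudo mount -t nfs -o vers=3,nolock SERVER_IP:/VOLUME_SHARE /mnt/my_volume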

gcloud

Look up the mount instructions for a volume:

 gcloud netapp volumes describe VOLUME_NAME \
    --project=PROJECT_ID \
    --location=LOCATION \
    --format="value(mountOptions.instructions)"

Replace the following information:

  • VOLUME_NAME: the name of the volume.

  • PROJECT_ID: the name of the project the volume is in.

  • LOCATION: the location of the volume.

For more information on additional optional flags, see Google Cloud SDK documentation on volumes.

Additional NFSv4.1 instructions

When you enable NFSv4.1, volumes with service levels Standard, Premium, and Extreme automatically enable NFSv4.2 too. The Linux mount command always mounts the highest available NFS version, unless you specify the version to mount. If you want to mount with NFSv4.1, use the -o vers=4.1 parameter in your mount command.
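
For example, to force NFSv4.1 on a volume that also offers NFSv4.2 (the server address and export path are placeholders):

sudo mount -t nfs -o vers=4.1 SERVER_IP:/VOLUME_SHARE /mnt/my_volume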

In NFSv3, users and groups are identified by user IDs (UIDs) and group IDs (GIDs) sent over the NFSv3 protocol. It's important to make sure that the same UID and GID represent the same user and group on all clients accessing the volume.

NFSv4 removed the need for explicit UID and GID mapping by using security identifiers. Security identifiers are strings formatted as <username|groupname>@<fully_qualified_domain>. An example of a security identifier is bob@example.com. The client needs to translate the UIDs and GIDs it uses internally into security identifiers before sending an NFSv4 request to the server. The server needs to translate the security identifiers of an incoming request into UIDs and GIDs, and the other way around for its response.

The advantage of this translation is that each client and the server can use different internal UIDs and GIDs. The disadvantage is that all clients and the server need to maintain a mapping between UIDs and GIDs and user and group names. The mapping information on clients can come from local files like /etc/passwd and /etc/group or from an LDAP directory. The mapping is managed by rpc.idmapd, which must run on your client.

On NetApp Volumes, LDAP must provide the mapping information; Active Directory is the only supported RFC2307bis-compatible LDAP server. When you use Kerberos for NFSv4, the security identifier stores Kerberos principals in the format username@DOMAINNAME, where DOMAINNAME (in capital letters) is the realm name.

Numeric IDs

If you don't want to configure name mappings and instead want to use NFSv4 as a drop-in replacement for NFSv3, NFSv4 offers an option called numeric IDs, which sends UIDs and GIDs encoded as text strings in place of name-based security identifiers. This simplifies configuration.

You can check your client setting using the following command:

cat /sys/module/nfs/parameters/nfs4_disable_idmapping

The default value is Y, which enables numeric IDs. NetApp Volumes supports the use of numeric IDs.
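
If the parameter is set to N on your client, the following sketch switches it back to numeric IDs at runtime and persists the setting through a module option (the file name under /etc/modprobe.d is a convention, not a requirement):

echo Y | sudo tee /sys/module/nfs/parameters/nfs4_disable_idmapping
echo "options nfs nfs4_disable_idmapping=Y" | sudo tee /etc/modprobe.d/nfs-idmap.conf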

Configure rpc.idmapd on NFS client

Regardless of the type of IDs or security identifiers you use, you must configure rpc.idmapd on your NFS client. If you followed the installation instructions for client utilities in the Before you begin section, it should already be installed but might not be running. Some distributions start it automatically using systemd when you mount the first NFS volume. The minimum required configuration for rpc.idmapd is the domain setting; otherwise, the root user is displayed as nobody with UID=65534 or 4294967295.

Use the following instructions to configure rpc.idmapd on your NFS client:

  1. On your client, open the file /etc/idmapd.conf and change the domain parameter to one of the following values (see the example configuration after these steps):

    • If your volume isn't enabled for LDAP, domain = defaultv4iddomain.com.

    • If your volume is enabled for LDAP, domain = <FQDN_of_Windows_domain>.

  2. Activate the changes to rpc.idmapd by running the following command:

     nfsidmap -c
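
For example, for a client of an LDAP-enabled volume joined to a hypothetical corp.example.com domain, the relevant part of /etc/idmapd.conf looks like the following:

[General]
Domain = corp.example.com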

NFSv4.2 support

The Standard, Premium, and Extreme service levels now support the NFSv4.2 protocol in addition to NFSv4.1 on volumes that already have NFSv4.1 enabled.

When mounting an NFS volume, the Linux mount command automatically selects the highest available NFS version. Mounting an NFSv4.1 enabled volume automatically defaults to NFSv4.2 unless the vers=4.1 mount option is explicitly specified.

NetApp Volumes supports NFS extended attributes (xattrs) with NFSv4.2. The usage and limitations of xattrs, as detailed in TR-4962, also apply.
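
For example, on an NFSv4.2 mount you can set and read an extended attribute in the user namespace with the standard attr tools; the attribute name and file path below are placeholders:

setfattr -n user.backup-tier -v cold /mnt/my_volume/report.dat
getfattr -n user.backup-tier /mnt/my_volume/report.dat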

Connect Linux to LDAP

If you use NFSv3 extended groups or NFSv4.1 with security identifiers, you have configured NetApp Volumes to use your Active Directory as an LDAP server through an Active Directory policy attached to the storage pool.

To maintain consistent user information between the NFS client and server, you might need to configure your client to use Active Directory as an LDAP name service for user and group information.

When using Kerberized NFS, you might need to configure LDAP on your clients to ensure consistency between the client and server.

What's next

Connect large capacity volumes with multiple storage endpoints.