Create an external replication

This page describes how to create an external replication.

Before you begin

Before setting up an external replication, we recommend that you review the external replication workflow. The external replication process starts by creating a destination volume and specifying the source system details. This action creates a destination volume resource and a replication child resource within NetApp Volumes for managing the replication.

Considerations

  • The following features aren't supported for destination volumes during the external replication process:

    • Auto-tiering

    • Volume replication

    • Flex service level

  • You must use manual backups when you back up NetApp Volumes destination volumes with the integrated backup service. Assigning a backup policy to a destination volume fails.

  • Select the correct storage pool and make sure that the destination volume is large enough to accommodate the logical size (not the physical size) used by your ONTAP source volume.

  • Specify the correct share name and protocol types. The share name must match the source, and the protocol types must be chosen carefully as they can't be changed after volume creation. The protocol settings you choose also map to volume security styles. Make sure these settings are consistent.

  • Before creating an external replication, make sure you have CLI access and the necessary permissions on the source ONTAP system. You must run CLI commands on the source ONTAP system within one hour of starting the replication process.
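As an aid for the sizing consideration above, the following POSIX-shell sketch rounds the source volume's logical used size in bytes (as reported, for example, by an ONTAP `volume show` with a logical-used field; the field name is an assumption) up to whole GiB for the destination capacity:

```shell
# Round a byte count up to whole GiB for the destination volume capacity.
# The input is the *logical* used size of the ONTAP source volume.
bytes_to_capacity_gib() {
  bytes="$1"
  gib=1073741824                       # 1 GiB in bytes
  echo $(( (bytes + gib - 1) / gib ))  # ceiling division
}

bytes_to_capacity_gib 161061273600    # 150 GiB of logical data; prints 150
```

This is only a sizing aid; pick a capacity at least this large when you create the destination volume.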

Prerequisites for external replication

External replication and volume migration share the same prerequisites.

Create an external replication

Use the following instructions to create an external replication using the Google Cloud console or Google Cloud CLI.

Console

  1. Go to the NetApp Volumes page in the Google Cloud console.

    Go to NetApp Volumes

  2. Click External replications from the Data protection menu.

  3. Click Replicate external volume.

  4. In the Prerequisites section, review the prerequisites and click Next.

  5. In the External source details section, complete the following steps:

    1. Enter the name of your source cluster in the Cluster name field.

    2. Enter the name of the Storage Virtual Machine (SVM), also known as vserver, that hosts the source volume in the Storage VM name field.

    3. Enter the name of the source volume in the Volume name field.

    4. Enter the Intercluster-LIF (IC-LIF) IP address in the Inter-cluster IP field. Each node of the source cluster needs an IC-LIF. Specify all IC-LIFs as a comma-separated list.

    5. Optional: enter a description for the source ONTAP cluster location in the Location field.

  6. Optional: in the Volume style section, select the FlexGroup volume checkbox to create a large capacity destination volume in NetApp Volumes.

    1. Enter the number of constituent volumes on the source volume in the Constituent volume count field.

    2. Click Next.

Configure destination volume details

  1. In the Create destination volume section, complete the following steps:

    1. Enter the name of the destination volume in the Destination volume name field.

    2. Optional: enter a description for the volume in the Description field.

  2. In the Storage pool details section, complete the following steps:

    1. Click Select storage pool.

    2. From the list of storage pools displayed, select the required storage pool.

    3. Click Select.

      If the storage pools in the list don't have the settings you want, click Create new storage pool.

  3. In the Volume details section, enter the share name of the volume in the Share name field. The share name must be unique within a location. We recommend that you use the destination volume name as the share name.

  4. In the Capacity configuration section, complete the following steps:

    1. Select the Enable large capacity checkbox.

    2. Enter the volume capacity in the Capacity field.

  5. Optional: if the selected storage pool allows auto-tiering:

    1. Select the Enable auto-tiering checkbox if you want to enable auto-tiering for the volume.

    2. Specify a cooling threshold between 2 and 183 days in the Cooling threshold days field. The default cooling threshold value is 31 days.

  6. In the Protocol configuration section, select the same protocol as the source volume. For some protocols, various options are displayed. For more information about protocol options, see Create a new volume.

  7. Optional: in the Snapshot configuration section, complete the following steps:

    1. Select the Make snapshot directory visible checkbox to enable file system access to snapshot versions by clients. For more information, see NetApp Volumes volume snapshots overview.

    2. Select Allow scheduled snapshots to configure the volume to automatically take snapshots. You can specify the number of snapshots to keep at hourly, daily, weekly, and monthly snapshot intervals. Times are specified in UTC. If you reach the maximum number of snapshots, the oldest snapshot is deleted.

    3. Review your snapshot selections.

  8. Click Next.

Define replication schedule

  1. In the Replication schedule section, complete the following steps:

    1. Enter the name of your replication in the Replication name field.

    2. Optional: enter a description for your replication in the Description field.

    3. Click the Replication schedule drop-down list, and select one of the following frequencies for replicating data from the source volume to the destination volume:

      • Every 10 mins

      • Hourly

      • Daily

      The default is Hourly. Large capacity volumes don't support the Every 10 mins option.

    4. Optional: click Add label to enter relevant labels for reporting and querying purposes.

  2. Click Next.

  3. Review your settings and click Create to start the replication process.

After you create the replication, you are redirected to the volume details view. Click the Replication tab to monitor the replication status.

You must authenticate the SnapMirror connection between your source ONTAP system and NetApp Volumes by running the cluster peer create command on the source ONTAP cluster. If no prior peering exists, the Replication tab displays Pending cluster peering.

If you click Configure peering, a side page with instructions is displayed. Follow these instructions, and click Check peering. After a successful peering, the side page disappears, and the transfer status of the replication changes to Preparing. The baseline transfer is now running. A baseline transfer can take minutes, hours, or days depending on the amount of data to be transferred and the network speed. Once the baseline transfer is complete, the transfer status switches to Mirrored.

gcloud

To create an external replication:

gcloud netapp volumes create VOLUME_NAME --location=LOCATION \
  --capacity=CAPACITY --protocols=PROTOCOLS \
  --share-name=SHARE_NAME --storage-pool=STORAGE_POOL \
  --hybrid-replication-parameters=hybrid-replication-type=ONPREM_REPLICATION,peer-cluster-name=PEER_CLUSTER_NAME,peer-ip-addresses=PEER_IP_ADDRESSES,peer-svm-name=PEER_SVM_NAME,peer-volume-name=PEER_VOLUME_NAME,replication=REPLICATION,replication-schedule=REPLICATION_SCHEDULE,cluster-location=CLUSTER_LOCATION,description=DESCRIPTION,labels=LABELS

The hybrid-replication-parameters block starts a replication workflow.

Replace the following information:

  • VOLUME_NAME: the name of the volume. This name must be unique per location.

  • LOCATION: the location for the volume.

  • CAPACITY: the capacity of the volume. It defines the capacity that NAS clients see.

  • PROTOCOLS: the NAS protocols the volume is exported with.

  • SHARE_NAME: the NFS export path or SMB share name of the volume.

  • STORAGE_POOL: the storage pool to create the volume in.

  • HYBRID_REPLICATION_TYPE: for external replication, specify ONPREM_REPLICATION.

  • PEER_CLUSTER_NAME: the name of the ONTAP cluster hosting the source volumes.

  • PEER_IP_ADDRESSES: the IC-LIF IP addresses of the source ONTAP cluster. The source cluster must provide one IC-LIF per node. Specify all of them, separated by # signs.

    The following example shows you how to add multiple IC-LIF IP addresses of the ONTAP cluster:

    peer-ip-addresses=10.0.0.25#10.0.0.26
    
  • PEER_SVM_NAME: the name of the storage virtual machine (SVM), also known as vserver, that owns the source volume.

  • PEER_VOLUME_NAME: the name of the source volume.

  • REPLICATION: the name of the replication resource to be created.

  • LARGE_VOLUME_CONSTITUENT_COUNT: this parameter is only required when your source volume is a FlexGroup. For more information, see FlexGroups and Large Volumes before you proceed.

    To create a large volume, specify --large-volume true and --multiple-endpoints true as create parameters too.

  • REPLICATION_SCHEDULE: Optional: you can set the replication schedule to one of the following intervals:

    • EVERY_10_MINUTES

    • HOURLY

    • DAILY

    The default is HOURLY. Large capacity volumes don't support EVERY_10_MINUTES.

  • CLUSTER_LOCATION: Optional: the description of the source cluster location.

  • DESCRIPTION: Optional: the description text for the replication resource.

  • LABELS: Optional: labels for the replication resource.

    The following example shows how to specify key-value pairs for the labels parameter:

    labels=KEY1:VALUE1#KEY2:VALUE2
    

Example invocation:

$ gcloud netapp volumes create ok-destination --location australia-southeast1 \
--capacity 100 --protocols=nfsv3 \
--share-name ok-destination --storage-pool okrause-pool \
--hybrid-replication-parameters=hybrid-replication-type=ONPREM_REPLICATION,peer-cluster-name=au2se1cvo2sqa,peer-ip-addresses=10.0.0.25#10.0.0.26,peer-svm-name=svm_au2se1cvo2sqa,peer-volume-name=okrause_source,replication=okrause-replication,replication-schedule=HOURLY

To meet your volume requirements, specify all applicable optional parameters. For example, an NFS volume might require an export policy.

Look up all options:

gcloud netapp volumes create --help
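Because --hybrid-replication-parameters takes all of its settings as a single comma-separated string, it can help to assemble the string in a shell variable first. The following sketch reuses the illustrative values from the example invocation above:

```shell
# Build the hybrid-replication-parameters string step by step.
# IC-LIF addresses use '#' as the separator, not a comma.
PEER_IPS="10.0.0.25#10.0.0.26"

PARAMS="hybrid-replication-type=ONPREM_REPLICATION"
PARAMS="$PARAMS,peer-cluster-name=au2se1cvo2sqa"
PARAMS="$PARAMS,peer-ip-addresses=$PEER_IPS"
PARAMS="$PARAMS,peer-svm-name=svm_au2se1cvo2sqa"
PARAMS="$PARAMS,peer-volume-name=okrause_source"
PARAMS="$PARAMS,replication=okrause-replication"
PARAMS="$PARAMS,replication-schedule=HOURLY"

echo "$PARAMS"
```

Then pass the variable quoted, as --hybrid-replication-parameters="$PARAMS", so the string reaches gcloud unchanged.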

After creating the destination volume and the replication resource, NetApp Volumes tries to peer with your source ONTAP system. This peering process serves as an authentication and authorization step, and protects your source cluster from malicious SnapMirror requests. Therefore, make sure you only peer with trusted systems.

Look up the next steps:

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION

You can print the current authentication status at any time. However, state changes might take up to five minutes to appear after an action advances the process to the next step.
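A simple way to track progress is to wrap the status lookup in a polling loop. In this sketch, `get_replication_state` is a stand-in for a real call such as `gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION --format="value(state)"` (the format projection and the READY state value are assumptions):

```shell
# Poll the replication state until it reaches READY (sketch).
# Replace get_replication_state with the real gcloud call, for example:
#   gcloud netapp volumes replications list --volume=DESTINATION_VOLUME \
#     --location=REGION --format="value(state)"
wait_for_ready() {
  while true; do
    state="$(get_replication_state)"
    echo "replication state: $state"
    [ "$state" = "READY" ] && return 0
    sleep 300   # state changes can take up to five minutes to appear
  done
}
```

Run it in a separate terminal while you execute the peering commands on the source system.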

A successful peering consists of the following steps:

  • The NetApp Volumes destination volume pings your source system using the specified peer-ip-addresses.

  • If cluster peering isn't already established, NetApp Volumes prints the cluster peering commands you must run on the source system.

  • If SVM peering isn't already established, NetApp Volumes prints the vserver peering commands you must run on the source system.

The steps that have been completed previously are skipped, and the process automatically continues with the next step.

Network connectivity check

NetApp Volumes tries to send an ICMP (ping) request to the IC-LIFs you specified in peer-ip-addresses. If the check fails, stateDetails displays Cluster peering failed, please try again, which indicates a network issue. For more information, see Network connection to Google Cloud project. You can't proceed until you establish network connectivity between the source system and NetApp Volumes. For debugging purposes, try to ping the gateway IP of the /27 CIDR that hosts the NetApp Volumes IC-LIFs.

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION \
 --format="table(hybridPeeringDetails.subnetIp)"

This prints the CIDR. Ping the first IP of that network from the source ONTAP system, using one of your source IC-LIFs.

Example:

ONTAP> ping -lif=YOUR_IC_LIF -vserver=VSERVER_HOSTING_SOURCE_VOLUME -destination=FIRST_IP_OF_SUBNET_IP
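If you'd rather compute the address to pass as -destination, the following sketch derives the first host IP from the /27 CIDR printed by the earlier list command (it assumes the usual dotted-quad layout, where a /27 block never crosses an octet boundary):

```shell
# First usable IP of a small CIDR block such as 10.0.0.32/27.
first_host_ip() {
  network="${1%/*}"        # drop the prefix length, e.g. 10.0.0.32
  last="${network##*.}"    # last octet of the network address
  head="${network%.*}"     # first three octets
  echo "$head.$((last + 1))"
}

first_host_ip "10.0.0.32/27"   # prints 10.0.0.33
```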

Cluster peering:

If the ICMP check succeeds, the process proceeds to cluster peering. If peering hasn't been established yet, the status PENDING_CLUSTER_PEERING is displayed.

Look up cluster-peering instructions:

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION \
 --format="table(hybridPeeringDetails.command,hybridPeeringDetails.passphrase)"

This outputs the cluster peer create command and the required passphrase. Copy the command to your source cluster and run it. You are prompted to enter the passphrase twice.

SVM peering:

The cluster peer create command from the previous step usually also performs the SVM peering automatically. If it doesn't, the state changes to PENDING_SVM_PEERING after a few seconds.

Verify the SVM peering:

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION

If the state is PENDING_SVM_PEERING, look up the vserver peering command and run it on the source system:

gcloud netapp volumes replications list --volume=DESTINATION_VOLUME --location=REGION \
 --format="table(hybridPeeringDetails.command)"

After a few seconds, the state changes to READY, and mirrorState changes to PREPARING, which indicates that the baseline transfer has started. After the baseline transfer finishes, mirrorState changes to MIRRORED. Incremental transfers run on the defined replication schedule and are indicated by a mirrorState of TRANSFERRING.
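When scripting around the replication output, a small helper that maps mirrorState values to short descriptions can make logs easier to read. This is a sketch; the uppercase values are API-style forms of the Preparing, Mirrored, and Transferring states described above and are assumptions:

```shell
# Map mirrorState values from the replication output to a short description.
describe_mirror_state() {
  case "$1" in
    PREPARING)    echo "baseline transfer is running" ;;
    MIRRORED)     echo "baseline complete, incremental updates follow the schedule" ;;
    TRANSFERRING) echo "incremental transfer in progress" ;;
    *)            echo "unrecognized mirrorState: $1" ;;
  esac
}

describe_mirror_state MIRRORED
```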

For more information about additional optional flags, see Google Cloud SDK documentation on external replication creation.

What's next

Manage external replications.