This page shows you how to back up and restore Filestore storage using Kubernetes volume snapshots.
Creating a Kubernetes volume snapshot is equivalent to creating a Filestore backup. For more information, see About Kubernetes volume snapshots.
Requirements
To use volume snapshots on GKE, you must meet the following requirements:
- Deploy the Filestore CSI driver. Only the following Filestore service tiers are supported:
- Basic HDD with GKE version 1.21 or later
- Basic HDD (100 GiB to 63.9 TiB) with GKE version 1.33 or later
- Basic SSD with GKE version 1.21 or later
- Zonal (1 TiB to 9.75 TiB) with GKE version 1.31 or later
- Zonal (10 TiB to 100 TiB) with GKE version 1.27 or later
- Regional with GKE version 1.33.4-gke.1172000 or later
- Enterprise with GKE version 1.25 or later
- Use control plane versions 1.17 or later. To use the Filestore CSI driver in a `VolumeSnapshot`, use the GKE version number applicable to your service tier.
- Have an existing `PersistentVolumeClaim` to use for a snapshot. The `PersistentVolume` that you use for a snapshot source must be managed by a CSI driver. You can verify that you're using a CSI driver by checking that the `PersistentVolume` spec has a `csi` section with `driver: pd.csi.storage.gke.io` or `filestore.csi.storage.gke.io`, as shown in the example after this list. If the `PersistentVolume` is dynamically provisioned by the CSI driver as described in the following sections, it's managed by the CSI driver.
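For example, you can print the driver of an existing `PersistentVolume` with a JSONPath query (a minimal sketch; `PV_NAME` is a placeholder for your volume's name):

```bash
# Print the CSI driver that manages the PersistentVolume.
# Empty output means the volume has no csi section and is not CSI-managed.
kubectl get pv PV_NAME -o jsonpath='{.spec.csi.driver}'
```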
Limitations
Snapshot volumes have the same size restrictions as regular volumes. For example, Filestore snapshots must be greater than or equal to 1 TiB in size for the basic HDD tier.
The Filestore CSI driver doesn't support dynamic provisioning or backup workflows for the Regional Filestore service tier.
You can back up only one share per instance at a time. This means you can't create a multishare backup or restore a backup to an instance with multiple shares. However, backup requests for shares on two different Filestore instances can run at the same time.
You can restore a backup of a basic instance to the source instance, to an existing instance of the same service tier, or to a new instance. If you choose a new instance, you can create either a basic HDD or a basic SSD instance, regardless of the source instance's tier.
You can't restore backups of zonal, regional, or enterprise instances to the source instance or to an existing instance, only to a new instance. The new instance's tier doesn't have to match the source instance's tier. For example, you can restore a backup of a regional instance to a zonal instance. The provisioned capacity of the new instance must be equal to or greater than the provisioned capacity of the source instance.
For a complete list of feature limitations, see Filestore backup feature limitations.
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the `gcloud components update` command. Earlier gcloud CLI versions might not support running the commands in this document.
Creating and using a volume snapshot
The examples in this document show you how to do the following tasks:
- Create a `PersistentVolumeClaim` and a `Deployment`.
- Add a file to the `PersistentVolume` that the `Deployment` uses.
- Create a `VolumeSnapshotClass` to configure the snapshot.
- Create a volume snapshot of the `PersistentVolume`.
- Delete the test file.
- Restore the `PersistentVolume` to the snapshot you created.
- Verify that the restoration worked.
To use a volume snapshot, you must complete the following steps:
- Create a `VolumeSnapshotClass` object to specify the CSI driver and deletion policy for your snapshot.
- Create a `VolumeSnapshot` object to request a snapshot of an existing `PersistentVolumeClaim`.
- Reference the `VolumeSnapshot` in a `PersistentVolumeClaim` to restore a volume to that snapshot or to create a new volume from the snapshot.
Create a PersistentVolumeClaim and a Deployment
To create the `PersistentVolumeClaim` object, save the following manifest as `my-pvc.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: enterprise-rwx
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
```

This example creates an enterprise tier Filestore PVC. To learn more, see Access Filestore instances with the Filestore CSI driver.
For `spec.storageClassName`, you can specify any storage class that uses a supported CSI driver.

Apply the manifest:
```bash
kubectl apply -f my-pvc.yaml
```
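Optionally, check that the claim was created (an extra verification step, not part of the original procedure; depending on the storage class's `volumeBindingMode`, the claim might remain `Pending` until the `Deployment` in the next step mounts it):

```bash
# Show the claim's status, bound volume, capacity, and storage class.
kubectl get pvc my-pvc
```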
To create a `Deployment`, save the following manifest as `my-deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: google/cloud-sdk:slim
        args: [ "sleep", "3600" ]
        volumeMounts:
        - name: sdk-volume
          mountPath: /usr/share/hello/
      volumes:
      - name: sdk-volume
        persistentVolumeClaim:
          claimName: my-pvc
```

Apply the manifest:
```bash
kubectl apply -f my-deployment.yaml
```

Check the status of the `Deployment`:

```bash
kubectl get deployment hello-app
```

It might take some time for the `Deployment` to become ready. You can run the preceding command until you see output similar to the following:

```
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
hello-app   1/1     1            1           2m55s
```
Add a test file to the volume
List the `Pods` in the `Deployment`:

```bash
kubectl get pods -l app=hello-app
```

The output is similar to the following:

```
NAME                         READY   STATUS    RESTARTS   AGE
hello-app-6d7b457c7d-vl4jr   1/1     Running   0          2m56s
```

Create a test file in a `Pod`:

```bash
kubectl exec POD_NAME \
    -- sh -c 'echo "Hello World!" > /usr/share/hello/hello.txt'
```

Replace `POD_NAME` with the name of the `Pod`.

Verify that the file exists:

```bash
kubectl exec POD_NAME \
    -- sh -c 'cat /usr/share/hello/hello.txt'
```

The output is similar to the following:

```
Hello World!
```
Create a VolumeSnapshotClass object
Create a `VolumeSnapshotClass` object to specify the CSI driver and `deletionPolicy` for your volume snapshot. You can reference `VolumeSnapshotClass` objects when you create `VolumeSnapshot` objects.
Save the following manifest as `volumesnapshotclass.yaml`:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapshotclass
driver: filestore.csi.storage.gke.io
parameters:
  type: backup
deletionPolicy: Delete
```

In this example:

- The `driver` field specifies the CSI driver that provisions the snapshot. In this example, `filestore.csi.storage.gke.io` selects the Filestore CSI driver.
- The `deletionPolicy` field tells GKE what to do with the `VolumeSnapshotContent` object and the underlying snapshot when the bound `VolumeSnapshot` object is deleted. Specify `Delete` to delete the `VolumeSnapshotContent` object and the underlying snapshot. Specify `Retain` if you want to keep the `VolumeSnapshotContent` object and the underlying snapshot.
Apply the manifest:

```bash
kubectl apply -f volumesnapshotclass.yaml
```
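Optionally, confirm that the class exists and shows the driver and deletion policy that you specified (an extra check, not part of the original procedure):

```bash
# List the snapshot class with its driver and deletion policy.
kubectl get volumesnapshotclass my-snapshotclass
```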
Create a VolumeSnapshot
A `VolumeSnapshot` object is a request for a snapshot of an existing `PersistentVolumeClaim` object. When you create a `VolumeSnapshot` object, GKE automatically creates and binds it with a `VolumeSnapshotContent` object, which is a resource in your cluster like a `PersistentVolume` object.
Save the following manifest as `volumesnapshot.yaml`:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: my-snapshotclass
  source:
    persistentVolumeClaimName: my-pvc
```

Apply the manifest:
```bash
kubectl apply -f volumesnapshot.yaml
```

After you create a `VolumeSnapshot`, GKE creates a corresponding `VolumeSnapshotContent` object in the cluster. This object stores the snapshot and the bindings of `VolumeSnapshot` objects. You don't interact with `VolumeSnapshotContent` objects directly.

Confirm that GKE created the `VolumeSnapshotContent` object:

```bash
kubectl get volumesnapshotcontents
```

The output is similar to the following:
```
NAME                                               AGE
snapcontent-cee5fb1f-5427-11ea-a53c-42010a1000da   55s
```
After the `VolumeSnapshotContent` object is created, the CSI driver that you specified in the `VolumeSnapshotClass` creates a snapshot on the corresponding storage system. After GKE creates the snapshot on the storage system and binds it to a `VolumeSnapshot` object on the cluster, the snapshot is ready to use. You can check the status by running the following command:
```bash
kubectl get volumesnapshot \
    -o custom-columns='NAME:.metadata.name,READY:.status.readyToUse'
```
If the snapshot is ready to use, the output is similar to the following:
```
NAME          READY
my-snapshot   true
```
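If you're scripting this workflow, you can block until the snapshot is ready instead of polling (a sketch that assumes kubectl 1.23 or later, which added JSONPath support to `kubectl wait`):

```bash
# Wait up to 10 minutes for the snapshot to become ready to use.
kubectl wait volumesnapshot/my-snapshot \
    --for=jsonpath='{.status.readyToUse}'=true \
    --timeout=10m
```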
Delete the test file
Delete the test file that you created:
```bash
kubectl exec POD_NAME \
    -- sh -c 'rm /usr/share/hello/hello.txt'
```

Verify that the file no longer exists:

```bash
kubectl exec POD_NAME \
    -- sh -c 'cat /usr/share/hello/hello.txt'
```

The output is similar to the following:

```
cat: /usr/share/hello/hello.txt: No such file or directory
```
Restore the volume snapshot
You can reference a `VolumeSnapshot` in a `PersistentVolumeClaim` to provision a new volume with data from an existing volume.

To reference a `VolumeSnapshot` in a `PersistentVolumeClaim`, add the `dataSource` field to your `PersistentVolumeClaim`.

In this example, you reference the `VolumeSnapshot` that you created in a new `PersistentVolumeClaim` and update the `Deployment` to use the new claim.
Save the following manifest as `pvc-restore.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restore
spec:
  dataSource:
    name: my-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: enterprise-rwx
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
```

Apply the manifest:
```bash
kubectl apply -f pvc-restore.yaml
```
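Optionally, confirm that the new claim references the snapshot as its data source (an extra check, not part of the original procedure):

```bash
# Print the snapshot that the claim's dataSource field references.
kubectl get pvc pvc-restore -o jsonpath='{.spec.dataSource.name}'
```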
Update the `my-deployment.yaml` file to use the new `PersistentVolumeClaim`:

```yaml
...
      volumes:
      - name: sdk-volume
        persistentVolumeClaim:
          claimName: pvc-restore
```

Apply the updated manifest:
```bash
kubectl apply -f my-deployment.yaml
```
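Because the claim name changed, GKE replaces the `Deployment`'s `Pods`. Optionally, wait for the rollout to finish before you check the restored data:

```bash
# Block until the updated Deployment's Pods are ready.
kubectl rollout status deployment/hello-app
```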
Check that the snapshot restored successfully
Get the name of the new `Pod` that GKE creates for the updated `Deployment`:

```bash
kubectl get pods -l app=hello-app
```
Verify that the test file exists:
```bash
kubectl exec NEW_POD_NAME \
    -- sh -c 'cat /usr/share/hello/hello.txt'
```
Replace `NEW_POD_NAME` with the name of the new `Pod` that GKE created.
The output is similar to the following:
```
Hello World!
```
Import a pre-existing snapshot
You can use an existing volume snapshot created outside the current cluster to manually provision the `VolumeSnapshotContent` object. For example, you can populate a volume in GKE with a snapshot of another Google Cloud resource created in a different cluster.
Locate the name of your snapshot.
Run the following command:
```bash
gcloud compute snapshots list
```

The output is similar to the following:
```
NAME                                           DISK_SIZE_GB  SRC_DISK                                                      STATUS
snapshot-5e6af474-cbcc-49ed-b53f-32262959a0a0  1             us-central1-b/disks/pvc-69f80fca-bb06-4519-9e7d-b26f45c1f4aa  READY
```
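The preceding command lists Compute Engine disk snapshots. If the snapshot that you want to import is a Filestore backup instead, you can list backups with the Filestore command (shown as an alternative; the rest of this example uses the snapshot name from the preceding output):

```bash
# List Filestore backups in the current project.
gcloud filestore backups list
```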
Save the following `VolumeSnapshot` manifest as `restored-snapshot.yaml`:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: restored-snapshot
spec:
  volumeSnapshotClassName: my-snapshotclass
  source:
    volumeSnapshotContentName: restored-snapshot-content
```

Apply the manifest:
```bash
kubectl apply -f restored-snapshot.yaml
```
Save the following `VolumeSnapshotContent` manifest as `restored-snapshot-content.yaml`. In the `snapshotHandle` field, replace `PROJECT_ID` and `SNAPSHOT_NAME` with your project ID and snapshot name. Both `volumeSnapshotRef.name` and `volumeSnapshotRef.namespace` must point to the previously created `VolumeSnapshot` for the bidirectional binding to be valid.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: restored-snapshot-content
spec:
  deletionPolicy: Retain
  driver: filestore.csi.storage.gke.io
  source:
    snapshotHandle: projects/PROJECT_ID/global/snapshots/SNAPSHOT_NAME
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: restored-snapshot
    namespace: default
```

Apply the manifest:
```bash
kubectl apply -f restored-snapshot-content.yaml
```
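Optionally, confirm that the `VolumeSnapshot` and `VolumeSnapshotContent` objects are bound to each other (an extra check, not part of the original procedure):

```bash
# Show the snapshot's readiness and its bound content object.
kubectl get volumesnapshot restored-snapshot \
    -o custom-columns='NAME:.metadata.name,READY:.status.readyToUse,CONTENT:.status.boundVolumeSnapshotContentName'
```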
Save the following `PersistentVolumeClaim` manifest as `restored-pvc.yaml`. The Kubernetes storage controller finds a `VolumeSnapshot` named `restored-snapshot` and then tries to find, or dynamically create, a `PersistentVolume` as the data source. You can then use this PVC in a `Pod` to access the restored data.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  dataSource:
    name: restored-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: enterprise-rwx
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply the manifest:
```bash
kubectl apply -f restored-pvc.yaml
```
Save the following `Pod` manifest as `restored-pod.yaml`, which refers to the `PersistentVolumeClaim`. The CSI driver provisions a `PersistentVolume` and populates it from the snapshot.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restored-pod
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "3600"
    volumeMounts:
    - name: source-data
      mountPath: /demo/data
  volumes:
  - name: source-data
    persistentVolumeClaim:
      claimName: restored-pvc
      readOnly: false
```

Apply the manifest:
```bash
kubectl apply -f restored-pod.yaml
```

Verify that the file has been restored:
```bash
kubectl exec restored-pod -- sh -c 'cat /demo/data/hello.txt'
```
Clean up
To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.
Delete the `VolumeSnapshot`:

```bash
kubectl delete volumesnapshot my-snapshot
```

Delete the `VolumeSnapshotClass`:

```bash
kubectl delete volumesnapshotclass my-snapshotclass
```

Delete the `Deployment`:

```bash
kubectl delete deployments hello-app
```

Delete the `PersistentVolumeClaim` objects:

```bash
kubectl delete pvc my-pvc pvc-restore
```
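If you also followed the import example, delete the resources it created (a suggested addition to the original steps; because the content object uses `deletionPolicy: Retain`, the underlying snapshot itself is kept):

```bash
# Delete the resources created in the import example.
kubectl delete pod restored-pod
kubectl delete pvc restored-pvc
kubectl delete volumesnapshot restored-snapshot
kubectl delete volumesnapshotcontent restored-snapshot-content
```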
What's next
- Read the Kubernetes Volume Snapshot documentation.
- Learn about volume expansion.
- Learn how to manually install a CSI driver.
- Learn about Filestore as a file storage option for GKE.