This page shows you how to back up and restore Persistent Disk storage using volume snapshots.
For an introduction, see About Kubernetes volume snapshots.
Requirements
To use volume snapshots on GKE, you must meet the following requirements:
- Use a CSI driver that supports snapshots. The in-tree Persistent Disk driver does not support snapshots. To create and manage snapshots, you must use the same CSI driver that manages the underlying PersistentVolumeClaim (PVC). For Persistent Disk (PD) volume snapshots, use the Compute Engine Persistent Disk CSI driver. The Compute Engine Persistent Disk CSI driver is installed by default on new Linux clusters running GKE version 1.18.10-gke.2100 or later, or version 1.19.3-gke.2100 or later. You can also enable the Compute Engine Persistent Disk CSI driver on an existing cluster. For a list of all CSI drivers that support snapshots, see the Other features column in Drivers in the Kubernetes documentation.
- Use a control plane version of 1.17 or later. To use the Compute Engine Persistent Disk CSI driver in a VolumeSnapshot, use GKE version 1.17.6-gke.4 or later.
- Have an existing PersistentVolumeClaim to use for a snapshot. The PersistentVolume you use as a snapshot source must be managed by a CSI driver. You can verify that you're using a CSI driver by checking that the PersistentVolume spec has a csi section with driver: pd.csi.storage.gke.io or filestore.csi.storage.gke.io, as shown in the example after this list. If the PersistentVolume is dynamically provisioned by the CSI driver as described in the following sections, it's managed by the CSI driver.
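For example, to check which driver manages an existing PersistentVolume, you can print the driver from its spec (PV_NAME is a placeholder for your PersistentVolume name):

kubectl get pv PV_NAME -o jsonpath='{.spec.csi.driver}'

If the command prints pd.csi.storage.gke.io or filestore.csi.storage.gke.io, the volume is managed by a CSI driver; empty output means the PersistentVolume has no csi section.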
Limitations
All restrictions for creating a disk snapshot on Compute Engine also apply to GKE.
Best practices
Be sure to follow the best practices for Compute Engine disk snapshots when using Persistent Disk volume snapshots on GKE.
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document.
Creating and using a volume snapshot
The examples in this document show you how to do the following tasks:
- Create a PersistentVolumeClaim and a Deployment.
- Add a file to the PersistentVolume that the Deployment uses.
- Create a VolumeSnapshotClass to configure the snapshot.
- Create a volume snapshot of the PersistentVolume.
- Delete the test file.
- Restore the PersistentVolume to the snapshot you created.
- Verify that the restoration worked.
To use a volume snapshot, you must complete the following steps:
- Create a VolumeSnapshotClass object to specify the CSI driver and deletion policy for your snapshot.
- Create a VolumeSnapshot object to request a snapshot of an existing PersistentVolumeClaim.
- Reference the VolumeSnapshot in a PersistentVolumeClaim to restore a volume to that snapshot or create a new volume using the snapshot.
Create a PersistentVolumeClaim and a Deployment
To create the PersistentVolumeClaim object, save the following manifest as my-pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: standard-rwo
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

This example uses the standard-rwo storage class, which is installed by default with the Compute Engine Persistent Disk CSI driver. To learn more, see Using the Compute Engine Persistent Disk CSI driver. For spec.storageClassName, you can specify any storage class that uses a supported CSI driver.

Apply the manifest:

kubectl apply -f my-pvc.yaml

To create a Deployment, save the following manifest as my-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: google/cloud-sdk:slim
        args: [ "sleep", "3600" ]
        volumeMounts:
        - name: sdk-volume
          mountPath: /usr/share/hello/
      volumes:
      - name: sdk-volume
        persistentVolumeClaim:
          claimName: my-pvc

Apply the manifest:

kubectl apply -f my-deployment.yaml

Check the status of the Deployment:

kubectl get deployment hello-app

It might take some time for the Deployment to become ready. You can run the preceding command until you see output similar to the following:

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
hello-app   1/1     1            1           2m55s
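If you prefer not to re-run the command manually, you can also wait for the rollout to complete. This standard kubectl command is an optional alternative, not part of the original steps:

kubectl rollout status deployment hello-app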
Add a test file to the volume
List the Pods in the Deployment:

kubectl get pods -l app=hello-app

The output is similar to the following:

NAME                         READY   STATUS    RESTARTS   AGE
hello-app-6d7b457c7d-vl4jr   1/1     Running   0          2m56s

Create a test file in a Pod:

kubectl exec POD_NAME \
-- sh -c 'echo "Hello World!" > /usr/share/hello/hello.txt'

Replace POD_NAME with the name of the Pod.

Verify that the file exists:

kubectl exec POD_NAME \
-- sh -c 'cat /usr/share/hello/hello.txt'

The output is similar to the following:
Hello World!
Create a VolumeSnapshotClass object
Create a VolumeSnapshotClass object to specify the CSI driver and
deletionPolicy for your volume snapshot. You can reference
VolumeSnapshotClass objects when you create VolumeSnapshot objects.
Save the following manifest as volumesnapshotclass.yaml. Use the v1 API version for clusters running version 1.21 or later.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapshotclass
driver: pd.csi.storage.gke.io
deletionPolicy: Delete

In this example:

- The driver field specifies the CSI driver that provisions the snapshot. In this example, pd.csi.storage.gke.io specifies the Compute Engine Persistent Disk CSI driver.
- The deletionPolicy field tells GKE what to do with the VolumeSnapshotContent object and the underlying snapshot when the bound VolumeSnapshot object is deleted. Specify Delete to delete the VolumeSnapshotContent object and the underlying snapshot. Specify Retain if you want to keep the VolumeSnapshotContent and the underlying snapshot.

To use a custom storage location, add a storage-locations parameter to the snapshot class. To use this parameter, your clusters must use version 1.21 or later.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapshotclass
parameters:
  storage-locations: us-east1
driver: pd.csi.storage.gke.io
deletionPolicy: Delete

To create a disk image, add the following to the parameters field:

parameters:
  snapshot-type: images
  image-family: IMAGE_FAMILY

Replace IMAGE_FAMILY with the name of your preferred image family, such as preloaded-data.
Apply the manifest:
kubectl apply -f volumesnapshotclass.yaml
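To confirm that the class exists in the cluster, you can list it. This is an optional check, not part of the original steps:

kubectl get volumesnapshotclass my-snapshotclass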
Create a VolumeSnapshot
A VolumeSnapshot object is a request for a snapshot of an existing
PersistentVolumeClaim object. When you create a VolumeSnapshot object,
GKE automatically creates and binds it with a
VolumeSnapshotContent object, which is a resource in your cluster like a
PersistentVolume object.
Save the following manifest as volumesnapshot.yaml:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: my-snapshotclass
  source:
    persistentVolumeClaimName: my-pvc

Apply the manifest:

kubectl apply -f volumesnapshot.yaml

After you create a VolumeSnapshot, GKE creates a corresponding VolumeSnapshotContent object in the cluster. This object stores the snapshot and its binding to the VolumeSnapshot object. You don't interact with VolumeSnapshotContent objects directly.

Confirm that GKE created the VolumeSnapshotContent object:

kubectl get volumesnapshotcontents

The output is similar to the following:

NAME                                               AGE
snapcontent-cee5fb1f-5427-11ea-a53c-42010a1000da   55s
After the VolumeSnapshotContent object is created, the CSI driver you specified in
the VolumeSnapshotClass creates a snapshot on the corresponding storage
system. After GKE creates a snapshot on the storage system and
binds it to a VolumeSnapshot object on the cluster, the snapshot is ready to
use. You can check the status by running the following command:
kubectl get volumesnapshot \
-o custom-columns='NAME:.metadata.name,READY:.status.readyToUse'
If the snapshot is ready to use, the output is similar to the following:
NAME READY
my-snapshot true
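Optionally, to confirm that a corresponding Compute Engine snapshot exists in your project, you can list snapshots with the gcloud CLI. This is the same command used later in this guide; the snapshot name is generated by the CSI driver:

gcloud compute snapshots list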
Delete the test file
Delete the test file that you created:
kubectl exec POD_NAME \
-- sh -c 'rm /usr/share/hello/hello.txt'

Verify that the file no longer exists:

kubectl exec POD_NAME \
-- sh -c 'cat /usr/share/hello/hello.txt'

The output is similar to the following:
cat: /usr/share/hello/hello.txt: No such file or directory
Restore the volume snapshot
You can reference a VolumeSnapshot in a PersistentVolumeClaim to provision
a new volume with data from an existing volume or restore a volume to a
state that you captured in the snapshot.
To reference a VolumeSnapshot in a PersistentVolumeClaim, add the
dataSource field to your PersistentVolumeClaim. The same process applies
whether the VolumeSnapshotContent refers to a disk image or a disk snapshot.
In this example, you reference the VolumeSnapshot that you created in a new
PersistentVolumeClaim and update the Deployment to use the new claim.
Determine whether you're using a disk snapshot or an image snapshot; the two types differ as follows:
- Disk snapshots: Take snapshots frequently and restore infrequently.
- Image snapshots: Take snapshots infrequently and restore frequently. Image snapshots may also be slower to create than disk snapshots.
For details, see Snapshot frequency limits. Knowing your snapshot type helps if you need to troubleshoot any issues.
Inspect the VolumeSnapshot:

kubectl describe volumesnapshot SNAPSHOT_NAME

The volumeSnapshotClassName field specifies the snapshot class. Inspect that class:

kubectl describe volumesnapshotclass SNAPSHOT_CLASS_NAME

The snapshot-type parameter specifies either snapshots or images. If it is not set, the default is snapshots.

If there is no snapshot class (for instance, if the snapshot was statically created), inspect the VolumeSnapshotContent object instead:

kubectl describe volumesnapshotcontents SNAPSHOT_CONTENTS_NAME

The format of the snapshot handle in the output tells you the type of snapshot, as follows:

- projects/PROJECT_NAME/global/snapshots/SNAPSHOT_NAME: disk snapshot
- projects/PROJECT_NAME/global/images/IMAGE_NAME: image snapshot
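If you only need the handle, you can print it directly with a jsonpath query. This assumes the VolumeSnapshotContent object already has status.snapshotHandle populated:

kubectl get volumesnapshotcontents SNAPSHOT_CONTENTS_NAME \
-o jsonpath='{.status.snapshotHandle}'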
Save the following manifest as pvc-restore.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restore
spec:
  dataSource:
    name: my-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: standard-rwo
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply the manifest:

kubectl apply -f pvc-restore.yaml

Update the my-deployment.yaml file so that its volume uses the new PersistentVolumeClaim:

...
      volumes:
      - name: sdk-volume
        persistentVolumeClaim:
          claimName: pvc-restore

Apply the updated manifest:
kubectl apply -f my-deployment.yaml
Check that the snapshot restored successfully
Get the name of the new Pod that GKE creates for the updated Deployment:

kubectl get pods -l app=hello-app
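If you prefer, you can capture the new Pod name in a shell variable instead of copying it manually. This assumes the old Pod has already terminated, so only the new Pod matches the label:

NEW_POD_NAME=$(kubectl get pods -l app=hello-app -o jsonpath='{.items[0].metadata.name}')

You can then use $NEW_POD_NAME in place of NEW_POD_NAME in the following command.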
Verify that the test file exists:
kubectl exec NEW_POD_NAME \
-- sh -c 'cat /usr/share/hello/hello.txt'
Replace NEW_POD_NAME with the name of the new Pod
that GKE created.
The output is similar to the following:
Hello World!
Import a pre-existing snapshot
You can use an existing volume snapshot created outside the current cluster
to manually provision the VolumeSnapshotContents object. For example, you can
populate a volume in GKE with a snapshot of another
Google Cloud resource created in a different cluster.
Locate the name of your snapshot.
Run the following command:
gcloud compute snapshots list

The output is similar to the following:

NAME                                           DISK_SIZE_GB  SRC_DISK                                                      STATUS
snapshot-5e6af474-cbcc-49ed-b53f-32262959a0a0  1             us-central1-b/disks/pvc-69f80fca-bb06-4519-9e7d-b26f45c1f4aa  READY

Save the following VolumeSnapshot manifest as restored-snapshot.yaml:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: restored-snapshot
spec:
  volumeSnapshotClassName: my-snapshotclass
  source:
    volumeSnapshotContentName: restored-snapshot-content

Apply the manifest:

kubectl apply -f restored-snapshot.yaml

Save the following VolumeSnapshotContent manifest as restored-snapshot-content.yaml. Replace the snapshotHandle field with your project ID and snapshot name. Both volumeSnapshotRef.name and volumeSnapshotRef.namespace must point to the previously created VolumeSnapshot for the bi-directional binding to be valid.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: restored-snapshot-content
spec:
  deletionPolicy: Retain
  driver: pd.csi.storage.gke.io
  source:
    snapshotHandle: projects/PROJECT_ID/global/snapshots/SNAPSHOT_NAME
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: restored-snapshot
    namespace: default

Apply the manifest:

kubectl apply -f restored-snapshot-content.yaml

Save the following PersistentVolumeClaim manifest as restored-pvc.yaml. The Kubernetes storage controller finds a VolumeSnapshot named restored-snapshot and then tries to find, or dynamically create, a PersistentVolume as the data source. You can then use this PVC in a Pod to access the restored data.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  dataSource:
    name: restored-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: standard-rwo
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply the manifest:

kubectl apply -f restored-pvc.yaml

Save the following Pod manifest as restored-pod.yaml, which refers to the PersistentVolumeClaim. The CSI driver provisions a PersistentVolume and populates it from the snapshot.

apiVersion: v1
kind: Pod
metadata:
  name: restored-pod
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "3600"
    volumeMounts:
    - name: source-data
      mountPath: /demo/data
  volumes:
  - name: source-data
    persistentVolumeClaim:
      claimName: restored-pvc
      readOnly: false

Apply the manifest:

kubectl apply -f restored-pod.yaml

Verify that the file has been restored:
kubectl exec restored-pod -- sh -c 'cat /demo/data/hello.txt'
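You can also confirm that the imported snapshot is bound and ready to use, using the same readiness check as earlier in this guide:

kubectl get volumesnapshot restored-snapshot \
-o custom-columns='NAME:.metadata.name,READY:.status.readyToUse'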
Clean up
To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.
Delete the VolumeSnapshot:

kubectl delete volumesnapshot my-snapshot

Delete the VolumeSnapshotClass:

kubectl delete volumesnapshotclass my-snapshotclass

Delete the Deployment:

kubectl delete deployments hello-app

Delete the PersistentVolumeClaim objects:

kubectl delete pvc my-pvc pvc-restore
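If you also followed the import example, you can delete those resources as well. The names below are the ones used in the manifests in this guide:

kubectl delete pod restored-pod
kubectl delete pvc restored-pvc
kubectl delete volumesnapshot restored-snapshot
kubectl delete volumesnapshotcontent restored-snapshot-content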
What's next
- Read the Kubernetes Volume Snapshot documentation.
- Learn about volume expansion.
- Learn how to manually install a CSI driver.
- Learn about block storage (Persistent Disk) for GKE.