This document describes how to configure the BackupRepository required by Google Distributed Cloud (GDC) air-gapped Database Service (DBS). This repository, which must be named `dbs-backup-repository`, is a Kubernetes custom resource that points the GDC backup service at an S3-compatible object storage bucket for storing database backups.
Proper setup is crucial for enabling backup and restore functionalities for DBS instances such as PostgreSQL, Oracle, and AlloyDB Omni.
Before you begin
Ensure that you have the following prerequisites:
- Project: A project to host the bucket, typically named `database-backups`, with only service account access. The examples in this document use a project named `backups`.
- Access: Sufficient permissions to interact with the management API server. To create the backup repository, the user must have the following roles:
  - At the organization level:
    - Bucket Admin (`bucket-admin`)
    - Project Creator (`project-creator`)
    - Organization Backup Admin (`organization-backup-admin`)
  - Within the target project:
    - Project IAM Admin (`project-iam-admin`)
    - Project Bucket Object Viewer (`project-bucket-object-viewer`)
    - Project Bucket Object Admin (`project-bucket-object-admin`)
    - Project Bucket Admin (`project-bucket-admin`)
    - Namespace Admin (`namespace-admin`)
    - Backup Creator (`backup-creator`)
- Tools:
  - The GDC console.
  - The kubectl CLI, configured to access the management API server.
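Before continuing, you can optionally confirm your access against the management API server with `kubectl auth can-i`. This is a minimal sketch; the plural resource names (`buckets`, `projectserviceaccounts`, `backuprepositories`) are assumptions inferred from the resource kinds used later in this document, so adjust them if your installation names them differently:

```sh
# Self-check: ask the API server whether the current user may create the
# resources used in this guide. The resource plurals are assumed from the
# manifest kinds below; adjust if your installation differs.
kubectl --kubeconfig MANAGEMENT_API_SERVER auth can-i create buckets.object.gdc.goog -n backups
kubectl --kubeconfig MANAGEMENT_API_SERVER auth can-i create projectserviceaccounts.resourcemanager.gdc.goog -n backups
kubectl --kubeconfig MANAGEMENT_API_SERVER auth can-i create backuprepositories.backup.gdc.goog
```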
Create an object storage bucket
All of the following steps are performed against the management API server. Define and create a Bucket resource. The rest of this document assumes the recommended bucket name `dbs-backups`, located in the `backups` project namespace. Ensure that the bucket does not have a retention policy.
Console
- Sign in to the GDC console for the organization.
- Ensure that you are in the `backups` project.
- Navigate to Object Storage > Buckets.
- Click Create Bucket.
- Set the bucket name to `dbs-backups`.
- Set the description to `Bucket for DBS backups`.
- Configure the storage class as required, for example `Standard`.
- For Security, make sure that you do not set a retention policy; a retention policy causes databases and their backups to be retained incorrectly.
- Click Create.
API
- Apply the following manifest to the management API server, replacing `MANAGEMENT_API_SERVER` with the kubeconfig path for the management API server:

```sh
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: object.gdc.goog/v1
kind: Bucket
metadata:
  name: dbs-backups
  namespace: backups
spec:
  description: "Bucket for DBS backups"
  storageClass: "Standard"
EOF
```
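After applying the manifest, you can confirm that the bucket was provisioned. A quick check using the same status fields that a later section of this document reads (`fullyQualifiedName`, `endpoint`, `region`); empty output typically means provisioning has not finished yet:

```sh
# Print the bucket's fully qualified name, endpoint, and region once provisioned.
kubectl --kubeconfig MANAGEMENT_API_SERVER get bucket dbs-backups -n backups \
  -o jsonpath='{.status.fullyQualifiedName}{"\n"}{.status.endpoint}{"\n"}{.status.region}{"\n"}'
```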
Create service account and set permissions
Create a ProjectServiceAccount and grant it permissions to access the bucket.
Console
- In the `backups` project, navigate to Identity & Access > Service Accounts.
- Click Create Service Account and name it `dbs-backup-sa`.
- Grant permissions:
  - Go to Object Storage > Buckets > dbs-backups > Permissions.
  - Click Add principal.
  - Select Service Account: `dbs-backup-sa`.
  - Select Role: a role granting read and write object access, such as Storage Object Admin.
  - Click Add.
API
- Apply these manifests to the management API server:
```sh
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: resourcemanager.gdc.goog/v1
kind: ProjectServiceAccount
metadata:
  name: dbs-backup-sa
  namespace: backups
spec: {}
EOF
```

```sh
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dbs-backups-readwrite-role
  namespace: backups
rules:
- apiGroups: ["object.gdc.goog"]
  resources: ["bucket"]
  resourceNames: ["dbs-backups"]
  verbs: ["read-object", "write-object"]
EOF
```

```sh
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dbs-backups-readwrite-rolebinding
  namespace: backups
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dbs-backups-readwrite-role
subjects:
- kind: ServiceAccount
  name: dbs-backup-sa
  namespace: backups
EOF
```
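To confirm that the grant is in place, you can list the Role and RoleBinding you just created; the wide RoleBinding output includes the bound service account:

```sh
# Confirm the Role and RoleBinding exist and that the binding targets dbs-backup-sa.
kubectl --kubeconfig MANAGEMENT_API_SERVER get role dbs-backups-readwrite-role -n backups
kubectl --kubeconfig MANAGEMENT_API_SERVER get rolebinding dbs-backups-readwrite-rolebinding -n backups -o wide
```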
Identify the service account's credential secret and bucket details
When you grant bucket access to `dbs-backup-sa`, GDC automatically creates a secret containing the S3 access credentials in the same namespace, `backups`. You need to find the name of this secret.
Console
- Find the secret name:
  - In the `backups` project, navigate to Kubernetes Engine > Configuration > Secrets.
  - Look for a secret whose name starts with `object-storage-key-std-sa-` and check the annotations to confirm that `object.gdc.goog/subject` is `dbs-backup-sa`.
  - Note this secret name.
- Find the bucket details:
  - Navigate to the Object Storage > Buckets > dbs-backups details page.
  - Find and note the `ENDPOINT`, `REGION`, and `FULL_BUCKET_NAME`.
API
- Set environment variables:

```sh
export SA_NAMESPACE="backups"
export SA_NAME="dbs-backup-sa"
export KUBECONFIG=MANAGEMENT_API_SERVER
```

- Find the secret name:

```sh
export BUCKET_CRED_SECRET_NAME=$(kubectl --kubeconfig=${KUBECONFIG} get secret \
  -n "${SA_NAMESPACE}" -l object.gdc.goog/subject-type=ServiceAccount -o json | \
  jq -r --arg SA_NAME "${SA_NAME}" \
  '.items[] | select(.metadata.annotations["object.gdc.goog/subject"] == $SA_NAME and (.metadata.name | startswith("object-storage-key-std-sa-"))) | .metadata.name')
echo "Bucket Credential Secret Name: ${BUCKET_CRED_SECRET_NAME}"
```

This command filters the secrets in the `backups` namespace to find the one annotated for `dbs-backup-sa` that matches the standard naming convention.

- Get bucket endpoint and region details:

```sh
export BUCKET_NAME=dbs-backups
export FULL_BUCKET_NAME=$(kubectl --kubeconfig=${KUBECONFIG} get bucket -n ${SA_NAMESPACE} ${BUCKET_NAME} -o jsonpath='{.status.fullyQualifiedName}')
export ENDPOINT=$(kubectl --kubeconfig=${KUBECONFIG} get bucket -n ${SA_NAMESPACE} ${BUCKET_NAME} -o jsonpath='{.status.endpoint}')
export REGION=$(kubectl --kubeconfig=${KUBECONFIG} get bucket -n ${SA_NAMESPACE} ${BUCKET_NAME} -o jsonpath='{.status.region}')
echo "FULL_BUCKET_NAME: ${FULL_BUCKET_NAME}"
echo "ENDPOINT: ${ENDPOINT}"
echo "REGION: ${REGION}"
```
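Optionally, verify that the discovered secret exists and see which credential fields it holds. The exact key names inside the secret are environment-specific, so this sketch only lists them without printing any values:

```sh
# List the data keys of the credential secret (no secret values are printed).
kubectl --kubeconfig=${KUBECONFIG} get secret "${BUCKET_CRED_SECRET_NAME}" \
  -n "${SA_NAMESPACE}" -o json | jq -r '.data | keys[]'
```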
Create the BackupRepository
Create the BackupRepository resource, referencing the secret identified in the previous section, Identify the service account's credential secret and bucket details. This step must be completed using the kubectl CLI (API).
Create a manifest, such as `backup-repo.yaml`, substituting the values found in the previous section, and apply it:

```sh
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: backup.gdc.goog/v1
kind: BackupRepository
metadata:
  name: dbs-backup-repository # This specific name is required for DBS
spec:
  secretReference:
    namespace: "backups" # Namespace of the service account and the auto-generated secret
    name: BUCKET_CRED_SECRET_NAME
  endpoint: ENDPOINT
  type: "S3"
  s3Options:
    bucket: FULL_BUCKET_NAME
    region: REGION
    forcePathStyle: true
  importPolicy: "ReadWrite"
  force: true
EOF
```

Replace the following:

- `BUCKET_CRED_SECRET_NAME`: the secret name.
- `ENDPOINT`: the endpoint of the bucket.
- `FULL_BUCKET_NAME`: the fully qualified bucket name.
- `REGION`: the region of the bucket.
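If you exported the environment variables in the API steps of the previous section, you can let the shell substitute the values instead of editing placeholders by hand. A convenience sketch of the same manifest; the unquoted EOF delimiter makes the shell expand the `${...}` references before kubectl reads the YAML:

```sh
# Same manifest as above, with values expanded from the exported variables.
kubectl --kubeconfig=${KUBECONFIG} apply -f - <<EOF
apiVersion: backup.gdc.goog/v1
kind: BackupRepository
metadata:
  name: dbs-backup-repository # This specific name is required for DBS
spec:
  secretReference:
    namespace: "${SA_NAMESPACE}"
    name: "${BUCKET_CRED_SECRET_NAME}"
  endpoint: "${ENDPOINT}"
  type: "S3"
  s3Options:
    bucket: "${FULL_BUCKET_NAME}"
    region: "${REGION}"
    forcePathStyle: true
  importPolicy: "ReadWrite"
  force: true
EOF
```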
Verify the backup repository
Check the repository status to ensure it was set up correctly.
Print the status of your backup repository:

```sh
kubectl --kubeconfig MANAGEMENT_API_SERVER get backuprepository dbs-backup-repository -o json | jq .status
```

A plain `kubectl get backuprepository dbs-backup-repository` shows a summary that includes the error column:

```
NAME                    TYPE   POLICY      ERROR
dbs-backup-repository   S3     ReadWrite   NoError
```

Verify that the detailed status looks similar to the following. A `NoError` message is the signal that the repository was set up as expected:

```json
{
  "conditions": [
    {
      "lastTransitionTime": "2025-11-13T00:36:09Z",
      "message": "Backup Repository reconciled successfully",
      "reason": "Ready",
      "status": "True",
      "type": "Ready"
    }
  ],
  "initialImportDone": true,
  "reconciliationError": "NoError",
  "sentinelEtag": "9b82fbb7-6ea2-444d-8878-ab91397ae961"
}
```
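If you automate this setup, you can block until the repository reports ready rather than inspecting the status by hand; a minimal sketch using the Ready condition shown above:

```sh
# Wait up to five minutes for the BackupRepository to become Ready.
kubectl --kubeconfig MANAGEMENT_API_SERVER wait backuprepository/dbs-backup-repository \
  --for=condition=Ready --timeout=300s
```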