Create a backup repository for Database Service

This document describes the steps to configure the required BackupRepository for Google Distributed Cloud (GDC) air-gapped Database Service (DBS). This repository, which must be named dbs-backup-repository, is a Kubernetes custom resource that points the GDC backup service at an S3-compatible object storage bucket where database backups are stored.

Proper setup is crucial for enabling backup and restore functionality for DBS instances of database engines such as PostgreSQL, Oracle, and AlloyDB Omni.

Before you begin

Before starting, ensure you have the following prerequisites:

  • Project: A project to host the bucket, with access restricted to service accounts. This document assumes a project named backups.
  • Access: Sufficient permissions to interact with the management API server. A quick way to check your access appears after this list.
    • The user must have the following Organization level roles to create the backup repository:
      • Bucket Admin (bucket-admin)
      • Project Creator (project-creator)
      • Organization Backup Admin (organization-backup-admin)
    • Within the target project:
      • Project IAM Admin (project-iam-admin)
      • Project Bucket Object Viewer (project-bucket-object-viewer)
      • Project Bucket Object Admin (project-bucket-object-admin)
      • Project Bucket Admin (project-bucket-admin)
      • Namespace Admin (namespace-admin)
      • Backup Creator (backup-creator)
  • Tools: The kubectl CLI, configured with the kubeconfig file of the management API server (referenced as MANAGEMENT_API_SERVER throughout this document), and jq for parsing command output.
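
If you want to confirm your access before you begin, kubectl auth can-i offers a quick check. This is a minimal sketch: the resource and API group names below mirror the manifests used later in this guide and are assumptions, not an authoritative permission list.

    # Can you create Bucket resources in the target project namespace?
    kubectl --kubeconfig MANAGEMENT_API_SERVER auth can-i create buckets.object.gdc.goog -n backups

    # Can you create BackupRepository resources?
    kubectl --kubeconfig MANAGEMENT_API_SERVER auth can-i create backuprepositories.backup.gdc.goog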

Create an object storage bucket

All subsequent steps are performed against the management API server. Define and create a Bucket resource in the backups project namespace. This document assumes the recommended bucket name dbs-backups throughout. Ensure the bucket does not have a retention policy.

Console

  1. Sign in to the GDC console for the organization.
  2. Ensure you are in the backups project.
  3. Navigate to Object Storage > Buckets.
  4. Click Create Bucket.
  5. Set the bucket name as dbs-backups.
  6. Set the description as Bucket for DBS backups.
  7. Configure the storage class as required. For example, Standard.
  8. For Security, do not set a retention policy. A retention policy prevents the backup service from deleting expired backups, causing databases and their backups to be retained errantly.
  9. Click Create.

API

  • Apply the following manifest to the management API server:
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: object.gdc.goog/v1
kind: Bucket
metadata:
  name: dbs-backups
  namespace: backups
spec:
  description: "Bucket for DBS backups"
  storageClass: "Standard"
EOF
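
The Bucket resource reconciles asynchronously; its status fields, such as the endpoint, are populated once provisioning completes. As a quick sanity check before moving on:

    kubectl --kubeconfig MANAGEMENT_API_SERVER get bucket dbs-backups -n backups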

Create service account and set permissions

Create a ProjectServiceAccount and grant it permissions to access the bucket.

Console

  1. In the backups project, navigate to Identity & Access > Service Accounts.
  2. Click Create Service Account, name it dbs-backup-sa.
  3. Grant permissions:
    1. Go to Object Storage > Buckets > dbs-backups > Permissions.
    2. Click Add principal.
    3. Select Service Account: dbs-backup-sa.
    4. Select Role: A role granting read and write object access, such as Storage Object Admin.
    5. Click Add.

API

  • Apply these manifests to the management API server:
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: resourcemanager.gdc.goog/v1
kind: ProjectServiceAccount
metadata:
  name: dbs-backup-sa
  namespace: backups
spec: {}
EOF
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dbs-backups-readwrite-role
  namespace: backups
rules:
- apiGroups: ["object.gdc.goog"]
  resources: ["buckets"]
  resourceNames: ["dbs-backups"]
  verbs: ["read-object", "write-object"]
EOF
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dbs-backups-readwrite-rolebinding
  namespace: backups
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dbs-backups-readwrite-role
subjects:
- kind: ServiceAccount
  name: dbs-backup-sa
  namespace: backups
EOF
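
To confirm all three resources exist, you can list them by name. A simple sanity check using the names defined in the manifests above:

    kubectl --kubeconfig MANAGEMENT_API_SERVER get projectserviceaccount dbs-backup-sa -n backups
    kubectl --kubeconfig MANAGEMENT_API_SERVER get role dbs-backups-readwrite-role -n backups
    kubectl --kubeconfig MANAGEMENT_API_SERVER get rolebinding dbs-backups-readwrite-rolebinding -n backups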

Identify the service account's credential secret and bucket details

Upon granting bucket access to the dbs-backup-sa, GDC automatically creates a secret in the same namespace, backups, containing the S3 access credentials. You need to find this secret's name.

Console

  1. Find the secret name:
    1. Navigate to Kubernetes Engine > Configuration > Secrets in the backups project.
    2. Look for a secret whose name starts with object-storage-key-std-sa- and check its annotations to confirm that object.gdc.goog/subject is dbs-backup-sa.
    3. Note this secret name.
  2. Find the bucket details:
    1. Navigate to Object Storage > Buckets > dbs-backups details page.
    2. Find and note the ENDPOINT, REGION, and FULL_BUCKET_NAME.

API

  1. Set environment variables, replacing MANAGEMENT_API_SERVER with the path to the management API server kubeconfig file:

    export SA_NAMESPACE="backups"
    export SA_NAME="dbs-backup-sa"
    export KUBECONFIG=MANAGEMENT_API_SERVER
    
  2. Find the secret name:

    export BUCKET_CRED_SECRET_NAME=$(kubectl --kubeconfig=${KUBECONFIG} get secret \
        -n "${SA_NAMESPACE}" -l object.gdc.goog/subject-type=ServiceAccount -o json | \
        jq -r --arg SA_NAME "${SA_NAME}" \
        '.items[] | select(.metadata.annotations["object.gdc.goog/subject"] == $SA_NAME and (.metadata.name |startswith("object-storage-key-std-sa-"))) | .metadata.name')
    echo "Bucket Credential Secret Name: ${BUCKET_CRED_SECRET_NAME}"
    

    This command filters secrets in the backups namespace to find the one annotated for the dbs-backup-sa and matching the standard naming convention.

  3. Get bucket endpoint and region details:

    export BUCKET_NAME=dbs-backups
    export FULL_BUCKET_NAME=$(kubectl --kubeconfig=${KUBECONFIG} get bucket -n ${SA_NAMESPACE} ${BUCKET_NAME} -o jsonpath='{.status.fullyQualifiedName}')
    export ENDPOINT=$(kubectl --kubeconfig=${KUBECONFIG} get bucket -n ${SA_NAMESPACE} ${BUCKET_NAME} -o jsonpath='{.status.endpoint}')
    export REGION=$(kubectl --kubeconfig=${KUBECONFIG} get bucket -n ${SA_NAMESPACE} ${BUCKET_NAME} -o jsonpath='{.status.region}')
    
    echo "FULL_BUCKET_NAME: ${FULL_BUCKET_NAME}"
    echo "ENDPOINT: ${ENDPOINT}"
    echo "REGION: ${REGION}"'
    

Create the BackupRepository

Create the BackupRepository resource, referencing the secret identified in the previous section, Identify the service account's credential secret and bucket details. This step must be completed using the kubectl CLI (the API method); there is no console equivalent.

  • Apply the following manifest, substituting the values found in the previous section:

    kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
    apiVersion: backup.gdc.goog/v1
    kind: BackupRepository
    metadata:
      name: dbs-backup-repository # This specific name is required for DBS
    spec:
      secretReference:
        namespace: "backups" # Namespace of the service account and the auto-generated secret
        name: BUCKET_CRED_SECRET_NAME
      endpoint: ENDPOINT
      type: "S3"
      s3Options:
        bucket: FULL_BUCKET_NAME
        region: REGION
        forcePathStyle: true
      importPolicy: "ReadWrite"
      force: true
    EOF
    

    Replace the following:

    • BUCKET_CRED_SECRET_NAME: the secret name.
    • ENDPOINT: the endpoint of the bucket.
    • FULL_BUCKET_NAME: the fully qualified bucket name.
    • REGION: the region of the bucket.
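
    If the environment variables from the previous section are still exported in your shell, you can let the shell substitute the values instead of editing the manifest by hand. This is the same manifest as above; leaving the heredoc delimiter unquoted makes the shell expand the ${...} references before kubectl reads the input:

    kubectl --kubeconfig=${KUBECONFIG} apply -f - <<EOF
    apiVersion: backup.gdc.goog/v1
    kind: BackupRepository
    metadata:
      name: dbs-backup-repository
    spec:
      secretReference:
        namespace: "${SA_NAMESPACE}"
        name: "${BUCKET_CRED_SECRET_NAME}"
      endpoint: "${ENDPOINT}"
      type: "S3"
      s3Options:
        bucket: "${FULL_BUCKET_NAME}"
        region: "${REGION}"
        forcePathStyle: true
      importPolicy: "ReadWrite"
      force: true
    EOF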

Verify the backup repository

Check the repository status to ensure it was set up correctly.

  1. Print the information for your backup repository:

    kubectl --kubeconfig MANAGEMENT_API_SERVER get backuprepository dbs-backup-repository -ojson | jq .status
    
  2. Verify that your output looks similar to the following. A NoError value for reconciliationError signals that the repository was set up as expected:

    {
      "conditions": [
        {
          "lastTransitionTime": "2025-11-13T00:36:09Z",
          "message": "Backup Repository reconciled successfully",
          "reason": "Ready",
          "status": "True",
          "type": "Ready"
        }
      ],
      "initialImportDone": true,
      "reconciliationError": "NoError",
      "sentinelEtag": "9b82fbb7-6ea2-444d-8878-ab91397ae961"
    }

    You can also run kubectl get backuprepository dbs-backup-repository without output flags to print a summary table, where the ERROR column shows NoError:

    NAME                    TYPE   POLICY      ERROR
    dbs-backup-repository   S3     ReadWrite   NoError
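
  3. If reconciliationError reports a value other than NoError, describe the resource to surface condition messages and recent events. This is standard kubectl, nothing DBS-specific:

    kubectl --kubeconfig MANAGEMENT_API_SERVER describe backuprepository dbs-backup-repository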