Install AlloyDB Omni on Kubernetes

This page provides an overview of the AlloyDB Omni Kubernetes operator, with instructions for using it to deploy AlloyDB Omni onto a Kubernetes cluster. This page assumes basic familiarity with Kubernetes operation.

For instructions on installing AlloyDB Omni onto a standard Linux environment, see Install AlloyDB Omni.

Overview

To deploy AlloyDB Omni onto a Kubernetes cluster, install the AlloyDB Omni Kubernetes operator, an extension to the Kubernetes API provided by Google.

You configure and control a Kubernetes-based AlloyDB Omni database cluster by pairing declarative manifest files with the kubectl utility, just like any other Kubernetes-based deployment. You don't use the AlloyDB Omni CLI, which is intended for deployments onto individual Linux machines and not on Kubernetes clusters.

Base image

Starting with version 1.5.0, AlloyDB Omni operator Kubernetes images are built upon Red Hat's Universal Base Image (UBI) 9. This transition enhances security, consistency, and compliance for your deployments.

SHA digest image references

To prevent supply chain attacks and meet OpenShift Certification requirements, the AlloyDB Omni operator uses SHA-256 digests instead of version tags for all container image references.

  • Automatic Upgrades: the AlloyDB Omni operator uses an internal ImageCatalog to manage these digests and to ensure reliable data plane rollbacks during failed upgrades.

  • Enablement: while enabled by default for the OpenShift Certified package, users of the OLM or Helm packages can manually enable digest references by setting the ENABLE_DIGEST_IMAGE_REFS environment variable to true using the Subscription config for OLM or the enableDigestImageRefs value in the Helm chart.
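In OLM deployments, the environment variable can be set through the Subscription object's config stanza. A minimal sketch, assuming a Subscription for the operator already exists (the name and namespace here are illustrative):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: alloydb-omni-operator   # illustrative Subscription name
  namespace: operators          # illustrative namespace
spec:
  config:
    env:
    - name: ENABLE_DIGEST_IMAGE_REFS
      value: "true"
```

With the Helm package, the equivalent is setting the chart value at install or upgrade time, for example with --set enableDigestImageRefs=true.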

Before you begin

Before you install AlloyDB Omni on a Kubernetes cluster with the AlloyDB Omni operator, make sure that you meet the following requirements.

Choose a download or installation option

When you manage workloads on a generic Kubernetes cluster, you can use either Helm or OLM. Helm is a universal package manager that uses Helm charts to install any workload, including operators, across all Kubernetes variants. OLM, the standard and preferred choice on OpenShift platforms, manages operator lifecycles with specialized OLM bundles.

Based on your environment and tooling, choose one of the following deployment methods:

Media: AlloyDB Omni operator with Helm chart
Download location and installation guide: Install AlloyDB Omni on Kubernetes
Deployment to: bring-your-own Kubernetes container environment—for example, on-premises, public clouds, Google Kubernetes Engine (GKE), Amazon EKS, and Azure AKS.

Tip: If your CD (continuous delivery) tooling is integrated with Helm, use this option.

Media: AlloyDB Omni operator with OLM bundle
Download location and installation guide: OperatorHub.io
Deployment to: bring-your-own Kubernetes container environment—for example, on-premises, public clouds, Google Kubernetes Engine (GKE), Amazon EKS, and Azure AKS.

To use an OLM bundle, install OLM on the Kubernetes cluster before you install the operator. For more information, see olm.operatorframework.io.

Tip: If your CD (continuous delivery) tooling already uses OLM, choose this option.

Media: OpenShift operator with OLM bundle
Download location and installation guide: OpenShift Container Platform web console
Deployment to: OpenShift environment

OpenShift, a variant of Kubernetes, uses OLM as its standard, built-in method for packaging and deploying operators.

Verify access

Verify that you have access to the following:

Fulfill hardware and software requirements

Each node in the Kubernetes cluster must have the following:

  • A minimum of two x86-64 (AMD64) CPUs.
  • At least 8 GB of RAM.
  • Linux kernel version 4.18 or later.
  • Control group (cgroup) v2 enabled.
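Assuming shell access to a node, the last two requirements can be checked quickly; stat reports cgroup2fs when cgroup v2 is mounted:

```shell
# Kernel version; must be 4.18 or later
uname -r

# Filesystem type mounted at the cgroup root; "cgroup2fs" indicates cgroup v2
stat -fc %T /sys/fs/cgroup
```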

Install the AlloyDB Omni operator

If you want to deploy AlloyDB Omni in your production environment, see Run AlloyDB Omni in production.

You can install the AlloyDB Omni operator using different methods, including Helm and the Operator Lifecycle Manager (OLM).

Helm

To install the AlloyDB Omni operator, follow these steps:

Note: AlloyDB Omni operator Helm charts are no longer available in Cloud Storage. You must download them from the OCI registry.
  1. Authenticate Docker and Helm:
    gcloud auth configure-docker gcr.io
    gcloud auth print-access-token | helm registry login -u oauth2accesstoken --password-stdin https://gcr.io
    
  2. Install the AlloyDB Omni operator from the OCI registry:
    helm install alloydbomni-operator oci://gcr.io/alloydb-omni/alloydbomni-operator \
      --version 1.7.0 \
      --create-namespace \
      --namespace alloydb-omni-system \
      --atomic \
      --timeout 5m
    

    Successful installation displays the following output:

    NAME: alloydbomni-operator
    LAST DEPLOYED: CURRENT_TIMESTAMP
    NAMESPACE: alloydb-omni-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    
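To confirm that the operator came up after the Helm install, a quick check (assumes the alloydb-omni-system namespace from the command above):

```
kubectl get pods -n alloydb-omni-system
helm list -n alloydb-omni-system
```

The operator pods should report a Running status.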

OLM

To install the AlloyDB Omni operator using the Operator Lifecycle Manager, follow these steps:

  1. Navigate to https://operatorhub.io/operator/alloydb-omni-operator.

  2. Click the Install button to display the instructions.

  3. Complete all installation steps.

  4. If you use custom certificate issuers, this step is optional. Otherwise, install the default certificate issuers by running the following commands:

    kubectl create ns NAMESPACE
    kubectl apply -f - <<EOF
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: alloydbomni-selfsigned-cluster-issuer
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: fleet-selfsigned-issuer
      namespace: NAMESPACE
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: fleet-serving-cert
      namespace: NAMESPACE
    spec:
      dnsNames:
      - fleet-webhook-service.alloydb-omni-system.svc
      - fleet-webhook-service.alloydb-omni-system.svc.cluster.local
      issuerRef:
        kind: Issuer
        name: fleet-selfsigned-issuer
      secretName: fleet-webhook-server-cert
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: local-selfsigned-issuer
      namespace: NAMESPACE
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: local-serving-cert
      namespace: NAMESPACE
    spec:
      dnsNames:
      - local-webhook-service.alloydb-omni-system.svc
      - local-webhook-service.alloydb-omni-system.svc.cluster.local
      issuerRef:
        kind: Issuer
        name: local-selfsigned-issuer
      secretName: local-webhook-server-cert
    EOF

    Replace NAMESPACE with the namespace where you have your operator—for example, alloydb-omni-system.
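    After applying these resources, you can optionally verify that cert-manager has reconciled the issuers and certificates; a sketch, using the same NAMESPACE placeholder:

    ```
    kubectl get clusterissuer alloydbomni-selfsigned-cluster-issuer
    kubectl get issuer,certificate -n NAMESPACE
    ```

    The certificates should reach the Ready condition once cert-manager issues them.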

OpenShift

To install the AlloyDB Omni operator on your Red Hat OpenShift environment using the OLM, follow these steps:

  1. Sign in to your Red Hat OpenShift web console.
  2. If you operate in an offline or disconnected environment, manually mirror the required images to your private registry using tools that preserve SHA digests, such as oc image mirror.
  3. In the OpenShift web console, navigate to Operators > OperatorHub. The AlloyDB Omni Operator is listed in the Certified catalog.

    Figure 1: The AlloyDB Omni operator in the OperatorHub
  4. In the AlloyDB Omni operator pane, click Install.

    Figure 2: The AlloyDB Omni operator pane
  5. If you use custom certificate issuers, this step is optional. Otherwise, install the default certificate issuers by running the following commands:

    kubectl create ns NAMESPACE
    kubectl apply -f - <<EOF
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: alloydbomni-selfsigned-cluster-issuer
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: fleet-selfsigned-issuer
      namespace: NAMESPACE
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: fleet-serving-cert
      namespace: NAMESPACE
    spec:
      dnsNames:
      - fleet-webhook-service.alloydb-omni-system.svc
      - fleet-webhook-service.alloydb-omni-system.svc.cluster.local
      issuerRef:
        kind: Issuer
        name: fleet-selfsigned-issuer
      secretName: fleet-webhook-server-cert
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: local-selfsigned-issuer
      namespace: NAMESPACE
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: local-serving-cert
      namespace: NAMESPACE
    spec:
      dnsNames:
      - local-webhook-service.alloydb-omni-system.svc
      - local-webhook-service.alloydb-omni-system.svc.cluster.local
      issuerRef:
        kind: Issuer
        name: local-selfsigned-issuer
      secretName: local-webhook-server-cert
    EOF

    Replace NAMESPACE with the namespace where you have your operator—for example, alloydb-omni-system.

Configure disconnected environments

For air-gapped OpenShift clusters, you must configure an ImageDigestMirrorSet to redirect image pulls from the public gcr.io repository to your private registry. This ensures that the AlloyDB Omni operator can pull the required images using their immutable SHA-256 digests.
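A minimal sketch of such a mirror configuration; the mirror registry hostname is a placeholder for your own:

```yaml
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: alloydb-omni-mirror       # illustrative name
spec:
  imageDigestMirrors:
  - source: gcr.io/alloydb-omni   # public source repository
    mirrors:
    - private-registry.example.com/alloydb-omni   # placeholder private mirror
```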

Configure GDC connected storage

To install the AlloyDB Omni operator on GDC connected, you need to follow additional steps to configure storage because GDC connected clusters don't set a default storage class. You must set a default storage class before you create an AlloyDB Omni database cluster.

To learn how to set Symcloud Storage as the default storage class, see Set Symcloud Storage as the default storage class.

For more information about changing the default for all other storage classes, see Change the default StorageClass.
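Marking a storage class as the cluster default uses the standard Kubernetes annotation; a sketch, where STORAGE_CLASS_NAME is the class you want to promote:

```
kubectl patch storageclass STORAGE_CLASS_NAME \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```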

Create a database cluster

An AlloyDB Omni database cluster contains all the storage and compute resources needed to run an AlloyDB Omni server, including the primary server, any replicas, and all of your data.

After you install the AlloyDB Omni operator on your Kubernetes cluster, you can create an AlloyDB Omni database cluster on the Kubernetes cluster by applying a manifest similar to the following:

apiVersion: v1
kind: Secret
metadata:
  name: db-pw-DB_CLUSTER_NAME
type: Opaque
data:
  DB_CLUSTER_NAME: "ENCODED_PASSWORD"
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: DB_CLUSTER_NAME
spec:
  databaseVersion: "18.1.0"
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-DB_CLUSTER_NAME
    resources:
      cpu: CPU_COUNT
      memory: MEMORY_SIZE
      disks:
      - name: DataDisk
        size: DISK_SIZE

Replace the following:

  • DB_CLUSTER_NAME: the name of this database cluster—for example, my-db-cluster.

  • ENCODED_PASSWORD: the database login password for the default postgres user role, encoded as a base64 string—for example, Q2hhbmdlTWUxMjM= for ChangeMe123.

  • CPU_COUNT: the number of CPUs available to each database instance in this database cluster.

  • MEMORY_SIZE: the amount of memory per database instance of this database cluster. We recommend setting this to 8 gigabytes per CPU. For example, if you set cpu to 2 earlier in this manifest, then we recommend setting memory to 16Gi.

  • DISK_SIZE: the disk size per database instance—for example, 10Gi.
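The base64 value for ENCODED_PASSWORD can be produced with the base64 utility; printf avoids encoding a trailing newline, which echo would add:

```shell
# Encode the example password; the output goes into the Secret's data field
printf '%s' 'ChangeMe123' | base64
# → Q2hhbmdlTWUxMjM=
```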

After you apply this manifest, your Kubernetes cluster contains an AlloyDB Omni database cluster with the specified memory, CPU, and storage configuration. To establish a test connection with the new database cluster, see Connect using the preinstalled psql.

For more information about Kubernetes manifests and how to apply them, see Managing resources.
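For example, if you save the manifest above as db-cluster.yaml, creating and inspecting the database cluster follows the usual declarative workflow:

```
kubectl apply -f db-cluster.yaml
kubectl get dbclusters.alloydbomni.dbadmin.goog
```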

Scale a database cluster

To scale the compute resources for your database cluster, update the cpu and memory values in your db-cluster.yaml manifest and apply the changes. The scaling process depends on whether you opt for a regular scaling operation or a low downtime scaling operation.

Regular scaling

When you update your scaling specification and apply the manifest without any further configuration, the database pods restart immediately. This causes brief downtime across the primary and standby instances while the new resource allocations take effect.

Low downtime scaling

For high availability (HA) clusters with at least one standby, you can minimize downtime during scaling by using the Low Downtime Maintenance (LDTM) prepare-and-switch strategy. This strategy applies the scaling changes to the standby first, performs a swift switchover, and then applies the changes to the original primary instance. You can scale up or down with the LDTM strategy.

To enable and monitor low downtime scaling, follow these steps:

  1. Enable low downtime scaling. Add the enableLDTM annotation to your database cluster:

    kubectl annotate dbclusters.alloydbomni.dbadmin.goog DB_CLUSTER_NAME dbcluster.dbadmin.goog/enableLDTM=true
    

    Replace DB_CLUSTER_NAME with the name of your database cluster.

  2. Apply the updated scaling specs. Update the cpu and memory values under primarySpec.resources in your manifest, and apply the changes:

    kubectl apply -f db-cluster.yaml
    
  3. Monitor the scaling process. Check the LDTMScalingInProgress status condition to monitor the operation:

    kubectl get dbclusters.alloydbomni.dbadmin.goog DB_CLUSTER_NAME -o yaml | yq '.status.conditions[] | select(.type == "LDTMScalingInProgress")'
    

    Replace DB_CLUSTER_NAME with the name of your database cluster.

    While the operation is in progress, the condition's status is True. When the scaling is complete, the status changes to False.
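    While scaling is underway, the relevant entry in .status.conditions looks roughly like the following (illustrative output):

    ```yaml
    - type: LDTMScalingInProgress
      status: "True"      # changes to "False" when scaling completes
    ```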

Limitations

  • LDTM scaling is only supported for HA clusters with at least one standby.
  • You cannot perform two LDTM operations simultaneously. For example, you can either use LDTM to scale database clusters or to perform minor version upgrades, but not both at the same time.
  • You must manually roll back after an LDTM scaling operation fails.

What's next