This page provides various YAML configuration examples for deploying and managing AlloyDB Omni on Kubernetes.
DBCluster examples
Minimal DBCluster
A basic configuration for running a DBCluster.
```yaml
# This is a minimal DBCluster spec. See v1_dbcluster_full.yaml for more configurations.
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-dbcluster-sample
type: Opaque
data:
  dbcluster-sample: "Q2hhbmdlTWUxMjM="  # Password is ChangeMe123
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: dbcluster-sample
spec:
  databaseVersion: "16.9.0"
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-dbcluster-sample
    resources:
      memory: 5Gi
      cpu: 1
      disks:
      - name: DataDisk
        size: 10Gi
```
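The Secret's data value must be the base64 encoding of the admin password. A quick sketch of producing that value and applying the manifest (the file name below is an assumption):

```shell
# Base64-encode the sample admin password for the Secret's data field.
echo -n 'ChangeMe123' | base64   # prints Q2hhbmdlTWUxMjM=

# Then apply the manifest (file name is an assumption):
# kubectl apply -f dbcluster-minimal.yaml
```

Note the `-n` flag: without it, `echo` appends a newline and the encoded value no longer matches the password.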
Full DBCluster
A complete DBCluster configuration showing more options.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-dbcluster-sample
type: Opaque
data:
  dbcluster-sample: "Q2hhbmdlTWUxMjM="  # Password is ChangeMe123
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: dbcluster-sample
spec:
  allowExternalIncomingTraffic: true
  availability:
    healthcheckPeriodSeconds: 30  # The default is 30 seconds. This is a new feature in 1.2.0. The minimum value is 1 and the maximum value is 86400.
    autoFailoverTriggerThreshold: 3  # The number of failures after which failover is triggered.
    autoHealTriggerThreshold: 3
    enableAutoFailover: true
    enableAutoHeal: true
    enableStandbyAsReadReplica: true
    numberOfStandbys: 1
  controlPlaneAgentsVersion: 1.6.0
  databaseVersion: 16.9.0
  databaseImageOSType: UBI9
  isDeleted: false
  mode: ""
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-dbcluster-sample
    allowExternalIncomingTrafficToInstance: false
    auditLogTarget: {}
    dbLoadBalancerOptions:
      annotations:
        networking.gke.io/load-balancer-type: "internal"
        lb.company.com/enabled: "true"
      gcp: {}
    features:
      columnarSpillToDisk:
        cacheSize: 50Gi
      ultraFastCache:
        cacheSize: 100Gi
        # either generic volume or local volume
        genericVolume:
          storageClass: "local-storage"
        # localVolume:
        #   path: "/mnt/disks/raid/0"
        #   nodeAffinity:
        #     required:
        #       nodeSelectorTerms:
        #       - matchExpressions:
        #         - key: "cloud.google.com/gke-local-nvme-ssd"
        #           operator: "In"
        #           values:
        #           - "true"
      googleMLExtension:
        config:
          vertexAIKeyRef: vertex-ai-key-alloydb  # secret used to enable AlloyDB Omni to access AlloyDB AI features
          vertexAIRegion: us-central1  # default
    resources:
      cpu: "12"
      disks:
      - name: DataDisk
        size: 1000Gi
        storageClass: px-ceph
      - name: LogDisk
        size: 10Gi
        storageClass: px-ceph
      - name: ObsDisk
        size: 4Gi
        storageClass: px-ceph
      - name: BackupDisk
        size: 10Gi
        storageClass: px-ceph
      memory: 100Gi
    walArchiveSetting:
      location: wal/log  # enable WAL archiving and archive logs to /archive/wal/log
    sidecarRef:
      name: cv-sidecar-config  # provide a sidecar config that is referenced here
    parameters:
      google_columnar_engine.enabled: "on"
      google_columnar_engine.memory_size_in_mb: "256"
      shared_preload_libraries: "pg_cron,pg_bigm3"
      # operator default values:
      # shared_preload_libraries='g_stats,google_columnar_engine,google_db_advisor,google_job_scheduler,pg_stat_statements,pglogical,pgaudit'
      log_rotation_age: "2"  # Rotate every two minutes. Set to "0" to disable age-based rotation. If unset, no age-based rotation.
      log_rotation_size: "400000"  # Rotate every 400,000 kB. Set to "0" to disable size-based rotation. If unset, rotate every 200,000 kB.
    schedulingconfig:
      tolerations:
      - effect: NoSchedule
        key: alloydb-node-type
        operator: Exists
      nodeaffinity:
        # requiredDuringSchedulingIgnoredDuringExecution: a strong condition; failing to meet it prevents pods from being scheduled.
        preferredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: alloydb-node-type
              operator: In
              values:
              - database
      podAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: security
                operator: In
                values:
                - S1
            topologyKey: "topology.kubernetes.io/zone"
    services:
      Logging: true
      Monitoring: true
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "example-local-pv"
spec:
  capacity:
    storage: 375Gi
  accessModes:
  - "ReadWriteOnce"
  persistentVolumeReclaimPolicy: "Retain"
  storageClassName: "local-storage"
  local:
    path: "/mnt/disks/raid/0"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        # the following example key applies to an operator that is deployed on
        # Google Cloud and uses the local SSD option
        - key: "cloud.google.com/gke-local-nvme-ssd"
          operator: "In"
          values:
          - "true"
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBInstance
metadata:
  name: dbcluster-sample-rp-1
spec:
  instanceType: ReadPool
  dbcParent:
    name: dbcluster-sample
  nodeCount: 2
  resources:
    memory: 6Gi
    cpu: 2
    disks:
    - name: DataDisk
      size: 15Gi
  schedulingconfig:
    tolerations:
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Exists"
      effect: "NoSchedule"
    nodeaffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - store
          topologyKey: "kubernetes.io/hostname"
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S1
          topologyKey: "topology.kubernetes.io/zone"
```
DBCluster with ML agent
Example configuration of the ML agent within a DBCluster.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-dbcluster-sample
type: Opaque
data:
  dbcluster-sample: "Q2hhbmdlTWUxMjM="  # Password is ChangeMe123
---
apiVersion: v1
kind: Secret
metadata:
  name: vertex-ai-key-alloydb
type: Opaque
data:
  private-key.json: ""
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: dbcluster-sample
spec:
  databaseVersion: "16.9.0"
  primarySpec:
    features:
      googleMLExtension:
        enabled: true
        config:
          vertexAIKeyRef: vertex-ai-key-alloydb
          vertexAIRegion: us-central1
    adminUser:
      passwordRef:
        name: db-pw-dbcluster-sample
    resources:
      memory: 5Gi
      cpu: 1
      disks:
      - name: DataDisk
        size: 10Gi
```
DBCluster with load balancer
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-dbcluster-sample
type: Opaque
data:
  dbcluster-sample: "Q2hhbmdlTWUxMjM="  # Password is ChangeMe123
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: dbcluster-sample
spec:
  databaseVersion: "16.9.0"
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-dbcluster-sample
    resources:
      memory: 5Gi
      cpu: 1
      disks:
      - name: DataDisk
        size: 10Gi
    dbLoadBalancerOptions:
      annotations:
        # Creates an internal LoadBalancer in GKE.
        networking.gke.io/load-balancer-type: "internal"
  allowExternalIncomingTraffic: true
```
DBCluster with Commvault sidecar
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-dbcluster-sample
type: Opaque
data:
  dbcluster-sample: "Q2hhbmdlTWUxMjM="  # Password is ChangeMe123
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: dbcluster-sample
spec:
  databaseVersion: "16.9.0"
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-dbcluster-sample
    resources:
      memory: 5Gi
      cpu: 1
      disks:
      - name: DataDisk
        size: 10Gi
      - name: LogDisk
        size: 10Gi
    walArchiveSetting:
      location: wal/log  # enable WAL archiving and archive logs to /archive/wal/log
    sidecarRef:
      name: cv-sidecar-config
```
Backup and restore
Backup plan
Example of scheduling full and incremental backups.
```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: BackupPlan
metadata:
  name: backupplan1
spec:
  dbclusterRef: dbcluster-sample
  backupRetainDays: 14
  paused: false
  backupSchedules:
    # Full backup at 00:00 every Sunday.
    full: "0 0 * * 0"
    # Incremental backup at 21:00 every day.
    incremental: "0 21 * * *"
```
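The `backupSchedules` values are standard five-field cron expressions (minute, hour, day of month, month, day of week, with `0` in the last field meaning Sunday). A small sketch that splits an expression into labeled fields can make a schedule easier to verify before applying it; `parse_cron` is a hypothetical helper, not part of the operator:

```python
# Minimal sanity check for five-field cron expressions like those in
# backupSchedules. Hypothetical helper for illustration only.
FIELDS = ["minute", "hour", "day_of_month", "month", "day_of_week"]

def parse_cron(expr: str) -> dict:
    """Split a cron expression into its five named fields."""
    parts = expr.split()
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected 5 fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

# "0 0 * * 0": minute 0, hour 0, any day/month, day-of-week 0 (Sunday)
print(parse_cron("0 0 * * 0"))
# "0 21 * * *": 21:00 every day
print(parse_cron("0 21 * * *"))
```

This only checks the field count; range validation (for example, that day-of-week is 0-6) is left to the scheduler.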
Restore from backup
```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: Restore
metadata:
  name: restore1
spec:
  sourceDBCluster: dbcluster-sample
  backup: backup1
```
Clone
```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: Restore
metadata:
  name: clone1
spec:
  sourceDBCluster: dbcluster-sample
  pointInTime: "2024-02-23T19:59:43Z"
  clonedDBClusterConfig:
    dbclusterName: new-dbcluster-sample
```
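The `pointInTime` value above is a UTC timestamp in RFC 3339 format. On Linux or macOS, a correctly formatted value for the current moment can be generated with `date`:

```shell
# Print the current time in the UTC format expected by pointInTime,
# e.g. 2024-02-23T19:59:43Z
date -u +"%Y-%m-%dT%H:%M:%SZ"
```

The chosen timestamp must, of course, fall within the WAL retention window of the source cluster for the clone to succeed.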
High availability and data resilience
Failover
Example of performing an unplanned failover to a standby instance.
```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: Failover
metadata:
  name: failover-sample
spec:
  dbclusterRef: dbcluster-sample
```
Switchover
Example of performing a controlled switchover to a standby instance.
```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: Switchover
metadata:
  name: switchover-sample
spec:
  dbclusterRef: dbcluster-sample
  newPrimary: aaaa-dbcluster-sample  # Replace with the standby instance you selected to switch with the primary instance.
```
Monitoring and connection pooling
PgBouncer configuration
```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: PgBouncer
metadata:
  name: mypgbouncer
spec:
  allowSuperUserAccess: true
  dbclusterRef: dbcluster-sample
  replicaCount: 1
  parameters:
    pool_mode: transaction
    ignore_startup_parameters: extra_float_digits
    default_pool_size: "15"
    max_client_conn: "800"
    max_db_connections: "160"
  podSpec:
    resources:
      memory: 1Gi
      cpu: 1
    image: "gcr.io/alloydb-omni-staging/g-pgbouncer:1.4.0"
  serviceOptions:
    type: "ClusterIP"
```
Sidecar example
```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: Sidecar
metadata:
  name: sidecar-sample
spec:
  sidecars:
  - image: busybox
    name: sidecar-sample
    volumeMounts:
    - name: obsdisk
      mountPath: /logs
    command: ["/bin/sh"]
    args:
    - -c
    - |
      while true
      do
        date
        set -x
        ls -lh /logs/diagnostic
        set +x
      done
```