Depending on your automation needs, you can choose between two orchestrator interfaces:
- AlloyDB Omni orchestrator CLI (alloydbctl): recommended for environments that use shell scripts for automation.
- AlloyDB Omni orchestrator Ansible collection: recommended for environments that use Ansible-based automation.
Limitations
The following limitations apply to this release:
- This release only supports AlloyDB Omni PostgreSQL 18.
- This release only supports RHEL version 9-compatible software packages.
- This release only supports Intel x86 64-bit platforms.
- Major version upgrades are not supported in this release.
- This release does not include instructions to set up Disaster Recovery.
- This release does not include instructions to set up read pool instances.
- The AlloyDB Omni monitor does not support SSL connections. You must deploy your monitoring dashboard servers on the same private network as the AlloyDB Omni nodes.
- AlloyDB Omni assumes that SELinux, when present, is configured on the host to be permissive, including access to the file system.
Before you begin
Before starting the deployment, ensure you meet the following requirements:
Obtain AlloyDB Omni packages
AlloyDB Omni software packages are in Preview. You must sign up using the AlloyDB Omni for Linux Preview signup form to receive the RPM files.
Set up a software repository
AlloyDB Omni RPMs are maintained in a global YUM repository. Alternatively, if your organization manages software through a private YUM repository, add the RPM packages provided after signup to that repository using your management tools. You need your private repository URL during the deployment process.
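If you publish the RPMs to a private repository, a minimal sketch with `createrepo_c` might look like the following. The web root path is an assumption for illustration; your organization's repository tooling and layout may differ:

```shell
# Assumed web root; adapt to your repository layout and tooling.
sudo mkdir -p /var/www/html/alloydbomni
sudo cp alloydb-omni-*.rpm /var/www/html/alloydbomni/
# Generate YUM repository metadata (requires the createrepo_c package)
sudo createrepo_c /var/www/html/alloydbomni
```

The resulting directory, served over HTTP or HTTPS, is the private repository URL you supply later in the deployment.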
Provision machines
Provision a set of VMs running RHEL 9, including:
- Controller: 1 node. Orchestrates the deployment and must have passwordless SSH access to all other nodes.
- Database: 1 or 3 nodes. Run the AlloyDB Omni core and monitoring services.
- Load balancer: Optional, 2 nodes. Run HAProxy and optional connection poolers like PgBouncer.
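Because the controller must reach every other node over passwordless SSH, it can help to verify connectivity before deploying. The following sketch uses hypothetical hostnames (`db-node-1` and so on); substitute your own inventory:

```shell
# Check passwordless SSH from the controller to each node.
# Hostnames below are placeholders; replace them with your inventory.
nodes="db-node-1 db-node-2 db-node-3 lb-node-1 lb-node-2"
unreachable=0
for host in $nodes; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
    echo "OK: $host"
  else
    echo "FAILED: $host"
    unreachable=$((unreachable + 1))
  fi
done
echo "Unreachable nodes: $unreachable"
```

`BatchMode=yes` makes SSH fail immediately instead of prompting for a password, so any `FAILED` line indicates a node that still needs key-based access configured.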
Configure network and firewall
Configure your firewall to allow the following TCP ports based on your architecture:
| Port | Protocol | Node | Service | Traffic | Notes |
|---|---|---|---|---|---|
| 5432 | TCP | Load balancer | HAProxy | Incoming | On the load balancer node if applicable |
| 6432 | TCP | Load balancer | PgBouncer | Incoming | Connection pooler if applicable |
| 9187 | TCP | Database | AlloyDB monitor | Incoming | Monitor if applicable; does not support TLS |
| 5432 | TCP | Database | AlloyDB | Incoming | 3-node resilient architecture |
| 6432 | TCP | Database | PgBouncer | Incoming | Connection pooler if applicable on 3-node resilient architecture |
| 2380 | TCP | etcd | etcd | Incoming | For the etcd peer-to-peer protocol |
| - | VRRP | Load balancer | Keepalived | Incoming | Keepalived leader election |
| - | VRRP | Database | Keepalived | Incoming | Keepalived leader election on 3-node resilient architecture |
| 6703 | TCP | Cluster Manager | Cluster Manager | Incoming | Cluster Manager gRPC port |
| 6702 | TCP | All nodes | Node Manager | Incoming | Node Manager gRPC port |
| 6700 | TCP | All nodes | Node Manager | Incoming | Node Manager Init gRPC port |
| 8086 | HTTP | All nodes | Node Manager | Incoming | Node Manager HTTP port for primary or standby checks |
| 2379 | TCP | etcd | etcd | Incoming | For Cluster Manager to work with etcd |
| 443 | HTTPS | Database | Cloud Storage | Outgoing | For sending backup data |
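If your nodes run firewalld (the RHEL 9 default), the table above translates into `firewall-cmd` rules. The following is an illustrative sketch for a database node in the 3-node resilient architecture; the port list comes from the table, and you should adjust it for each node's role:

```shell
# Open the database-node ports from the table above (firewalld).
sudo firewall-cmd --permanent --add-port=5432/tcp   # AlloyDB
sudo firewall-cmd --permanent --add-port=6432/tcp   # PgBouncer, if used
sudo firewall-cmd --permanent --add-port=9187/tcp   # AlloyDB monitor, if used
sudo firewall-cmd --permanent --add-port=6702/tcp   # Node Manager gRPC
sudo firewall-cmd --permanent --add-port=6700/tcp   # Node Manager init gRPC
sudo firewall-cmd --permanent --add-port=8086/tcp   # Node Manager HTTP checks
sudo firewall-cmd --reload
```

Outgoing HTTPS traffic on port 443 (for backups to Cloud Storage) is allowed by default firewalld policies and usually needs no extra rule.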
To ensure SELinux is permissive, run the following commands on all database and load balancer nodes. These commands are not required for the control node:
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
sudo setenforce 0
Install the orchestrator
Ansible
Use this method if you plan to use Ansible playbooks to manage your clusters.
On your control node, install Ansible and the required Python libraries:
# Install EPEL for RHEL 9
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm -y
# Install Ansible and dependencies
sudo dnf install ansible
sudo dnf install python3-grpcio python3-protobuf python3-googleapis-common-protos
Download the Ansible collection tar file and its signature, verify the package, and install it:
# Download the collection and its signature
wget ALLOYDBOMNI_SERVER/google-alloydbomni_orchestrator-0.1.0-1.tar.gz
wget ALLOYDBOMNI_SERVER/google-alloydbomni_orchestrator-0.1.0-1.tar.gz.asc
# Import the Google Linux signing key
curl -sS https://dl.google.com/linux/linux_signing_key.pub | gpg --import
# Verify the collection tarball
gpg --verify google-alloydbomni_orchestrator-0.1.0-1.tar.gz.asc \
  google-alloydbomni_orchestrator-0.1.0-1.tar.gz
# Install the collection
ansible-galaxy collection install google-alloydbomni_orchestrator-0.1.0-1.tar.gz
# Verify the installation
ansible-galaxy collection list | grep alloydbomni_orchestrator
Replace ALLOYDBOMNI_SERVER with the URL of the server where the AlloyDB Omni software packages, including the Ansible collection tar file, are hosted. This source is provided to you after you sign up for the AlloyDB Omni for Linux Preview.
alloydbctl
Use this method if you prefer a command-line utility for managing resources.
Create a YUM configuration file on the control node at /etc/yum.repos.d/alloydbomni_orchestrator.repo:
[alloydbomni_orchestrator]
name=Google AlloyDB Omni orchestrator packages
baseurl=REPOSITORY_SERVER
enabled=1
repo_gpgcheck=0
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub
Replace REPOSITORY_SERVER with the URL of the YUM repository where the AlloyDB Omni orchestrator RPM packages are hosted. This source is provided to you after you sign up for the AlloyDB Omni for Linux Preview.
Install the package and add the binary to your system's PATH:
sudo dnf install alloydbomni_orchestrator
export PATH=$PATH:/usr/local/bin
Verify the installation by running:
alloydbctl --help