You can generate multimodal embeddings in AlloyDB for PostgreSQL using the supported
Vertex AI multimodal embedding model, `multimodalembedding@001`. For the list of
supported models, see Supported models.
This page assumes that you're familiar with AlloyDB for PostgreSQL and generative AI concepts. For more information, see What are embeddings.
Before you begin
Before you use multimodal embeddings, do the following:
- Verify that the `google_ml_integration` extension is installed.
- Verify that the `google_ml_integration.enable_model_support` flag is set to `on`.
- Integrate with Vertex AI.
- Access data in Cloud Storage to generate multimodal embeddings.
Integrate with Vertex AI and install the extension
- Configure user access to Vertex AI models.
- Verify that the latest version of `google_ml_integration` is installed. To check the installed version, run the following command:

  ```sql
  SELECT extversion FROM pg_extension WHERE extname = 'google_ml_integration';
  ```

  The output is similar to the following:

  ```
   extversion
  ------------
   1.5.2
  (1 row)
  ```
If the extension isn't installed or if the installed version is earlier than 1.5.2, update the extension.
  ```sql
  CREATE EXTENSION IF NOT EXISTS google_ml_integration;
  ALTER EXTENSION google_ml_integration UPDATE;
  ```
If you experience issues when you run the preceding commands, or if the extension isn't updated to version 1.5.2 after you run the preceding commands, contact Google Cloud support.
To use the AlloyDB AI query engine functionality, set the
`google_ml_integration.enable_ai_query_engine` flag to `true`.

SQL
- Enable the AI query engine for the current session.
  ```sql
  SET google_ml_integration.enable_ai_query_engine = true;
  ```
- Enable features for a specific database across sessions.
  ```sql
  ALTER DATABASE DATABASE_NAME SET google_ml_integration.enable_ai_query_engine = 'on';
  ```
- Enable the AI query engine for a specific user across sessions and databases.
  ```sql
  ALTER ROLE postgres SET google_ml_integration.enable_ai_query_engine = 'on';
  ```
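To confirm that the flag is active in your current session, you can inspect it with standard PostgreSQL:

```sql
-- Returns 'on' when the AI query engine is enabled for this session
SHOW google_ml_integration.enable_ai_query_engine;
```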
Console
To modify the value of the
`google_ml_integration.enable_ai_query_engine` flag, follow the steps in Configure an instance's database flags.

gcloud
To use the gcloud CLI, you can install and initialize the Google Cloud CLI, or you can use Cloud Shell.
You can modify the value of the
`google_ml_integration.enable_ai_query_engine` flag. For more information, see Configure an instance's database flags.

```
gcloud alloydb instances update INSTANCE_ID \
  --database-flags google_ml_integration.enable_ai_query_engine=on \
  --region=REGION_ID \
  --cluster=CLUSTER_ID \
  --project=PROJECT_ID
```
Access data in Cloud Storage to generate multimodal embeddings
- To generate multimodal embeddings, refer to content in Cloud Storage using a `gs://` URI.
- Access Cloud Storage content through your current project's Vertex AI service agent. By default, the Vertex AI service agent already has permission to access buckets in the same project. For more information, see IAM roles and permissions index.
To access data in a Cloud Storage bucket in another Google Cloud project, run the following gcloud CLI command to grant the Storage Object Viewer role (`roles/storage.objectViewer`) to the Vertex AI service agent of your AlloyDB project.

```
gcloud projects add-iam-policy-binding <ANOTHER_PROJECT_ID> \
  --member="serviceAccount:service-<PROJECT_ID>@gcp-sa-aiplatform.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```

For more information, see Set and manage IAM policies on buckets.
To generate multimodal embeddings, select one of the following schemas.
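As an illustrative sketch only: an embedding call with `multimodalembedding@001` for an image in Cloud Storage typically looks like the following. The function name `ai.image_embedding` and its named parameters are assumptions, not taken from this page; verify the exact signature against the reference documentation for your `google_ml_integration` version.

```sql
-- Hypothetical sketch: generate a multimodal embedding for an image
-- stored in Cloud Storage, referenced by its gs:// URI.
-- ai.image_embedding and its parameters are assumed; check your
-- google_ml_integration version's reference before using.
SELECT ai.image_embedding(
  model_id => 'multimodalembedding@001',
  image    => 'gs://BUCKET_NAME/my_image.jpg',
  mimetype => 'image/jpeg'
);
```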