Generate text embeddings

The google_ml_integration extension includes embedding functions in two different namespaces: public and google_ml. This page describes how to generate text embeddings using functions from these namespaces.

The embedding() function in the public schema can be used with any Vertex AI embedding model without registering the endpoint. If you want to pass custom information such as the task type, register the endpoint, and then use the google_ml.embedding() function in the google_ml schema. For more information about registering an endpoint, see Register a model.

How embeddings work

Imagine a database running on AlloyDB with the following characteristics:

  • The database contains a table, items. Each row in this table describes an item that your business sells.

  • The items table contains a column, complaints. This TEXT column stores buyer complaints logged about each item.

  • The database integrates with the Vertex AI Model Garden, giving it access to the gemini-embedding-001 model.

Even though this database stores complaints about items, these complaints are stored as plain text, making them difficult to query. For example, to see which items have the most complaints from customers who received the wrong color of merchandise, you can perform ordinary SQL queries on the table that look for various keyword matches. However, this approach only matches rows that contain those exact keywords.

For example, a basic SQL query such as SELECT * FROM items WHERE complaints LIKE '%wrong color%' doesn't return a row whose complaints field contains only The picture shows a blue one, but the one I received was red.

SQL queries using LLM-powered embeddings can help return semantically similar responses for such queries. By applying embeddings, you can query the table in this example for items whose complaints have semantic similarity to a given text prompt, such as It was the wrong color.
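With embeddings, a query can rank rows by semantic distance instead of keyword matches. The following query is an illustrative sketch, assuming the vector extension is installed and that the items table has an id column; the distance operator and cast are pgvector conventions, not requirements stated in this guide:

   -- Sketch: rank items by how semantically close each stored complaint
   -- is to the prompt "It was the wrong color". Column names are illustrative.
   SELECT id, complaints
   FROM items
   ORDER BY embedding('gemini-embedding-001', complaints)::vector
       <=> embedding('gemini-embedding-001', 'It was the wrong color')::vector
   LIMIT 10;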


Before you begin

To let AlloyDB generate embeddings, do the following:

  • Connect to your database using psql or AlloyDB for PostgreSQL Studio as the postgres user.
  • Verify that the google_ml_integration extension is installed.
  • Verify that the google_ml_integration.enable_model_support flag is set to on.
  • Verify that the google_ml_integration extension is version 1.5 or higher and that the google_ml_integration.enable_faster_embedding_generation flag is set to on.

    To check your extension version, run the following command:

    SELECT extversion FROM pg_extension WHERE extname = 'google_ml_integration';
    

    If you need to update the extension, use the ALTER EXTENSION google_ml_integration UPDATE; command.
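
    To confirm the current flag values from your session, you can also use SHOW. This is a sketch; these flags are normally set on the AlloyDB instance through the Google Cloud console or gcloud:

    -- Sketch: display the current values of the required flags
    SHOW google_ml_integration.enable_model_support;
    SHOW google_ml_integration.enable_faster_embedding_generation;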

  • Before you can generate embeddings from an AlloyDB database, you must configure AlloyDB to work with Vertex AI. For more information, see Integrate your database with Vertex AI.

  • Grant permissions to database users to generate embeddings.

    To generate embeddings, grant the EXECUTE permission on the google_ml.embedding function to the user:

    \c DB_NAME
    GRANT EXECUTE ON FUNCTION google_ml.embedding TO USER_NAME;
    

    Replace the following:

    • DB_NAME: the name of the database on which the permissions are granted.

    • USER_NAME: the name of the user for whom the permissions are granted.

    For managing and monitoring auto embedding generation, users have SELECT access to the google_ml.embed_gen_progress and google_ml.embed_gen_settings tables by default.

    To let a user manage auto embedding generation, grant INSERT, UPDATE, and DELETE permissions on the google_ml.embed_gen_progress and google_ml.embed_gen_settings tables:

    GRANT INSERT, UPDATE, DELETE ON google_ml.embed_gen_progress TO USER_NAME;
    GRANT INSERT, UPDATE, DELETE ON google_ml.embed_gen_settings TO USER_NAME;
    
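    With the default SELECT access, a user can check generation progress directly. This query is an illustrative sketch; the columns returned depend on your extension version:

    -- Sketch: inspect auto embedding generation progress
    SELECT * FROM google_ml.embed_gen_progress;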

Generate embeddings

Use the google_ml.embedding() SQL function to call text embedding models.

To call the model and generate embeddings, run the following query:

SELECT
 google_ml.embedding(
   model_id => 'MODEL_ID',
   content => 'CONTENT');

Replace the following:

  • MODEL_ID: the qualified model ID, for example, gemini-embedding-001.
  • CONTENT: the text to translate into a vector embedding.
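A common next step is to store the generated embeddings for later similarity search. The following statements are an illustrative sketch, not part of this guide's required steps: they assume the vector extension is installed, and the complaints_embedding column name and the vector(3072) size (the model's default output dimensionality) are assumptions for this example:

   -- Sketch: add a vector column and populate it with embeddings
   ALTER TABLE items ADD COLUMN complaints_embedding vector(3072);

   UPDATE items
   SET complaints_embedding = google_ml.embedding(
       model_id => 'gemini-embedding-001',
       content => complaints)::vector;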

Examples for generating embeddings

This section lists examples of generating embeddings using registered model endpoints.

Gemini embedding models

To generate embeddings for a registered gemini-embedding-001 model endpoint, run the following statement:

   SELECT
     google_ml.embedding(
       model_id => 'gemini-embedding-001',
       content => 'AlloyDB is a managed, cloud-hosted SQL database service');

If your AlloyDB cluster and the Vertex AI endpoint are in different projects, then set the model_id to the qualified path of the endpoint, for example, projects/PROJECT_ID/locations/REGION_ID/publishers/google/models/gemini-embedding-001.

To generate embeddings using the qualified endpoint path, run the following statement:

   SELECT
     google_ml.embedding(
       model_id => 'projects/PROJECT_ID/locations/REGION_ID/publishers/google/models/gemini-embedding-001',
       content => 'AlloyDB is a managed, cloud-hosted SQL database service');

OpenAI embedding model

To generate embeddings for a registered OpenAI text-embedding-ada-002 model endpoint, run the following statement:

   SELECT
     google_ml.embedding(
       model_id => 'text-embedding-ada-002',
       content => 'e-mail spam');

To generate embeddings for a registered OpenAI text-embedding-3-small or text-embedding-3-large model endpoint, run the following statement:

   SELECT
     google_ml.embedding(
       model_id => 'text-embedding-3-small',
       content => 'Vector embeddings in AI');
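
To use the registered text-embedding-3-large endpoint instead, the call is the same apart from the model ID (a sketch based on the statement above):

   SELECT
     google_ml.embedding(
       model_id => 'text-embedding-3-large',
       content => 'Vector embeddings in AI');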

What's next