Generate text embeddings by using the ML.GENERATE_EMBEDDING function
This document shows you how to create a BigQuery ML
remote model
that references an embedding model. You then use that model with the
ML.GENERATE_EMBEDDING
function
to create text embeddings by using data from a BigQuery
standard table.
The following types of remote models are supported:

- Remote models over Vertex AI text embedding models, for example gemini-embedding-001.
- Remote models over the Vertex AI multimodal embedding model (multimodalembedding@001).
- Remote models over supported open embedding models.
Required roles
To create a remote model and use the ML.GENERATE_EMBEDDING
function, you
need the following Identity and Access Management (IAM) roles:
- Create and use BigQuery datasets, tables, and models: BigQuery Data Editor (roles/bigquery.dataEditor) on your project.
- Create, delegate, and use BigQuery connections: BigQuery Connections Admin (roles/bigquery.connectionsAdmin) on your project.

  If you don't have a default connection configured, you can create and set one as part of running the CREATE MODEL statement. To do so, you must have BigQuery Admin (roles/bigquery.admin) on your project. For more information, see Configure the default connection.
- Grant permissions to the connection's service account: Project IAM Admin (roles/resourcemanager.projectIamAdmin) on the project that contains the Vertex AI endpoint. This is the current project for remote models that you create by specifying the model name as an endpoint. This is the project identified in the URL for remote models that you create by specifying a URL as an endpoint.
- Create BigQuery jobs: BigQuery Job User (roles/bigquery.jobUser) on your project.
These predefined roles contain the permissions required to perform the tasks in this document. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
- Create a dataset: bigquery.datasets.create
- Create, delegate, and use a connection: bigquery.connections.*
- Set service account permissions: resourcemanager.projects.getIamPolicy and resourcemanager.projects.setIamPolicy
- Create a model and run inference:
  - bigquery.jobs.create
  - bigquery.models.create
  - bigquery.models.getData
  - bigquery.models.updateData
  - bigquery.models.updateMetadata
- Query table data: bigquery.tables.getData
You might also be able to get these permissions with custom roles or other predefined roles.
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

  Roles required to select or create a project

  - Select a project: Selecting a project doesn't require a specific IAM role. You can select any project that you've been granted a role on.
  - Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
- Verify that billing is enabled for your Google Cloud project.
- Enable the BigQuery, BigQuery Connection, and Vertex AI APIs.

  Roles required to enable APIs

  To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
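If you prefer the command line, you can enable these APIs with the gcloud services enable command. This is a minimal sketch; the service names shown are the standard ones for the BigQuery, BigQuery Connection, and Vertex AI APIs:

gcloud services enable bigquery.googleapis.com \
    bigqueryconnection.googleapis.com \
    aiplatform.googleapis.com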
Create a dataset
Create a BigQuery dataset to contain your resources:
Console
1. In the Google Cloud console, go to the BigQuery page.
2. In the left pane, click Explorer. If you don't see the left pane, click Expand left pane to open the pane.
3. In the Explorer pane, click your project name.
4. Click View actions > Create dataset.
5. On the Create dataset page, do the following:
   1. For Dataset ID, type a name for the dataset.
   2. For Location type, select Region or Multi-region.
      - If you selected Region, then select a location from the Region list.
      - If you selected Multi-region, then select US or Europe from the Multi-region list.
   3. Click Create dataset.
bq
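In a command-line environment, create the dataset with the bq mk command. This is a minimal sketch; replace LOCATION with your dataset location and DATASET_ID with a name for the dataset:

bq --location=LOCATION mk --dataset DATASET_ID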
Create a connection
You can skip this step if you have a default connection configured or if you have the BigQuery Admin role.
Create a Cloud resource connection for the remote model to use, and get the connection's service account. Create the connection in the same location as the dataset that you created in the previous step.
Select one of the following options:
Console
1. Go to the BigQuery page.
2. In the Explorer pane, click Add data. The Add data dialog opens.
3. In the Filter By pane, in the Data Source Type section, select Business Applications.

   Alternatively, in the Search for data sources field, you can enter Vertex AI.
4. In the Featured data sources section, click Vertex AI.
5. Click the Vertex AI Models: BigQuery Federation solution card.
6. In the Connection type list, select Vertex AI remote models, remote functions, BigLake and Spanner (Cloud Resource).
7. In the Connection ID field, enter a name for your connection.
8. Click Create connection.
9. Click Go to connection.
10. In the Connection info pane, copy the service account ID for use in a later step.
bq
1. In a command-line environment, create a connection:

   bq mk --connection --location=REGION --project_id=PROJECT_ID \
       --connection_type=CLOUD_RESOURCE CONNECTION_ID

   The --project_id parameter overrides the default project.

   Replace the following:

   - REGION: your connection region
   - PROJECT_ID: your Google Cloud project ID
   - CONNECTION_ID: an ID for your connection

   When you create a connection resource, BigQuery creates a unique system service account and associates it with the connection.

   Troubleshooting: If you get the following connection error, update the Google Cloud SDK:

   Flags parsing error: flag --connection_type=CLOUD_RESOURCE: value should be one of...

2. Retrieve and copy the service account ID for use in a later step:

   bq show --connection PROJECT_ID.REGION.CONNECTION_ID

   The output is similar to the following:

   name                       properties
   1234.REGION.CONNECTION_ID  {"serviceAccountId": "connection-1234-9u56h9@gcp-sa-bigquery-condel.iam.gserviceaccount.com"}
Terraform
Use the google_bigquery_connection resource.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

The following example creates a Cloud resource connection named my_cloud_resource_connection in the US region:
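A minimal sketch of such a configuration follows; the attributes match the google_bigquery_connection resource schema, and PROJECT_ID is a placeholder:

resource "google_bigquery_connection" "default" {
  connection_id = "my_cloud_resource_connection"
  project       = "PROJECT_ID"
  location      = "US"

  # An empty cloud_resource block creates a Cloud resource connection.
  cloud_resource {}
}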
To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell
1. Launch Cloud Shell.
2. Set the default Google Cloud project where you want to apply your Terraform configurations.

   You only need to run this command once per project, and you can run it in any directory.

   export GOOGLE_CLOUD_PROJECT=PROJECT_ID

   Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).
1. In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension, for example main.tf. In this tutorial, the file is referred to as main.tf.

   mkdir DIRECTORY && cd DIRECTORY && touch main.tf
2. If you are following a tutorial, you can copy the sample code in each section or step.

   Copy the sample code into the newly created main.tf.

   Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
3. Review and modify the sample parameters to apply to your environment.
4. Save your changes.
5. Initialize Terraform. You only need to do this once per directory.

   terraform init

   Optionally, to use the latest Google provider version, include the -upgrade option:

   terraform init -upgrade
Apply the changes
1. Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:

   terraform plan

   Make corrections to the configuration as necessary.
2. Apply the Terraform configuration by running the following command and entering yes at the prompt:

   terraform apply

   Wait until Terraform displays the "Apply complete!" message.
3. Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Grant a role to the remote model connection's service account
You must grant the connection's service account the Vertex AI User role.
If you plan to specify the endpoint as a URL when you create the remote model, for example endpoint = 'https://us-central1-aiplatform.googleapis.com/v1/projects/myproject/locations/us-central1/publishers/google/models/text-embedding-005', grant this role in the same project that you specify in the URL.

If you plan to specify the endpoint by using the model name when you create the remote model, for example endpoint = 'text-embedding-005', grant this role in the same project where you plan to create the remote model.

Granting the role in a different project results in the error bqcx-1234567890-wxyz@gcp-sa-bigquery-condel.iam.gserviceaccount.com does not have the permission to access resource.
To grant the role, follow these steps:
Console
1. Go to the IAM & Admin page.
2. Click Grant access. The Add principals dialog opens.
3. In the New principals field, enter the service account ID that you copied earlier.
4. In the Select a role field, select Vertex AI, and then select Vertex AI User.
5. Click Save.
gcloud
Use the gcloud projects add-iam-policy-binding command:

gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' \
    --member='serviceAccount:MEMBER' \
    --role='roles/aiplatform.user' \
    --condition=None

Replace the following:

- PROJECT_NUMBER: your project number
- MEMBER: the service account ID that you copied earlier
Choose an open model deployment method
If you are creating a remote model over a
supported open model,
you can automatically deploy the open model at the same time that
you create the remote model by specifying the Vertex AI
Model Garden or Hugging Face model ID in the CREATE MODEL
statement.
Alternatively, you can manually deploy the open model first, and then use that
open model with the remote model by specifying the model
endpoint in the CREATE MODEL
statement. For more information, see
Deploy open models.
Create a BigQuery ML remote model
Create a remote model:
New open models
1. In the Google Cloud console, go to the BigQuery page.
2. Using the SQL editor, create a remote model:

   CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
     REMOTE WITH CONNECTION {DEFAULT | `PROJECT_ID.REGION.CONNECTION_ID`}
     OPTIONS (
       {HUGGING_FACE_MODEL_ID = 'HUGGING_FACE_MODEL_ID' | MODEL_GARDEN_MODEL_NAME = 'MODEL_GARDEN_MODEL_NAME'}
       [, HUGGING_FACE_TOKEN = 'HUGGING_FACE_TOKEN' ]
       [, MACHINE_TYPE = 'MACHINE_TYPE' ]
       [, MIN_REPLICA_COUNT = MIN_REPLICA_COUNT ]
       [, MAX_REPLICA_COUNT = MAX_REPLICA_COUNT ]
       [, RESERVATION_AFFINITY_TYPE = {'NO_RESERVATION' | 'ANY_RESERVATION' | 'SPECIFIC_RESERVATION'} ]
       [, RESERVATION_AFFINITY_KEY = 'compute.googleapis.com/reservation-name' ]
       [, RESERVATION_AFFINITY_VALUES = RESERVATION_AFFINITY_VALUES ]
       [, ENDPOINT_IDLE_TTL = ENDPOINT_IDLE_TTL ]
     );
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
- MODEL_NAME: the name of the model.
- REGION: the region used by the connection.
- CONNECTION_ID: the ID of your BigQuery connection.

  You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.
- HUGGING_FACE_MODEL_ID: a STRING value that specifies the model ID for a supported Hugging Face model, in the format provider_name/model_name. For example, deepseek-ai/DeepSeek-R1. You can get the model ID by clicking the model name in the Hugging Face Model Hub and then copying the model ID from the top of the model card.
- MODEL_GARDEN_MODEL_NAME: a STRING value that specifies the model ID and model version of a supported Vertex AI Model Garden model, in the format publishers/publisher/models/model_name@model_version. For example, publishers/openai/models/gpt-oss@gpt-oss-120b. You can get the model ID by clicking the model card in the Vertex AI Model Garden and then copying the model ID from the Model ID field. You can get the default model version by copying it from the Version field on the model card. To see other model versions that you can use, click Deploy model and then click the Resource ID field.
- HUGGING_FACE_TOKEN: a STRING value that specifies the Hugging Face User Access Token to use. You can only specify a value for this option if you also specify a value for the HUGGING_FACE_MODEL_ID option.

  The token must have at least the read role; tokens with a broader scope are also acceptable. This option is required when the model identified by the HUGGING_FACE_MODEL_ID value is a Hugging Face gated or private model.

  Some gated models require explicit agreement to their terms of service before access is granted. To agree to these terms, follow these steps:

  1. Navigate to the model's page on the Hugging Face website.
  2. Locate and review the model's terms of service. A link to the service agreement is typically found on the model card.
  3. Accept the terms as prompted on the page.
- MACHINE_TYPE: a STRING value that specifies the machine type to use when deploying the model to Vertex AI. For information about supported machine types, see Machine types. If you don't specify a value for the MACHINE_TYPE option, the Vertex AI Model Garden default machine type for the model is used.
- MIN_REPLICA_COUNT: an INT64 value that specifies the minimum number of machine replicas used when deploying the model on a Vertex AI endpoint. The service increases or decreases the replica count as required by the inference load on the endpoint. The number of replicas used is never lower than the MIN_REPLICA_COUNT value and never higher than the MAX_REPLICA_COUNT value. The MIN_REPLICA_COUNT value must be in the range [1, 4096]. The default value is 1.
- MAX_REPLICA_COUNT: an INT64 value that specifies the maximum number of machine replicas used when deploying the model on a Vertex AI endpoint. The service increases or decreases the replica count as required by the inference load on the endpoint. The number of replicas used is never lower than the MIN_REPLICA_COUNT value and never higher than the MAX_REPLICA_COUNT value. The MAX_REPLICA_COUNT value must be in the range [1, 4096]. The default value is the MIN_REPLICA_COUNT value.
- RESERVATION_AFFINITY_TYPE: determines whether the deployed model uses Compute Engine reservations to provide assured virtual machine (VM) availability when serving predictions, and specifies whether the model uses VMs from all available reservations or just one specific reservation. For more information, see Compute Engine reservation affinity.

  You can only use Compute Engine reservations that are shared with Vertex AI. For more information, see Allow a reservation to be consumed.

  Supported values are as follows:

  - NO_RESERVATION: no reservation is consumed when your model is deployed to a Vertex AI endpoint. Specifying NO_RESERVATION has the same effect as not specifying a reservation affinity.
  - ANY_RESERVATION: the Vertex AI model deployment consumes VMs from Compute Engine reservations that are in the current project or that are shared with the project, and that are configured for automatic consumption. Only VMs that meet the following qualifications are used:

    - They use the machine type specified by the MACHINE_TYPE value.
    - If the BigQuery dataset in which you are creating the remote model is in a single region, the reservation must be in the same region. If the dataset is in the US multi-region, the reservation must be in the us-central1 region. If the dataset is in the EU multi-region, the reservation must be in the europe-west4 region.

    If there isn't enough capacity in the available reservations, or if no suitable reservations are found, the system provisions on-demand Compute Engine VMs to meet the resource requirements.
  - SPECIFIC_RESERVATION: the Vertex AI model deployment consumes VMs only from the reservation that you specify in the RESERVATION_AFFINITY_VALUES value. This reservation must be configured for specifically targeted consumption. Deployment fails if the specified reservation doesn't have sufficient capacity.
- RESERVATION_AFFINITY_KEY: the string compute.googleapis.com/reservation-name. You must specify this option when the RESERVATION_AFFINITY_TYPE value is SPECIFIC_RESERVATION.
- RESERVATION_AFFINITY_VALUES: an ARRAY<STRING> value that specifies the full resource name of the Compute Engine reservation, in the following format:

  projects/myproject/zones/reservation_zone/reservations/reservation_name

  For example, RESERVATION_AFFINITY_VALUES = ['projects/myProject/zones/us-central1-a/reservations/myReservationName'].

  You can get the reservation name and zone from the Reservations page of the Google Cloud console. For more information, see View reservations.

  You must specify this option when the RESERVATION_AFFINITY_TYPE value is SPECIFIC_RESERVATION.
- ENDPOINT_IDLE_TTL: an INTERVAL value that specifies the duration of inactivity after which the open model is automatically undeployed from the Vertex AI endpoint.

  To enable automatic undeployment, specify an interval literal value between 390 minutes (6.5 hours) and 7 days. For example, specify INTERVAL 8 HOUR to have the model undeployed after 8 hours of idleness. The default value is 390 minutes (6.5 hours).

  Model inactivity is defined as the amount of time that has passed since any of the following operations were performed on the model:

  - Running the CREATE MODEL statement.
  - Running the ALTER MODEL statement with the DEPLOY_MODEL argument set to TRUE.
  - Sending an inference request to the model endpoint, for example, by running the ML.GENERATE_EMBEDDING or ML.GENERATE_TEXT function.

  Each of these operations resets the inactivity timer to zero. The reset is triggered at the start of the BigQuery job that performs the operation.

  After the model is undeployed, inference requests sent to the model return an error. The BigQuery model object remains unchanged, including model metadata. To use the model for inference again, you must redeploy it by running the ALTER MODEL statement on the model and setting the DEPLOY_MODEL option to TRUE.
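As a concrete illustration, the following sketch creates an open embedding model with an 8-hour idle TTL and later redeploys it. The project, dataset, and model names are hypothetical, and the HUGGING_FACE_MODEL_ID placeholder must be replaced with a model ID from the supported open models list:

CREATE OR REPLACE MODEL `myproject.mydataset.open_embedding_model`
  REMOTE WITH CONNECTION DEFAULT
  OPTIONS (
    -- Hypothetical placeholder; substitute a supported Hugging Face embedding model ID.
    HUGGING_FACE_MODEL_ID = 'provider_name/model_name',
    ENDPOINT_IDLE_TTL = INTERVAL 8 HOUR
  );

-- If the model is undeployed after being idle, redeploy it before running inference:
ALTER MODEL `myproject.mydataset.open_embedding_model`
  SET OPTIONS (DEPLOY_MODEL = TRUE);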
Deployed open models
1. In the Google Cloud console, go to the BigQuery page.
2. Using the SQL editor, create a remote model:

   CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
     REMOTE WITH CONNECTION {DEFAULT | `PROJECT_ID.REGION.CONNECTION_ID`}
     OPTIONS (
       ENDPOINT = 'https://ENDPOINT_REGION-aiplatform.googleapis.com/v1/projects/ENDPOINT_PROJECT_ID/locations/ENDPOINT_REGION/endpoints/ENDPOINT_ID'
     );
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
- MODEL_NAME: the name of the model.
- REGION: the region used by the connection.
- CONNECTION_ID: the ID of your BigQuery connection.

  You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.
- ENDPOINT_REGION: the region in which the open model is deployed.
- ENDPOINT_PROJECT_ID: the project in which the open model is deployed.
- ENDPOINT_ID: the ID of the HTTPS endpoint used by the open model. You can get the endpoint ID by locating the open model on the Online prediction page and copying the value in the ID field.
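As an illustration, a statement like the following references an open model deployed in us-central1; the project, connection, and endpoint ID values are hypothetical:

CREATE OR REPLACE MODEL `myproject.mydataset.deployed_embedding_model`
  REMOTE WITH CONNECTION `myproject.us.my_connection`
  OPTIONS (
    ENDPOINT = 'https://us-central1-aiplatform.googleapis.com/v1/projects/myproject/locations/us-central1/endpoints/1234567890'
  );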
All other models
1. In the Google Cloud console, go to the BigQuery page.
2. Using the SQL editor, create a remote model:

   CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
     REMOTE WITH CONNECTION {DEFAULT | `PROJECT_ID.REGION.CONNECTION_ID`}
     OPTIONS (ENDPOINT = 'ENDPOINT');
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
- MODEL_NAME: the name of the model.
- REGION: the region used by the connection.
- CONNECTION_ID: the ID of your BigQuery connection.

  You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.
- ENDPOINT: the name of an embedding model to use. For more information, see ENDPOINT.

  The Vertex AI model that you specify must be available in the location where you are creating the remote model. For more information, see Locations.
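For example, the following sketch creates a remote model over the gemini-embedding-001 model; the project, dataset, and connection names are hypothetical:

CREATE OR REPLACE MODEL `myproject.mydataset.embedding_model`
  REMOTE WITH CONNECTION `myproject.us.my_connection`
  OPTIONS (ENDPOINT = 'gemini-embedding-001');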
Generate text embeddings
Generate text embeddings with the
ML.GENERATE_EMBEDDING
function
by using text data from a table column or a query.
Typically, you would use a text embedding model for text-only use cases, and a multimodal embedding model for cross-modal search use cases, where embeddings for text and visual content are generated in the same semantic space.
Vertex AI text
Generate text embeddings by using a remote model over a Vertex AI text embedding model:
SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (CONTENT_QUERY)},
  STRUCT(
    FLATTEN_JSON AS flatten_json_output,
    TASK_TYPE AS task_type,
    OUTPUT_DIMENSIONALITY AS output_dimensionality
  )
);
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the remote model over an embedding model.
- TABLE_NAME: the name of the table that contains the text to embed. This table must have a column that's named content, or you can use an alias to use a differently named column.
- CONTENT_QUERY: a query whose result contains a STRING column called content.
- FLATTEN_JSON: a BOOL value that indicates whether to parse the embedding into a separate column. The default value is TRUE.
- TASK_TYPE: a STRING literal that specifies the intended downstream application to help the model produce better quality embeddings. TASK_TYPE accepts the following values:

  - RETRIEVAL_QUERY: specifies that the given text is a query in a search or retrieval setting.
  - RETRIEVAL_DOCUMENT: specifies that the given text is a document in a search or retrieval setting.

    When using this task type, it is helpful to include the document title in the query statement in order to improve embedding quality. The document title must be in a column either named title or aliased as title, for example:

    SELECT *
    FROM ML.GENERATE_EMBEDDING(
      MODEL `mydataset.embedding_model`,
      (SELECT abstract AS content, header AS title, publication_number FROM `mydataset.publications`),
      STRUCT(TRUE AS flatten_json_output, 'RETRIEVAL_DOCUMENT' AS task_type)
    );

    Specifying the title column in the input query populates the title field of the request body sent to the model. If you specify a title value when using any other task type, that input is ignored and has no effect on the embedding results.
  - SEMANTIC_SIMILARITY: specifies that the given text will be used for Semantic Textual Similarity (STS).
  - CLASSIFICATION: specifies that the embeddings will be used for classification.
  - CLUSTERING: specifies that the embeddings will be used for clustering.
  - QUESTION_ANSWERING: specifies that the embeddings will be used for question answering.
  - FACT_VERIFICATION: specifies that the embeddings will be used for fact verification.
  - CODE_RETRIEVAL_QUERY: specifies that the embeddings will be used for code retrieval.
- OUTPUT_DIMENSIONALITY: an INT64 value that specifies the number of dimensions to use when generating embeddings. For example, if you specify 256 AS output_dimensionality, then the ml_generate_embedding_result output column contains a 256-dimensional embedding for each input value.

  For remote models over gemini-embedding-001 models, the OUTPUT_DIMENSIONALITY value must be in the range [1, 3072]. The default value is 3072. For remote models over text-embedding or text-multilingual-embedding models, the OUTPUT_DIMENSIONALITY value must be in the range [1, 768]. The default value is 768.

  If you are using a remote model over a text-embedding model, the text-embedding model version must be text-embedding-004 or later. If you are using a remote model over a text-multilingual-embedding model, the text-multilingual-embedding model version must be text-multilingual-embedding-002 or later.
Example: embed text in a table
The following example shows a request to embed the content
column
of the text_data
table:
SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.embedding_model`,
  TABLE mydataset.text_data,
  STRUCT(TRUE AS flatten_json_output, 'CLASSIFICATION' AS task_type)
);
Open text
Generate text embeddings by using a remote model over an open embedding model:
SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (CONTENT_QUERY)},
  STRUCT(FLATTEN_JSON AS flatten_json_output)
);
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the remote model over an embedding model.
- TABLE_NAME: the name of the table that contains the text to embed. This table must have a column that's named content, or you can use an alias to use a differently named column.
- CONTENT_QUERY: a query whose result contains a STRING column called content.
- FLATTEN_JSON: a BOOL value that indicates whether to parse the embedding into a separate column. The default value is TRUE.
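For example, a query like the following embeds text from a differently named column by aliasing it as content; the mydataset.reviews table and its review_text column are hypothetical:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.open_embedding_model`,
  (SELECT review_text AS content FROM `mydataset.reviews`),
  STRUCT(TRUE AS flatten_json_output)
);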
Vertex AI multimodal
Generate text embeddings by using a remote model over a Vertex AI multimodal embedding model:
SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (CONTENT_QUERY)},
  STRUCT(
    FLATTEN_JSON AS flatten_json_output,
    OUTPUT_DIMENSIONALITY AS output_dimensionality
  )
);
Replace the following:

- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the remote model over a multimodalembedding@001 model.
- TABLE_NAME: the name of the table that contains the text to embed. This table must have a column that's named content, or you can use an alias to use a differently named column.
- CONTENT_QUERY: a query whose result contains a STRING column called content.
- FLATTEN_JSON: a BOOL value that indicates whether to parse the embedding into a separate column. The default value is TRUE.
- OUTPUT_DIMENSIONALITY: an INT64 value that specifies the number of dimensions to use when generating embeddings. Valid values are 128, 256, 512, and 1408. The default value is 1408. For example, if you specify 256 AS output_dimensionality, then the ml_generate_embedding_result output column contains a 256-dimensional embedding for each input value.
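For example, the following sketch generates 512-dimensional text embeddings; the model and table names are hypothetical:

SELECT *
FROM ML.GENERATE_EMBEDDING(
  MODEL `mydataset.multimodal_embedding_model`,
  TABLE mydataset.text_data,
  STRUCT(TRUE AS flatten_json_output, 512 AS output_dimensionality)
);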
Example: use embeddings to rank semantic similarity
The following example embeds a collection of movie reviews and orders them by
cosine distance to the review "This movie was average" using the VECTOR_SEARCH
function.
A smaller distance indicates more semantic similarity.
For more information about vector search and vector index, see Introduction to vector search.
CREATE TEMPORARY TABLE movie_review_embeddings AS (
  SELECT *
  FROM ML.GENERATE_EMBEDDING(
    MODEL `bqml_tutorial.embedding_model`,
    (
      SELECT "This movie was fantastic" AS content
      UNION ALL
      SELECT "This was the best movie I've ever seen!!" AS content
      UNION ALL
      SELECT "This movie was just okay..." AS content
      UNION ALL
      SELECT "This movie was terrible." AS content
    ),
    STRUCT(TRUE AS flatten_json_output)
  )
);

WITH average_review_embedding AS (
  SELECT ml_generate_embedding_result
  FROM ML.GENERATE_EMBEDDING(
    MODEL `bqml_tutorial.embedding_model`,
    (SELECT "This movie was average" AS content),
    STRUCT(TRUE AS flatten_json_output)
  )
)
SELECT
  base.content AS content,
  distance AS distance_to_average_review
FROM
  VECTOR_SEARCH(
    TABLE movie_review_embeddings,
    "ml_generate_embedding_result",
    (SELECT ml_generate_embedding_result FROM average_review_embedding),
    distance_type => "COSINE",
    top_k => -1
  )
ORDER BY distance_to_average_review;
The result is the following:
+------------------------------------------+----------------------------+
| content                                  | distance_to_average_review |
+------------------------------------------+----------------------------+
| This movie was just okay...              | 0.062789813467745592       |
| This movie was fantastic                 | 0.18579561313064263        |
| This movie was terrible.                 | 0.35707466240930985        |
| This was the best movie I've ever seen!! | 0.41844932504542975        |
+------------------------------------------+----------------------------+
What's next
- Learn how to use text and image embeddings to perform a text-to-image semantic search.
- Learn how to use text embeddings for semantic search and retrieval-augmented generation (RAG).