Generate text by using the ML.GENERATE_TEXT function
This document shows you how to create a BigQuery ML
remote model
that represents a Vertex AI model, and then use that remote model
with the
ML.GENERATE_TEXT
function
to generate text.
The following types of remote models are supported:
- Remote models over any of the generally available or preview Gemini models.
- Remote models over Anthropic Claude models.
- Remote models over Llama models.
- Remote models over Mistral AI models.
- Remote models over supported open models.
Depending on the Vertex AI model that you choose, you can generate text based on unstructured data input from object tables or text input from standard tables.
Required roles
To create a remote model and generate text, you need the following Identity and Access Management (IAM) roles:
- Create and use BigQuery datasets, tables, and models: BigQuery Data Editor (roles/bigquery.dataEditor) on your project.
- Create, delegate, and use BigQuery connections: BigQuery Connections Admin (roles/bigquery.connectionsAdmin) on your project.
  If you don't have a default connection configured, you can create and set one as part of running the CREATE MODEL statement. To do so, you must have BigQuery Admin (roles/bigquery.admin) on your project. For more information, see Configure the default connection.
- Grant permissions to the connection's service account: Project IAM Admin (roles/resourcemanager.projectIamAdmin) on the project that contains the Vertex AI endpoint. For remote models that you create by specifying the model name as an endpoint, this is the current project. For remote models that you create by specifying a URL as an endpoint, this is the project identified in the URL.
  If you use the remote model to analyze unstructured data from an object table, and the Cloud Storage bucket that you use in the object table is in a different project than your Vertex AI endpoint, you must also have Storage Admin (roles/storage.admin) on the Cloud Storage bucket used by the object table.
- Create BigQuery jobs: BigQuery Job User (roles/bigquery.jobUser) on your project.
These predefined roles contain the permissions required to perform the tasks in this document. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
- Create a dataset: bigquery.datasets.create
- Create, delegate, and use a connection: bigquery.connections.*
- Set service account permissions: resourcemanager.projects.getIamPolicy and resourcemanager.projects.setIamPolicy
- Create a model and run inference:
  - bigquery.jobs.create
  - bigquery.models.create
  - bigquery.models.getData
  - bigquery.models.updateData
  - bigquery.models.updateMetadata
You might also be able to get these permissions with custom roles or other predefined roles.
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
  Roles required to select or create a project:
  - Select a project: Selecting a project doesn't require a specific IAM role; you can select any project that you've been granted a role on.
  - Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
- Verify that billing is enabled for your Google Cloud project.
- Enable the BigQuery, BigQuery Connection, and Vertex AI APIs.
  Roles required to enable APIs: To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
Create a dataset
Create a BigQuery dataset to contain your resources:
Console
- In the Google Cloud console, go to the BigQuery page.
- In the left pane, click Explorer. If you don't see the left pane, click Expand left pane to open the pane.
- In the Explorer pane, click your project name.
- Click View actions > Create dataset.
- On the Create dataset page, do the following:
  - For Dataset ID, type a name for the dataset.
  - For Location type, select Region or Multi-region.
    - If you selected Region, then select a location from the Region list.
    - If you selected Multi-region, then select US or Europe from the Multi-region list.
  - Click Create dataset.
bq
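In a command-line environment, you can create the dataset with the bq tool instead. For example, a command similar to the following creates a dataset in the US multi-region; the location and dataset name are placeholders to replace with your own values:

```
bq --location=US mk --dataset PROJECT_ID:DATASET_ID
```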
Create a connection
You can skip this step if you have a default connection configured or if you have the BigQuery Admin role.
Create a Cloud resource connection for the remote model to use, and get the connection's service account. Create the connection in the same location as the dataset that you created in the previous step.
Select one of the following options:
Console
- Go to the BigQuery page.
- In the Explorer pane, click Add data. The Add data dialog opens.
- In the Filter By pane, in the Data Source Type section, select Business Applications.
  Alternatively, in the Search for data sources field, you can enter Vertex AI.
- In the Featured data sources section, click Vertex AI.
- Click the Vertex AI Models: BigQuery Federation solution card.
- In the Connection type list, select Vertex AI remote models, remote functions, BigLake and Spanner (Cloud Resource).
- In the Connection ID field, enter a name for your connection.
- Click Create connection.
- Click Go to connection.
- In the Connection info pane, copy the service account ID for use in a later step.
bq
In a command-line environment, create a connection:
bq mk --connection --location=REGION --project_id=PROJECT_ID --connection_type=CLOUD_RESOURCE CONNECTION_ID
The --project_id parameter overrides the default project.
Replace the following:
- REGION: your connection region
- PROJECT_ID: your Google Cloud project ID
- CONNECTION_ID: an ID for your connection
When you create a connection resource, BigQuery creates a unique system service account and associates it with the connection.
Troubleshooting: If you get the following connection error, update the Google Cloud SDK:
Flags parsing error: flag --connection_type=CLOUD_RESOURCE: value should be one of...
Retrieve and copy the service account ID for use in a later step:
bq show --connection PROJECT_ID.REGION.CONNECTION_ID
The output is similar to the following:

    name                       properties
    1234.REGION.CONNECTION_ID  {"serviceAccountId": "connection-1234-9u56h9@gcp-sa-bigquery-condel.iam.gserviceaccount.com"}
Terraform
Use the
google_bigquery_connection
resource.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
The following example creates a Cloud resource connection named
my_cloud_resource_connection
in the US
region:
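A minimal configuration looks similar to the following sketch; verify the attributes against the google_bigquery_connection resource reference before you apply it:

```hcl
resource "google_bigquery_connection" "default" {
  connection_id = "my_cloud_resource_connection"
  location      = "US"

  # A Cloud resource connection; BigQuery creates the associated
  # service account automatically.
  cloud_resource {}
}
```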
To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell
- Launch Cloud Shell.
- Set the default Google Cloud project where you want to apply your Terraform configurations. You only need to run this command once per project, and you can run it in any directory.
  export GOOGLE_CLOUD_PROJECT=PROJECT_ID
  Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).
- In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension, for example main.tf. In this tutorial, the file is referred to as main.tf.
  mkdir DIRECTORY && cd DIRECTORY && touch main.tf
- If you are following a tutorial, you can copy the sample code in each section or step. Copy the sample code into the newly created main.tf. Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
- Review and modify the sample parameters to apply to your environment.
- Save your changes.
- Initialize Terraform. You only need to do this once per directory.
  terraform init
  Optionally, to use the latest Google provider version, include the -upgrade option:
  terraform init -upgrade
Apply the changes
- Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
  terraform plan
  Make corrections to the configuration as necessary.
- Apply the Terraform configuration by running the following command and entering yes at the prompt:
  terraform apply
  Wait until Terraform displays the "Apply complete!" message.
- Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Grant a role to the remote model connection's service account
You must grant the Vertex AI User role to the service account of the connection that the remote model uses.
If you plan to specify the remote model's endpoint as a URL, for example endpoint = 'https://us-central1-aiplatform.googleapis.com/v1/projects/myproject/locations/us-central1/publishers/google/models/gemini-2.0-flash', grant this role in the same project you specify in the URL.
If you plan to specify the remote model's endpoint by using the model name, for example endpoint = 'gemini-2.0-flash', grant this role in the same project where you plan to create the remote model.
Granting the role in a different project results in the error bqcx-1234567890-wxyz@gcp-sa-bigquery-condel.iam.gserviceaccount.com does not have the permission to access resource.
To grant the Vertex AI User role, follow these steps:
Console
- Go to the IAM & Admin page.
- Click Add. The Add principals dialog opens.
- In the New principals field, enter the service account ID that you copied earlier.
- In the Select a role field, select Vertex AI, and then select Vertex AI User.
- Click Save.
gcloud
Use the
gcloud projects add-iam-policy-binding
command.
gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' --member='serviceAccount:MEMBER' --role='roles/aiplatform.user' --condition=None
Replace the following:
- PROJECT_NUMBER: your project number
- MEMBER: the service account ID that you copied earlier
Grant a role to the object table connection's service account
If you are using the remote model to generate text from object table data, grant the object table connection's service account the Vertex AI User role in the same project where you plan to create the remote model. Otherwise, you can skip this step.
To find the service account for the object table connection, follow these steps:
- Go to the BigQuery page.
- In the left pane, click Explorer. If you don't see the left pane, click Expand left pane to open the pane.
- In the Explorer pane, click Datasets, and then select a dataset that contains the object table.
- Click Overview > Tables, and then select the object table.
- In the editor pane, click the Details tab.
- Note the connection name in the Connection ID field.
- In the Explorer pane, click Connections.
- Select the connection that matches the one from the object table's Connection ID field.
- Copy the value in the Service account id field.
To grant the role, follow these steps:
Console
- Go to the IAM & Admin page.
- Click Add. The Add principals dialog opens.
- In the New principals field, enter the service account ID that you copied earlier.
- In the Select a role field, select Vertex AI, and then select Vertex AI User.
- Click Save.
gcloud
Use the
gcloud projects add-iam-policy-binding
command.
gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' --member='serviceAccount:MEMBER' --role='roles/aiplatform.user' --condition=None
Replace the following:
- PROJECT_NUMBER: your project number
- MEMBER: the service account ID that you copied earlier
Enable a partner model
This step is only required if you want to use Anthropic Claude, Llama, or Mistral AI models.
- In the Google Cloud console, go to the Vertex AI Model Garden page.
- Search or browse for the partner model that you want to use.
- Click the model card.
- On the model page, click Enable.
- Fill out the requested enablement information, and then click Next.
- In the Terms and conditions section, select the checkbox.
- Click Agree to agree to the terms and conditions and enable the model.
Choose an open model deployment method
If you are creating a remote model over a
supported open model,
you can automatically deploy the open model at the same time that
you create the remote model by specifying the Vertex AI
Model Garden or Hugging Face model ID in the CREATE MODEL
statement.
Alternatively, you can manually deploy the open model first, and then use that
open model with the remote model by specifying the model
endpoint in the CREATE MODEL
statement. For more information, see
Deploy open models.
Create a BigQuery ML remote model
Create a remote model:
New open models
In the Google Cloud console, go to the BigQuery page.
Using the SQL editor, create a remote model:

    CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
      REMOTE WITH CONNECTION {DEFAULT | `PROJECT_ID.REGION.CONNECTION_ID`}
      OPTIONS (
        {HUGGING_FACE_MODEL_ID = 'HUGGING_FACE_MODEL_ID' | MODEL_GARDEN_MODEL_NAME = 'MODEL_GARDEN_MODEL_NAME'}
        [, HUGGING_FACE_TOKEN = 'HUGGING_FACE_TOKEN' ]
        [, MACHINE_TYPE = 'MACHINE_TYPE' ]
        [, MIN_REPLICA_COUNT = MIN_REPLICA_COUNT ]
        [, MAX_REPLICA_COUNT = MAX_REPLICA_COUNT ]
        [, RESERVATION_AFFINITY_TYPE = {'NO_RESERVATION' | 'ANY_RESERVATION' | 'SPECIFIC_RESERVATION'} ]
        [, RESERVATION_AFFINITY_KEY = 'compute.googleapis.com/reservation-name' ]
        [, RESERVATION_AFFINITY_VALUES = RESERVATION_AFFINITY_VALUES ]
        [, ENDPOINT_IDLE_TTL = ENDPOINT_IDLE_TTL ]
      );
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
- MODEL_NAME: the name of the model.
- REGION: the region used by the connection.
- CONNECTION_ID: the ID of your BigQuery connection. You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.
- HUGGING_FACE_MODEL_ID: a STRING value that specifies the model ID for a supported Hugging Face model, in the format provider_name/model_name. For example, deepseek-ai/DeepSeek-R1. You can get the model ID by clicking the model name in the Hugging Face Model Hub and then copying the model ID from the top of the model card.
- MODEL_GARDEN_MODEL_NAME: a STRING value that specifies the model ID and model version of a supported Vertex AI Model Garden model, in the format publishers/publisher/models/model_name@model_version. For example, publishers/openai/models/gpt-oss@gpt-oss-120b. You can get the model ID by clicking the model card in the Vertex AI Model Garden and then copying the model ID from the Model ID field. You can get the default model version by copying it from the Version field on the model card. To see other model versions that you can use, click Deploy model and then click the Resource ID field.
- HUGGING_FACE_TOKEN: a STRING value that specifies the Hugging Face User Access Token to use. You can only specify a value for this option if you also specify a value for the HUGGING_FACE_MODEL_ID option. The token must have the read role at minimum, but tokens with a broader scope are also acceptable. This option is required when the model identified by the HUGGING_FACE_MODEL_ID value is a Hugging Face gated or private model.
  Some gated models require explicit agreement to their terms of service before access is granted. To agree to these terms, follow these steps:
  - Navigate to the model's page on the Hugging Face website.
  - Locate and review the model's terms of service. A link to the service agreement is typically found on the model card.
  - Accept the terms as prompted on the page.
- MACHINE_TYPE: a STRING value that specifies the machine type to use when deploying the model to Vertex AI. For information about supported machine types, see Machine types. If you don't specify a value for the MACHINE_TYPE option, the Vertex AI Model Garden default machine type for the model is used.
- MIN_REPLICA_COUNT: an INT64 value that specifies the minimum number of machine replicas used when deploying the model on a Vertex AI endpoint. The service increases or decreases the replica count as required by the inference load on the endpoint. The number of replicas used is never lower than the MIN_REPLICA_COUNT value and never higher than the MAX_REPLICA_COUNT value. The MIN_REPLICA_COUNT value must be in the range [1, 4096]. The default value is 1.
- MAX_REPLICA_COUNT: an INT64 value that specifies the maximum number of machine replicas used when deploying the model on a Vertex AI endpoint. The service increases or decreases the replica count as required by the inference load on the endpoint. The number of replicas used is never lower than the MIN_REPLICA_COUNT value and never higher than the MAX_REPLICA_COUNT value. The MAX_REPLICA_COUNT value must be in the range [1, 4096]. The default value is the MIN_REPLICA_COUNT value.
- RESERVATION_AFFINITY_TYPE: determines whether the deployed model uses Compute Engine reservations to provide assured virtual machine (VM) availability when serving predictions, and specifies whether the model uses VMs from all available reservations or just one specific reservation. For more information, see Compute Engine reservation affinity.
  You can only use Compute Engine reservations that are shared with Vertex AI. For more information, see Allow a reservation to be consumed.
  Supported values are as follows:
  - NO_RESERVATION: no reservation is consumed when your model is deployed to a Vertex AI endpoint. Specifying NO_RESERVATION has the same effect as not specifying a reservation affinity.
  - ANY_RESERVATION: the Vertex AI model deployment consumes virtual machines (VMs) from Compute Engine reservations that are in the current project or that are shared with the project, and that are configured for automatic consumption. Only VMs that meet the following qualifications are used:
    - They use the machine type specified by the MACHINE_TYPE value.
    - If the BigQuery dataset in which you are creating the remote model is a single region, the reservation must be in the same region. If the dataset is in the US multi-region, the reservation must be in the us-central1 region. If the dataset is in the EU multi-region, the reservation must be in the europe-west4 region.
    If there isn't enough capacity in the available reservations, or if no suitable reservations are found, the system provisions on-demand Compute Engine VMs to meet the resource requirements.
  - SPECIFIC_RESERVATION: the Vertex AI model deployment consumes VMs only from the reservation that you specify in the RESERVATION_AFFINITY_VALUES value. This reservation must be configured for specifically targeted consumption. Deployment fails if the specified reservation doesn't have sufficient capacity.
- RESERVATION_AFFINITY_KEY: the string compute.googleapis.com/reservation-name. You must specify this option when the RESERVATION_AFFINITY_TYPE value is SPECIFIC_RESERVATION.
- RESERVATION_AFFINITY_VALUES: an ARRAY<STRING> value that specifies the full resource name of the Compute Engine reservation, in the following format: projects/myproject/zones/reservation_zone/reservations/reservation_name. For example, RESERVATION_AFFINITY_VALUES = ['projects/myProject/zones/us-central1-a/reservations/myReservationName'].
  You can get the reservation name and zone from the Reservations page of the Google Cloud console. For more information, see View reservations.
  You must specify this option when the RESERVATION_AFFINITY_TYPE value is SPECIFIC_RESERVATION.
- ENDPOINT_IDLE_TTL: an INTERVAL value that specifies the duration of inactivity after which the open model is automatically undeployed from the Vertex AI endpoint.
  To enable automatic undeployment, specify an interval literal value between 390 minutes (6.5 hours) and 7 days. For example, specify INTERVAL 8 HOUR to have the model undeployed after 8 hours of idleness. The default value is 390 minutes (6.5 hours).
  Model inactivity is defined as the amount of time that has passed since any of the following operations were performed on the model:
  - Running the CREATE MODEL statement.
  - Running the ALTER MODEL statement with the DEPLOY_MODEL argument set to TRUE.
  - Sending an inference request to the model endpoint. For example, by running the ML.GENERATE_EMBEDDING or ML.GENERATE_TEXT function.
  Each of these operations resets the inactivity timer to zero. The reset is triggered at the start of the BigQuery job that performs the operation.
  After the model is undeployed, inference requests sent to the model return an error. The BigQuery model object remains unchanged, including model metadata. To use the model for inference again, you must redeploy it by running the ALTER MODEL statement on the model and setting the DEPLOY_MODEL option to TRUE.
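For example, a statement similar to the following sketch creates a remote model over the deepseek-ai/DeepSeek-R1 Hugging Face model and has Vertex AI undeploy it after 8 hours of inactivity. The dataset name, the use of the default connection, and the reliance on the default machine type are assumptions to adapt to your environment:

```sql
CREATE OR REPLACE MODEL `mydataset.deepseek_r1_model`
  REMOTE WITH CONNECTION DEFAULT
  OPTIONS (
    -- Deploy this open model from Hugging Face when the remote model is created.
    HUGGING_FACE_MODEL_ID = 'deepseek-ai/DeepSeek-R1',
    -- Automatically undeploy the model after 8 idle hours.
    ENDPOINT_IDLE_TTL = INTERVAL 8 HOUR
  );
```

If the model is later undeployed because of inactivity, a statement along these lines redeploys it before you run inference again:

```sql
ALTER MODEL `mydataset.deepseek_r1_model`
  SET OPTIONS (DEPLOY_MODEL = TRUE);
```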
Deployed open models
In the Google Cloud console, go to the BigQuery page.
Using the SQL editor, create a remote model:

    CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
      REMOTE WITH CONNECTION {DEFAULT | `PROJECT_ID.REGION.CONNECTION_ID`}
      OPTIONS (
        ENDPOINT = 'https://ENDPOINT_REGION-aiplatform.googleapis.com/v1/projects/ENDPOINT_PROJECT_ID/locations/ENDPOINT_REGION/endpoints/ENDPOINT_ID'
      );
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
- MODEL_NAME: the name of the model.
- REGION: the region used by the connection.
- CONNECTION_ID: the ID of your BigQuery connection. You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.
- ENDPOINT_REGION: the region in which the open model is deployed.
- ENDPOINT_PROJECT_ID: the project in which the open model is deployed.
- ENDPOINT_ID: the ID of the HTTPS endpoint used by the open model. You can get the endpoint ID by locating the open model on the Online prediction page and copying the value in the ID field.
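For example, a statement similar to the following creates a remote model over an open model that you have already deployed to a Vertex AI endpoint. The project, dataset, connection, region, and endpoint ID shown here are placeholders:

```sql
CREATE OR REPLACE MODEL `mydataset.deployed_open_model`
  REMOTE WITH CONNECTION `myproject.us.my_connection`
  OPTIONS (
    ENDPOINT = 'https://us-central1-aiplatform.googleapis.com/v1/projects/myproject/locations/us-central1/endpoints/1234567890123456789'
  );
```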
All other models
In the Google Cloud console, go to the BigQuery page.
Using the SQL editor, create a remote model:

    CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
      REMOTE WITH CONNECTION {DEFAULT | `PROJECT_ID.REGION.CONNECTION_ID`}
      OPTIONS (ENDPOINT = 'ENDPOINT');
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
- MODEL_NAME: the name of the model.
- REGION: the region used by the connection.
- CONNECTION_ID: the ID of your BigQuery connection. You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.
- ENDPOINT: the endpoint of the Vertex AI model to use.
  For pre-trained Vertex AI models, Claude models, and Mistral AI models, specify the name of the model. For some of these models, you can specify a particular version of the model as part of the name. For supported Gemini models, you can specify the global endpoint to improve availability.
  For Llama models, specify an OpenAI API endpoint in the format openapi/<publisher_name>/<model_name>. For example, openapi/meta/llama-3.1-405b-instruct-maas.
  For information about supported model names and versions, see ENDPOINT.
  The Vertex AI model that you specify must be available in the location where you are creating the remote model. For more information, see Locations.
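For example, a statement similar to the following creates a remote model over the gemini-2.0-flash model by specifying the model name as the endpoint. The dataset name and the use of the default connection are placeholders to adapt to your environment:

```sql
CREATE OR REPLACE MODEL `mydataset.flash_2_model`
  REMOTE WITH CONNECTION DEFAULT
  OPTIONS (ENDPOINT = 'gemini-2.0-flash');
```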
Generate text from standard table data
Generate text by using the
ML.GENERATE_TEXT
function
with prompt data from a standard table:
Gemini

    SELECT *
    FROM ML.GENERATE_TEXT(
      MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
      {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (PROMPT_QUERY)},
      STRUCT(
        TOKENS AS max_output_tokens, TEMPERATURE AS temperature,
        TOP_P AS top_p, FLATTEN_JSON AS flatten_json_output,
        STOP_SEQUENCES AS stop_sequences,
        GROUND_WITH_GOOGLE_SEARCH AS ground_with_google_search,
        SAFETY_SETTINGS AS safety_settings,
        REQUEST_TYPE AS request_type)
    );
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- PROMPT_QUERY: a query that provides the prompt data. This query must produce a column that's named prompt.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0.
  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- GROUND_WITH_GOOGLE_SEARCH: a BOOL value that determines whether the Vertex AI model uses [Grounding with Google Search](/vertex-ai/generative-ai/docs/grounding/overview#ground-public) when generating responses. Grounding lets the model use additional information from the internet when generating a response, in order to make model responses more specific and factual. When both flatten_json_output and this field are set to True, an additional ml_generate_text_grounding_result column is included in the results, providing the sources that the model used to gather additional information. The default is FALSE.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element in the struct specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used.
  Supported categories are as follows:
  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT
  Supported thresholds are as follows:
  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED
- REQUEST_TYPE: a STRING value that specifies the type of inference request to send to the Gemini model. The request type determines what quota the request uses. Valid values are as follows:
  - DEDICATED: the ML.GENERATE_TEXT function only uses Provisioned Throughput quota. The ML.GENERATE_TEXT function returns the error Provisioned throughput is not purchased or is not active if Provisioned Throughput quota isn't available.
  - SHARED: the ML.GENERATE_TEXT function only uses dynamic shared quota (DSQ), even if you have purchased Provisioned Throughput quota.
  - UNSPECIFIED: the ML.GENERATE_TEXT function uses quota as follows:
    - If you haven't purchased Provisioned Throughput quota, the ML.GENERATE_TEXT function uses DSQ quota.
    - If you have purchased Provisioned Throughput quota, the ML.GENERATE_TEXT function uses the Provisioned Throughput quota first. If requests exceed the Provisioned Throughput quota, the overflow traffic uses DSQ quota.
  The default value is UNSPECIFIED.
  For more information, see Use Vertex AI Provisioned Throughput.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.
- Parses the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT('Summarize this text', body) AS prompt FROM mydataset.articles ), STRUCT(TRUE AS flatten_json_output));
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't parse the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt FROM mydataset.input_table ), STRUCT( 100 AS max_output_tokens, FALSE AS flatten_json_output));
Example 3
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Parses the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, TABLE mydataset.prompts, STRUCT(TRUE AS flatten_json_output));
Example 4
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns a short response.
- Flattens the JSON response into separate columns.
- Retrieves and returns public web data for response grounding.
- Filters out unsafe responses by using two safety settings.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, TABLE mydataset.prompts, STRUCT( 100 AS max_output_tokens, 0.5 AS top_p, TRUE AS flatten_json_output, TRUE AS ground_with_google_search, [STRUCT('HARM_CATEGORY_HATE_SPEECH' AS category, 'BLOCK_LOW_AND_ABOVE' AS threshold), STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold)] AS safety_settings));
Example 5
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns a longer response.
- Flattens the JSON response into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.flash_2_model`, TABLE mydataset.prompts, STRUCT( 0.4 AS temperature, 8192 AS max_output_tokens, TRUE AS flatten_json_output));
Example 6
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.
- Flattens the JSON response into separate columns.
- Retrieves and returns public web data for response grounding.
- Filters out unsafe responses by using two safety settings.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT('Summarize this text', body) AS prompt FROM mydataset.articles ), STRUCT( .1 AS TEMPERATURE, TRUE AS flatten_json_output, TRUE AS ground_with_google_search, [STRUCT('HARM_CATEGORY_HATE_SPEECH' AS category, 'BLOCK_LOW_AND_ABOVE' AS threshold), STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold)] AS safety_settings));
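None of the preceding examples set the stop_sequences or request_type options. The following sketch shows one way they might be combined, assuming the same mydataset.text_model and mydataset.prompts resources as the other examples; the END_OF_ANSWER stop sequence is a hypothetical marker:

```sql
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(
    TRUE AS flatten_json_output,
    -- Remove this hypothetical marker from responses if the model emits it.
    ['END_OF_ANSWER'] AS stop_sequences,
    -- Route requests through dynamic shared quota only.
    'SHARED' AS request_type));
```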
Claude

    SELECT *
    FROM ML.GENERATE_TEXT(
      MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
      {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (PROMPT_QUERY)},
      STRUCT(
        TOKENS AS max_output_tokens, TOP_K AS top_k,
        TOP_P AS top_p, FLATTEN_JSON AS flatten_json_output)
    );
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- PROMPT_QUERY: a query that provides the prompt data. This query must produce a column that's named prompt.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.
- Parses the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT('Summarize this text', body) AS prompt FROM mydataset.articles ), STRUCT(TRUE AS flatten_json_output));
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't parse the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt FROM mydataset.input_table ), STRUCT( 100 AS max_output_tokens, FALSE AS flatten_json_output));
Example 3
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Parses the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, TABLE mydataset.prompts, STRUCT(TRUE AS flatten_json_output));
Llama

    SELECT *
    FROM ML.GENERATE_TEXT(
      MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
      {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (PROMPT_QUERY)},
      STRUCT(
        TOKENS AS max_output_tokens, TEMPERATURE AS temperature,
        TOP_P AS top_p, FLATTEN_JSON AS flatten_json_output,
        STOP_SEQUENCES AS stop_sequences)
    );
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- PROMPT_QUERY: a query that provides the prompt data. This query must produce a column that's named prompt.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0.
  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.
- Parses the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT('Summarize this text', body) AS prompt FROM mydataset.articles ), STRUCT(TRUE AS flatten_json_output));
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't parse the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt FROM mydataset.input_table ), STRUCT( 100 AS max_output_tokens, FALSE AS flatten_json_output));
Example 3
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Parses the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, TABLE mydataset.prompts, STRUCT(TRUE AS flatten_json_output));
Mistral AI

    SELECT *
    FROM ML.GENERATE_TEXT(
      MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
      {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (PROMPT_QUERY)},
      STRUCT(
        TOKENS AS max_output_tokens, TEMPERATURE AS temperature,
        TOP_P AS top_p, FLATTEN_JSON AS flatten_json_output,
        STOP_SEQUENCES AS stop_sequences)
    );
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- PROMPT_QUERY: a query that provides the prompt data. This query must produce a column that's named prompt.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0.
  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.
- Parses the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT('Summarize this text', body) AS prompt FROM mydataset.articles ), STRUCT(TRUE AS flatten_json_output));
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't parse the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt FROM mydataset.input_table ), STRUCT( 100 AS max_output_tokens, FALSE AS flatten_json_output));
Example 3
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Parses the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, TABLE mydataset.prompts, STRUCT(TRUE AS flatten_json_output));
Open models

    SELECT *
    FROM ML.GENERATE_TEXT(
      MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
      {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (PROMPT_QUERY)},
      STRUCT(
        TOKENS AS max_output_tokens, TEMPERATURE AS temperature,
        TOP_K AS top_k, TOP_P AS top_p,
        FLATTEN_JSON AS flatten_json_output)
    );
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- PROMPT_QUERY: a query that provides the prompt data. This query must produce a column that's named prompt.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. If you don't specify a value, the model determines an appropriate value.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. If you don't specify a value, the model determines an appropriate value.
  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.
- Parses the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT('Summarize this text', body) AS prompt FROM mydataset.articles ), STRUCT(TRUE AS flatten_json_output));
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.
- Doesn't parse the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, ( SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt FROM mydataset.input_table ), STRUCT( 100 AS max_output_tokens, FALSE AS flatten_json_output));
Example 3
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Parses the JSON response from the model into separate columns.
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.text_model`, TABLE mydataset.prompts, STRUCT(TRUE AS flatten_json_output));
Generate text from object table data
Generate text by using the
ML.GENERATE_TEXT
function
with a Gemini model to analyze unstructured data from an object
table. You provide the prompt data in the prompt
parameter.

    SELECT *
    FROM ML.GENERATE_TEXT(
      MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
      TABLE PROJECT_ID.DATASET_ID.TABLE_NAME,
      STRUCT(
        PROMPT AS prompt, TOKENS AS max_output_tokens,
        TEMPERATURE AS temperature, TOP_P AS top_p,
        FLATTEN_JSON AS flatten_json_output,
        STOP_SEQUENCES AS stop_sequences,
        SAFETY_SETTINGS AS safety_settings)
    );
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model. This must be a Gemini model.
- TABLE_NAME: the name of the object table that contains the content to analyze. For more information on what types of content you can analyze, see Input.
  The Cloud Storage bucket used by the object table should be in the same project where you have created the model and where you are calling the ML.GENERATE_TEXT function. If you want to call the ML.GENERATE_TEXT function in a different project than the one that contains the Cloud Storage bucket used by the object table, you must grant the Storage Admin role at the bucket level to the service-A@gcp-sa-aiplatform.iam.gserviceaccount.com service account.
- PROMPT: the prompt to use to analyze the content.
- TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. If you don't specify a value, the model determines an appropriate value.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. If you don't specify a value, the model determines an appropriate value.
  Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- FLATTEN_JSON: a BOOL value that determines whether to return the generated text and the safety attributes in separate columns. The default is FALSE.
Examples
This example translates and transcribes audio content from an object table that's named feedback:
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.audio_model`, TABLE `mydataset.feedback`, STRUCT('What is the content of this audio clip, translated into Spanish?' AS PROMPT, TRUE AS FLATTEN_JSON_OUTPUT));
This example classifies PDF content from an object table that's named invoices:
SELECT * FROM ML.GENERATE_TEXT( MODEL `mydataset.classify_model`, TABLE `mydataset.invoices`, STRUCT('Classify this document based on the invoice total, using the following categories: 0 to 100, 101 to 200, greater than 200' AS PROMPT, TRUE AS FLATTEN_JSON_OUTPUT));