The AI.GENERATE function

This document describes the AI.GENERATE function, which lets you analyze any combination of text and unstructured data. You can choose to generate text or structured output according to a custom schema that you specify. The function generates a STRUCT that contains your generated data, the full model response, and a status.

The function works by sending requests to a Vertex AI Gemini model, and then returning that model's response.

You can use the AI.GENERATE function to perform tasks such as classification and sentiment analysis.

Prompt design can strongly affect the responses returned by the model. For more information, see Introduction to prompting.

Input

Using the AI.GENERATE function, you can use the following types of input:

  • Text, which you provide as STRING literals or as the values of STRING columns.
  • Unstructured data, such as images, which you reference by passing ObjectRefRuntime values returned by the OBJ.GET_ACCESS_URL function.

When you analyze unstructured data, that data must meet the following requirements:

  • Content must be in one of the supported formats that are described in the Gemini API model mimeType parameter.
  • If you are analyzing a video, the maximum supported length is two minutes. If the video is longer than two minutes, AI.GENERATE only returns results based on the first two minutes.

Syntax

AI.GENERATE(
  [ prompt => ] 'PROMPT'
  [, endpoint => 'ENDPOINT']
  [, model_params => MODEL_PARAMS]
  [, output_schema => 'OUTPUT_SCHEMA']
  [, connection_id => 'CONNECTION']
  [, request_type => 'REQUEST_TYPE']
)

Arguments

AI.GENERATE takes the following arguments:

  • PROMPT: a STRING or STRUCT value that contains the prompt to send to the model. The prompt must be the first argument that you specify. You can provide the prompt value in the following ways:

    • Specify a STRING value. For example, 'Write a poem about birds'.
    • Specify a STRUCT value that contains one or more fields. You can use the following types of fields within the STRUCT value:

      • STRING: a string literal, or the name of a STRING column.

        String literal example:
        'Describe the city of Seattle in 15 words'

        String column name example:
        my_string_column

      • ObjectRefRuntime: an ObjectRefRuntime value returned by the OBJ.GET_ACCESS_URL function. The OBJ.GET_ACCESS_URL function takes an ObjectRef value as input, which you can provide either by specifying the name of a column that contains ObjectRef values, or by constructing an ObjectRef value.

        ObjectRefRuntime values must have the access_urls.read_url and details.gcs_metadata.content_type elements of the JSON value populated.

        Example of a function call with an ObjectRef column:
        OBJ.GET_ACCESS_URL(my_objectref_column, 'r')

        Example of a function call with a constructed ObjectRef value:
        OBJ.GET_ACCESS_URL(OBJ.MAKE_REF('gs://image.jpg', 'myconnection'), 'r')

      The function combines STRUCT fields similarly to a CONCAT operation and concatenates the fields in their specified order. The same is true for the elements of any arrays used within the struct. The following table shows some examples of STRUCT prompt values and how they are interpreted:

      Struct field types               | Struct value                                                                      | Semantic equivalent
      ---------------------------------+-----------------------------------------------------------------------------------+---------------------------------------------------------
      STRUCT<STRING, STRING, STRING>   | ('Describe the city of ', my_city_column, ' in 15 words')                        | 'Describe the city of my_city_column_value in 15 words'
      STRUCT<STRING, ObjectRefRuntime> | ('Describe the following city', OBJ.GET_ACCESS_URL(image_objectref_column, 'r')) | 'Describe the following city' image
  • ENDPOINT: a STRING value that specifies the Vertex AI endpoint to use for the model. You can specify any generally available or preview Gemini model. If you specify the model name, BigQuery ML automatically identifies and uses the full endpoint of the model. If you don't specify an ENDPOINT value, BigQuery ML selects a recent stable version of Gemini to use.

  • MODEL_PARAMS: a JSON literal that provides additional parameters to the model. The MODEL_PARAMS value must conform to the generateContent request body format. You can provide a value for any field in the request body except for the contents field; the contents field is populated with the PROMPT argument value.

  • OUTPUT_SCHEMA: a STRING value that specifies the schema of the output, in the form field_name1 data_type1, field_name2 data_type2, .... Supported data types include STRING, INT64, FLOAT64, BOOL, ARRAY, and STRUCT.

  • CONNECTION: a STRING value specifying the connection to use to communicate with the model, in the format [PROJECT_ID].LOCATION.CONNECTION_ID. For example, myproject.us.myconnection.

    If you don't specify a connection, then the query uses your end-user credentials.

    For information about configuring permissions, see Set permissions for BigQuery ML generative AI functions that call Vertex AI models.

  • REQUEST_TYPE: a STRING value that specifies the type of inference request to send to the Gemini model. The request type determines what quota the request uses. Valid values are as follows:

    • SHARED: The function only uses dynamic shared quota (DSQ).
    • DEDICATED: The function only uses Provisioned Throughput quota. The function returns an invalid query error if Provisioned Throughput quota isn't available. For more information, see Use Vertex AI Provisioned Throughput.
    • UNSPECIFIED: The function uses quota as follows:

      • If you haven't purchased Provisioned Throughput quota, the function uses DSQ quota.
      • If you have purchased Provisioned Throughput quota, the function uses the Provisioned Throughput quota first. If requests exceed the Provisioned Throughput quota, the overflow traffic uses DSQ quota.

    The default value is UNSPECIFIED.
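
For illustration, the following sketch combines the endpoint, model_params, output_schema, and request_type arguments in a single call. The mydataset.products table and its product_id and description columns are placeholders that you replace with your own data; the model_params value follows the generateContent request body format:

-- Placeholder table; replace mydataset.products with your own table.
SELECT
  product_id,
  AI.GENERATE(
    ('Suggest a one-word category and a short summary for this product: ', description),
    endpoint => 'gemini-2.5-flash',
    model_params => JSON '{"generation_config": {"temperature": 0}}',
    output_schema => 'category STRING, summary STRING',
    request_type => 'SHARED').*
FROM mydataset.products;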

Output

AI.GENERATE returns a STRUCT value for each row in the table. The struct contains the following fields:

  • result: a STRING value containing the model's response to the prompt. The result is NULL if the request fails or is filtered by responsible AI. If you specify an output schema, the result field is replaced by fields that match your custom schema.
  • full_response: a JSON value containing the response from the projects.locations.endpoints.generateContent call to the model. The generated text is in the text element.
  • status: a STRING value that contains the API response status for the corresponding row. This value is empty if the operation was successful.
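
For example, the following query selects the result and status fields and extracts the generated text from the full_response field. The JSON path assumes the standard generateContent response layout, where the generated text is in candidates[0].content.parts[0].text:

SELECT
  city,
  output.result,
  output.status,
  -- Extract the generated text directly from the raw model response.
  JSON_VALUE(output.full_response, '$.candidates[0].content.parts[0].text') AS generated_text
FROM (
  SELECT
    city,
    AI.GENERATE(('Give a short, one sentence description of ', city)) AS output
  FROM UNNEST(['Seattle', 'Paris']) AS city
);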

Examples

The following examples assume that you have granted the Vertex AI User role to your personal account. For instructions, see Run generative AI queries with end-user credentials.

Describe cities

To generate a short description of each city, you can call the AI.GENERATE function and select the result field in the output by running the following query:

SELECT
 city,
 AI.GENERATE(("Give a short, one sentence description of ", city)).result
FROM UNNEST(["Seattle", "Beijing", "Paris", "London"]) city;

The result is similar to the following:

+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+
|  city   |                                                                           result                                                                            |
+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Seattle | Seattle is a vibrant city nestled between mountains and water, renowned for its coffee culture, tech industry, and rainy weather.                           |
| Beijing | Beijing is a vibrant metropolis where ancient history meets modern innovation, offering a captivating blend of cultural treasures and bustling urban life.  |
| Paris   | Paris is a romantic city renowned for its iconic landmarks, elegant architecture, and vibrant culture.                                                      |
| London  | London, a vibrant global metropolis brimming with history, culture, and innovation.                                                                         |
+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+
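
The introduction notes that you can use AI.GENERATE for tasks such as classification and sentiment analysis. As a sketch, assuming a hypothetical mydataset.reviews table with a review_text column, a similar call can label each review:

SELECT
  review_text,
  -- The prompt and result field work the same way as in the preceding example.
  AI.GENERATE(
    ('Classify the sentiment of the following review as positive, negative, or neutral: ',
     review_text)).result AS sentiment
FROM mydataset.reviews;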

Use structured output for entity extraction

The following query extracts information about a person from an unstructured description. The query uses the output_schema argument to set custom fields in the output:

SELECT
  AI.GENERATE(
    input,
    output_schema => '''name STRING,
                        age INT64,
                        address STRUCT<street_address STRING, city STRING, state STRING, zip_code STRING>,
                        is_married BOOL,
                        phone_number ARRAY<STRING>,
                        weight_in_pounds FLOAT64''') AS info
FROM
  (
    SELECT
      '''John Smith is a 20-year old single man living at 1234 NW 45th St, Kirkland WA, 98033.
           He has two phone numbers 123-123-1234, and 234-234-2345. He is 200.5 pounds.'''
        AS input
  );

The result is similar to the following:

+------------+----------+-----------------------------+-------------------+-----+
| info.name  | info.age | info.address.street_address | info.address.city | ... |
+------------+----------+-----------------------------+-------------------+-----+
| John Smith | 20       | 1234 NW 45th St             | Kirkland          | ... |
+------------+----------+-----------------------------+-------------------+-----+
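
Because the output schema can include STRUCT and ARRAY fields, you can query the generated values with ordinary GoogleSQL field access and UNNEST. The following sketch assumes the address and phone_number field definitions shown in the query above:

SELECT
  info.name,
  info.address.city,
  phone
FROM
  (
    SELECT
      AI.GENERATE(
        '''John Smith is a 20-year old single man living at 1234 NW 45th St, Kirkland WA, 98033.
           He has two phone numbers 123-123-1234, and 234-234-2345.''',
        output_schema => 'name STRING, address STRUCT<street_address STRING, city STRING>, phone_number ARRAY<STRING>') AS info
  ),
  -- Flatten the repeated phone_number field into one row per phone number.
  UNNEST(info.phone_number) AS phone;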

Process images in a Cloud Storage bucket

The following query creates an external table from images of pet products stored in a publicly available Cloud Storage bucket:

CREATE SCHEMA IF NOT EXISTS bqml_tutorial;

CREATE OR REPLACE EXTERNAL TABLE bqml_tutorial.product_images
  WITH CONNECTION DEFAULT OPTIONS (
    object_metadata = 'SIMPLE',
    uris = ['gs://cloud-samples-data/bigquery/tutorials/cymbal-pets/images/*.png']);

You can use AI.GENERATE to describe images and identify what's in them. To do that, construct your prompt from a natural language instruction and an ObjectRefRuntime value for the image. The following query asks Gemini what each image shows. It specifies the output_schema argument to structure the results into one column that lists the entities in the image and another column that describes the image.

SELECT
  uri,
  STRING(OBJ.GET_ACCESS_URL(ref,'r').access_urls.read_url) AS signed_url,
  AI.GENERATE(
    ("What is this: ", OBJ.GET_ACCESS_URL(ref, 'r')),
    output_schema =>
      "image_description STRING, entities_in_the_image ARRAY<STRING>").*
FROM bqml_tutorial.product_images
LIMIT 3;

The result is similar to the following: each row contains the image URI, a signed URL for the image, the generated image_description value, and the entities_in_the_image list.

Use grounding with Google Search

The following query shows how to set the model_params argument to use Google Search grounding for the request. You can only use Google Search grounding with Gemini 2.0 or later models.

SELECT
  name,
  AI.GENERATE(
    ('Please check the weather of ', name, ' for today.'),
    model_params => JSON '{"tools": [{"googleSearch": {}}]}'
  )
FROM UNNEST(['Seattle', 'NYC', 'Austin']) AS name;

Set the thinking budget for a Gemini 2.5 Flash model

The following query shows how to set the model_params argument to set the model's thinking budget to 0 for the request:

SELECT
  AI.GENERATE(
    ('What is the capital of Monaco?'),
    endpoint => 'gemini-2.5-flash',
    model_params => JSON '{"generation_config":{"thinking_config": {"thinking_budget": 0}}}');

Best practices

This function passes your input to a Gemini model and incurs Vertex AI charges each time it's called. For information about how to view these charges, see Track costs. To minimize Vertex AI charges when you use AI.GENERATE on data that you filter with a WHERE clause, materialize the filtered data to a table first. For example, the first of the following queries is preferable to the second:

CREATE TABLE mydataset.cities
AS (
  SELECT city_name from mydataset.customers WHERE...
);

SELECT
  city,
  AI.GENERATE(
    ('Give a short, one sentence description of ', city)).result
FROM mydataset.cities;

SELECT
  city,
  AI.GENERATE(
    ('Give a short, one sentence description of ', city)).result
FROM (SELECT city_name from mydataset.customers WHERE...);

Writing the filtered results to a table first helps ensure that you send as few rows as possible to the model.

Use Vertex AI Provisioned Throughput

You can use Vertex AI Provisioned Throughput with the AI.GENERATE function to provide consistent high throughput for requests. To use Provisioned Throughput, the model that you specify in the endpoint argument of the AI.GENERATE function must be a supported Gemini model.

To use Provisioned Throughput, calculate your Provisioned Throughput requirements and then purchase Provisioned Throughput quota before running the AI.GENERATE function. When you purchase Provisioned Throughput, do the following:

  • For Model, select the same Gemini model as the one that you specify in the endpoint argument of the AI.GENERATE function.
  • For Region, select the same region as the dataset that contains the data that you analyze with the AI.GENERATE function, with the following exceptions:

    • If the dataset is in the US multi-region, select the us-central1 region.
    • If the dataset is in the EU multi-region, select the europe-west4 region.

After you submit the order, wait for the order to be approved and appear on the Orders page.

After you have purchased Provisioned Throughput quota, use the REQUEST_TYPE argument to determine how the AI.GENERATE function uses the quota.
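
For example, the following query uses only Provisioned Throughput quota and returns an invalid query error if that quota isn't available for the specified model. The mydataset.tickets table and ticket_text column are placeholders:

SELECT
  AI.GENERATE(
    ('Summarize this support ticket in one sentence: ', ticket_text),
    endpoint => 'gemini-2.5-flash',
    -- DEDICATED restricts the request to Provisioned Throughput quota only.
    request_type => 'DEDICATED').result
FROM mydataset.tickets;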

Locations

You can run AI.GENERATE in all of the regions that support Gemini models, and also in the US and EU multi-regions.

Quotas

See Vertex AI and Cloud AI service functions quotas and limits.

What's next