The ALTER MODEL statement

To update a model in BigQuery, use the BigQuery ML ALTER MODEL statement.

For more information about the SQL statements and functions that each model type supports, see the BigQuery ML documentation.

ALTER MODEL syntax

ALTER MODEL [IF EXISTS]
`PROJECT_ID.DATASET.MODEL`
SET OPTIONS
  (
    [vertex_ai_model_id = VERTEX_MODEL_ID]
    [, expiration_timestamp = EXPIRATION_TIMESTAMP]
    [, kms_key_name = KMS_KEY_NAME]
    [, description = DESCRIPTION]
    [, labels = LABELS]
    [, deploy_model = DEPLOY_MODEL]
    [, endpoint_idle_ttl = ENDPOINT_IDLE_TTL]
  );
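
For example, a statement that updates only a model's description might look like the following (the project, dataset, and model names are placeholders):

```sql
ALTER MODEL IF EXISTS `myproject.mydataset.mymodel`
SET OPTIONS (
  description = 'Customer churn model, retrained weekly'
);
```

Options that you don't set keep their current values.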

Arguments

  • PROJECT_ID: the project that contains the resource.
  • DATASET: the dataset that contains the resource.
  • MODEL: the name of the model that you're altering. The model name must be unique in the dataset: no other model or table can have the same name. The model name must follow the same naming rules as a BigQuery table. A model name can:

    • Contain up to 1,024 characters
    • Contain letters (upper or lower case), numbers, and underscores

    MODEL is case-sensitive.

    If you don't have a default project configured, then you must prepend the project ID to the model name in the following format, including backticks:

    `[PROJECT_ID].[DATASET].[MODEL]`

    For example, `myproject.mydataset.mymodel`.

  • VERTEX_MODEL_ID: a STRING value that specifies the Vertex AI model ID to register the model with. To learn more, see Register an existing BigQuery ML model to the Model Registry.

  • EXPIRATION_TIMESTAMP: a TIMESTAMP value that specifies when this model expires. If the model is an open model that BigQuery manages in Vertex AI, all Vertex AI resources associated with the model are deleted when the model expires.
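
    As an illustration, you could set a model to expire 30 days from the time you run the statement (names are placeholders):

    ```sql
    ALTER MODEL `myproject.mydataset.mymodel`
    SET OPTIONS (
      expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    );
    ```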

  • KMS_KEY_NAME: a STRING value that specifies the name of the Cloud KMS key used to encrypt the model.

  • DESCRIPTION: a STRING value that provides a description of the model.

  • LABELS: an ARRAY<STRUCT<STRING, STRING>> value that specifies any labels for the model as key-value pairs.
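
    For example, the following statement replaces the model's labels with two key-value pairs (the label keys and values shown are illustrative):

    ```sql
    ALTER MODEL `myproject.mydataset.mymodel`
    SET OPTIONS (
      labels = [('team', 'analytics'), ('env', 'prod')]
    );
    ```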

  • DEPLOY_MODEL: a BOOL value that determines the model's deployment status in Vertex AI. You can use this option to control costs by undeploying or redeploying the model as needed. We recommend undeploying an unused Vertex AI endpoint, because otherwise the endpoint continues to generate charges for the compute resources that it uses, even when it is idle. For more information on Vertex AI compute pricing, see Prediction and explanation.

    You can only use this option with an open model that BigQuery manages in Vertex AI (Preview).

    If the model was previously undeployed, setting this option to TRUE redeploys the model to a Vertex AI endpoint. If the model is already deployed, this operation has no effect. Model redeployment requires a "cold start" period while the endpoint resources are provisioned. The cold start period can take up to 30 minutes, depending on the size of the model and the number of machine replicas specified for the model.

    Setting this option to FALSE undeploys the model from a Vertex AI endpoint. If the model is already undeployed, this operation has no effect.
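
    For example, to undeploy a model and stop the associated endpoint charges, you could run a statement like the following (names are placeholders):

    ```sql
    ALTER MODEL `myproject.mydataset.mymodel`
    SET OPTIONS (deploy_model = FALSE);
    ```

    Running the same statement with deploy_model = TRUE redeploys the model, subject to the cold start period described earlier.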

  • ENDPOINT_IDLE_TTL: an INTERVAL value that specifies the duration of inactivity after which the model is automatically undeployed from the Vertex AI endpoint. You can only use this option with an open model that BigQuery manages in Vertex AI (Preview).

    To enable automatic undeployment, specify an interval literal value between 390 minutes (6.5 hours) and 7 days. For example, specify INTERVAL 8 HOUR to have the model undeployed after 8 hours of idleness. The default value is 390 minutes (6.5 hours).

    Model inactivity is defined as the amount of time that has passed since the model was last created, updated, redeployed, or sent an inference request.

    Each of these operations resets the inactivity timer to zero. The reset is triggered at the start of the BigQuery job that performs the operation.

    After the model is undeployed, inference requests sent to the model return an error. The BigQuery model object remains unchanged, including model metadata. To use the model for inference again, you must redeploy it by running the ALTER MODEL statement on the model and setting the DEPLOY_MODEL option to TRUE.
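
    For example, to redeploy an undeployed model and have it automatically undeploy again after 8 hours of inactivity, you could combine the two options (names are placeholders):

    ```sql
    ALTER MODEL `myproject.mydataset.mymodel`
    SET OPTIONS (
      deploy_model = TRUE,
      endpoint_idle_ttl = INTERVAL 8 HOUR
    );
    ```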