Continuous tuning lets you continue tuning an already tuned model or model checkpoint by adding more epochs or training examples. Using an already tuned model or checkpoint as a base model allows for more efficient tuning experimentation.
You can use continuous tuning for the following purposes:
- To tune with more data if an existing tuned model is underfitting.
- To boost performance or keep the model up to date with new data.
- To further customize an existing tuned model.
The following Gemini models support continuous tuning:
For detailed information about Gemini model versions, see Google models and Model versions and lifecycle.
Configure continuous tuning
When creating a continuous tuning job, note the following:
- Continuous tuning is supported in the Google Gen AI SDK. It isn't supported in the Vertex AI SDK for Python.
You must provide a model resource name:
- In the Google Cloud console, the model resource name appears in the Vertex AI Tuning page, in the Tuning details > Model Name field.
- The model resource name uses the following format:
projects/{project}/locations/{location}/models/{modelId}@{version_id}
{version_id} is optional and can be either the generated version ID or a user-provided version alias. If you don't specify a model version, the default version is used. For example resource names, see the list after these notes.
If you're using a checkpoint as a base model and don't specify a checkpoint ID, the default checkpoint is used. For more information, see Use checkpoints in supervised fine-tuning for Gemini models. In the Google Cloud console, the default checkpoint can be found as follows:
- Go to the Model Registry page.
- Click the Model Name for the model.
- Click View all versions.
- Click the desired version to view a list of checkpoints. The default checkpoint is indicated by the word default next to the checkpoint ID.
By default, a new model version is created under the same parent model as the pre-tuned model. If you supply a new tuned model display name, a new model is created.
Only models tuned with supervised fine-tuning on or after July 11, 2025 can be used as base models for continuous tuning.
If you're using customer-managed encryption keys (CMEK), your continuous tuning job must use the same CMEK that was used in the tuning job for the pre-tuned model.
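For example, with a hypothetical project, location, and model ID, the following resource names are all valid:
- projects/my-project/locations/us-central1/models/1234567890@1 pins the tuning job to the generated version ID 1.
- projects/my-project/locations/us-central1/models/1234567890@my-alias pins it to the user-provided version alias my-alias.
- projects/my-project/locations/us-central1/models/1234567890 omits the version, so the default version is used.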
Console
To configure continuous tuning for a pre-tuned model by using the Google Cloud console, perform the following steps:
In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.
Click Create tuned model.
Under Model details, configure the following:
- Choose Tune a pre-tuned model.
- In the Pre-tuned model field, choose the name of your pre-tuned model.
- If the model has at least one checkpoint, the Checkpoint drop-down field appears. Choose the desired checkpoint.
Click Continue.
REST
To configure continuous tuning, send a POST request by using the
tuningJobs.create
method. Some of the parameters are not supported by all of the models. Ensure
that you include only the applicable parameters for the model that you're
tuning.
Before using any of the request data, make the following replacements:
- Parameters for continuous tuning:
- TUNED_MODEL_NAME: The model resource name of the pre-tuned model to use as the base model.
- CHECKPOINT_ID: Optional. The ID of the checkpoint to use.
- The remaining parameters are the same as for supervised fine-tuning or preference tuning.
HTTP method and URL:
POST https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs
Request JSON body:
{
  "preTunedModel": {
    "tunedModelName": "TUNED_MODEL_NAME",
    "checkpointId": "CHECKPOINT_ID"
  },
  "supervisedTuningSpec": {
    "trainingDatasetUri": "TRAINING_DATASET_URI",
    "validationDatasetUri": "VALIDATION_DATASET_URI",
    "hyperParameters": {
      "epochCount": EPOCH_COUNT,
      "adapterSize": "ADAPTER_SIZE",
      "learningRateMultiplier": "LEARNING_RATE_MULTIPLIER"
    },
    "exportLastCheckpointOnly": EXPORT_LAST_CHECKPOINT_ONLY,
    "evaluationConfig": {
      "metrics": [
        {
          "aggregation_metrics": ["AVERAGE", "STANDARD_DEVIATION"],
          "METRIC_SPEC": {
            "METRIC_SPEC_FIELD_NAME": METRIC_SPEC_FIELD_CONTENT
          }
        }
      ],
      "outputConfig": {
        "gcs_destination": {
          "output_uri_prefix": "CLOUD_STORAGE_BUCKET"
        }
      }
    }
  },
  "tunedModelDisplayName": "TUNED_MODEL_DISPLAYNAME",
  "encryptionSpec": {
    "kmsKeyName": "KMS_KEY_NAME"
  },
  "serviceAccount": "SERVICE_ACCOUNT"
}
To send your request, choose one of these options:
curl
Save the request body in a file named request.json,
and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs"
PowerShell
Save the request body in a file named request.json,
and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs" | Select-Object -Expand Content
You should receive a JSON response that describes the new tuning job.
Example curl command
PROJECT_ID=myproject
LOCATION=global
curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
"https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/tuningJobs" \
-d \
$'{
  "preTunedModel": {
    "tunedModelName": "projects/myproject/locations/global/models/TUNED_MODEL_ID"
  },
  "supervisedTuningSpec": {
    "trainingDatasetUri": "gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_train_data.jsonl",
    "validationDatasetUri": "gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_validation_data.jsonl"
  },
  "tunedModelDisplayName": "tuned_gemini"
}'
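The create call returns the tuning job's resource name in its name field. To check the job's status afterward, you can send a GET request to that resource name by using the tuningJobs.get method, as in the following sketch. It reuses the PROJECT_ID and LOCATION values from the previous command, and TUNING_JOB_ID is a placeholder for the job ID that the create call returns.
curl \
  -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/tuningJobs/TUNING_JOB_ID"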
Google Gen AI SDK
The following example shows how to configure continuous tuning by using the Google Gen AI SDK.
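The sketch below is a minimal illustration rather than an official sample. The pre-tuned model reference it passes (types.PreTunedModel with tuned_model_name and checkpoint_id) is a hypothetical mapping of the REST preTunedModel, tunedModelName, and checkpointId fields shown earlier, so confirm the exact parameter names in the Google Gen AI SDK reference before using it. Replace the uppercase placeholders with your own values.
# Minimal sketch, not an official sample: the pre-tuned model reference
# (types.PreTunedModel and its field names) is an assumption that mirrors the
# preTunedModel, tunedModelName, and checkpointId fields of the REST request
# above. Verify the exact names against the Google Gen AI SDK reference.
from google import genai
from google.genai import types

# Client that targets Vertex AI in your project and tuning region.
client = genai.Client(
    vertexai=True,
    project="PROJECT_ID",
    location="TUNING_JOB_REGION",
)

tuning_job = client.tunings.tune(
    # Assumption: pass a reference to the pre-tuned model (and, optionally,
    # a checkpoint) instead of a base Gemini model name.
    base_model=types.PreTunedModel(
        tuned_model_name="TUNED_MODEL_NAME",
        checkpoint_id="CHECKPOINT_ID",
    ),
    training_dataset=types.TuningDataset(
        gcs_uri="TRAINING_DATASET_URI",
    ),
    config=types.CreateTuningJobConfig(
        epoch_count=1,  # Replace with the number of additional epochs you want.
        tuned_model_display_name="TUNED_MODEL_DISPLAYNAME",
    ),
)

# The create call returns a TuningJob; its name can be used to poll for status.
print(tuning_job.name)
print(tuning_job.state)

# Optionally, fetch the job later to check whether tuning has finished.
refreshed_job = client.tunings.get(name=tuning_job.name)
print(refreshed_job.state)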