This page describes how to create, view, list, cancel, and delete storage batch operations jobs. It also describes how to use Cloud Audit Logs with storage batch operations jobs.
Before you begin
To create and manage storage batch operations jobs, complete the steps in the following sections.
Configure Storage Intelligence
To create and manage storage batch operations jobs, configure Storage Intelligence on the bucket where you want to run the job.
Enable API
Enable the storage batch operations API.
gcloud services enable storagebatchoperations.googleapis.com
Create a manifest
If you want to use a manifest for object selection, create a manifest file. Using a manifest is one of the ways you can select objects to process in a storage batch operations job.
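A manifest is a CSV file with one row per object. As a local sketch (hypothetical bucket and object names, assuming the bucket/name/generation column layout described later on this page), you can build one like this before uploading it to Cloud Storage:

```shell
# Sketch: build a minimal manifest CSV locally. Each row names one object
# by bucket, object key, and (optionally) generation number.
cat > manifest.csv <<'EOF'
bucket,name,generation
my-bucket,reports/2024/summary.txt,1678912345678901
my-bucket,reports/2024/details.txt,1678912345678902
EOF

# Inspect the header row before uploading the file, for example with
# `gcloud storage cp manifest.csv gs://my-bucket/`.
head -n 1 manifest.csv
```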
Create a storage batch operations job
This section describes how to create a storage batch operations job.
To get the permissions that
you need to create a storage batch operations job,
ask your administrator to grant you the
Storage Admin (roles/storage.admin) IAM role on the project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
In the list of buckets, click the name of the bucket that contains the objects on which you want to perform batch operations.
The Bucket details page opens, with the Objects tab selected.
- Click Create batch operations.
- In the Select operation pane, choose the operation type:
- Manage object holds: Select Temporary hold or Event-based hold. For more information, see object holds.
- Update object metadata: To add object
metadata, do the following:
- To add custom metadata, complete the following steps:
- In the Key field, enter a key name.
- In the Value field, enter a value for that key.
- Optional: Click + Add item to add more key-value pairs.
- To update fixed-key metadata, complete the following steps:
- To expand the Update fixed-key metadata section, click the expander arrow.
- In the Select one or more metadata to update list, select metadata items to edit.
- Update/Rotate encryption key: To use or update the encryption
key for objects, do the following:
- In the Select a Cloud KMS key list, select a customer-managed encryption key (CMEK).
- Optional: Select Switch project to pick a key from another project or select Enter key manually to fill details.
- Delete objects: To delete
objects, do the following:
- Check whether Object Versioning is enabled.
If Object Versioning is enabled, choose one of the following deletion options:
- Select Delete all versions of the objects to remove both live and noncurrent versions.
- Select Permanently delete live versions to remove only the live version.
If Object Versioning is not enabled, any objects selected for deletion are permanently deleted.
- Click Next.
- In the Name operation & specify objects pane, do the following:
- In the Name field, enter a name.
- Optional: In the Description field, enter a description.
- In the Specify objects section, define a criterion to process
objects from the bucket. Choose one of the following options:
- Select all objects: Includes all objects in the bucket.
- Select objects by using prefix filters: To define the list of
objects by using prefix filters, do the following:
- In the Enter Prefixes of the objects to be included field, enter a prefix.
- Optional: Click + Add prefix to specify additional prefixes.
- Upload lists of objects using manifest CSV files: To use a
manifest file for selecting objects, do the following:
- Upload your manifest CSV file to a bucket. This file must contain headers for Bucket name, Object key, and Generation number.
- In the Select manifest file mode list, choose one of the following options:
- If you select Select a manifest file from Cloud Storage, click Browse in the Select a manifest file from Cloud Storage field. In the Select object dialog that appears, navigate to your manifest CSV file, then click Select.
- If you select Select multiple manifest files using wildcard,
enter the file path in the Enter manifest file location using
wildcard field. For example,
bucket-name/folder/manifest_*.
- Click Create.
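The manifest wildcard described above matches file names the same way shell globbing does. A local illustration with hypothetical file names:

```shell
# Create a few hypothetical manifest shards, like those BigQuery produces
# when exporting to a destination such as bucket-name/folder/manifest_*.csv.
mkdir -p folder
touch folder/manifest_000.csv folder/manifest_001.csv folder/notes.txt

# The manifest_* pattern selects only the manifest shards, not other files.
ls folder/manifest_*
```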
Command line
- In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
To set the default project, run the gcloud config set project command:

gcloud config set project PROJECT_ID

Where PROJECT_ID is the ID of your project.
Optional: Run a dry run job. Before executing any job, we recommend that you run the job in dry run mode to verify the object selection criteria and check for any errors. The dry run does not modify any objects.
In your development environment, run the gcloud storage batch-operations jobs create command with the --dry-run flag:

gcloud storage batch-operations jobs create DRY_RUN_JOB_NAME \
  --bucket=BUCKET_NAME OBJECT_SELECTION_FLAG JOB_TYPE_FLAG \
  --dry-run
The dry run uses the same parameters as the actual job. For details, see parameter descriptions.
To view the results of the dry run, see Get storage batch operations job details.
After a successful dry run, run the gcloud storage batch-operations jobs create command:

gcloud storage batch-operations jobs create JOB_NAME \
  --bucket=BUCKET_NAME OBJECT_SELECTION_FLAG JOB_TYPE_FLAG
Where the parameters are as follows:

- DRY_RUN_JOB_NAME is the name of the storage batch operations dry run job.
- JOB_NAME is the name of the storage batch operations job.
- BUCKET_NAME is the name of the bucket that contains one or more objects you want to process.
- OBJECT_SELECTION_FLAG is one of the following flags that you need to specify:
  - --included-object-prefixes: Specify one or more object prefixes. For example:
    - To match a single prefix, use --included-object-prefixes='prefix1'.
    - To match multiple prefixes, use a comma-separated prefix list: --included-object-prefixes='prefix1,prefix2'.
    - To include all objects, use an empty prefix: --included-object-prefixes=''.
  - --manifest-location: Specify the manifest location. For example, gs://bucket_name/path/object_name.csv.
- JOB_TYPE_FLAG is one of the following flags that you need to specify, depending on the job type:
  - --delete-object: Delete one or more objects. If Object Versioning is enabled for the bucket, live objects transition to a noncurrent state, and noncurrent objects are skipped. If Object Versioning is disabled for the bucket, the delete operation permanently deletes objects and skips noncurrent objects.
  - --enable-permanent-object-deletion: Permanently delete objects. Use this flag along with the --delete-object flag to permanently delete both live and noncurrent objects in a bucket, regardless of the bucket's Object Versioning configuration.
  - --rewrite-object: Update the customer-managed encryption keys for one or more objects.
  - --put-object-event-based-hold: Enable event-based object holds.
  - --no-put-object-event-based-hold: Disable event-based object holds.
  - --put-object-temporary-hold: Enable temporary object holds.
  - --no-put-object-temporary-hold: Disable temporary object holds.
  - --put-metadata: Update object metadata. Specify the key-value pair for the object metadata you want to modify. You can specify one or more key-value pairs as a list.

    The following example shows how to create a job that updates the Content-Language metadata to en for all objects listed in manifest.csv:

    gcloud storage batch-operations jobs create my-job \
      --bucket=my-bucket \
      --manifest-location=gs://my-bucket/manifest.csv \
      --put-metadata=Content-Language=en

    You can also set object retention configurations by using the --put-metadata flag. To do so, specify the retention parameters in the Retain-Until and Retention-Mode fields. For example:

    gcloud storage batch-operations jobs create my-job \
      --bucket=my-bucket \
      --manifest-location=gs://my-bucket/manifest.csv \
      --put-metadata=Retain-Until=RETAIN_UNTIL_TIME,Retention-Mode=RETENTION_MODE

    Where:

    - RETAIN_UNTIL_TIME is the date and time, in RFC 3339 format, until which the object is retained. For example, 2025-10-09T10:30:00Z. To set the retention configuration on an object, you need to enable object retention on the bucket that contains the object.
    - RETENTION_MODE is the retention mode, either Unlocked or Locked.

    When you send a request to update the RETENTION_MODE and RETAIN_UNTIL_TIME fields, consider the following:

    - To update the object retention configuration, you must provide non-empty values for both the RETENTION_MODE and RETAIN_UNTIL_TIME fields; setting only one results in an INVALID_ARGUMENT error.
    - You can extend the RETAIN_UNTIL_TIME value for objects in either Unlocked or Locked mode.
    - The object retention must be in Unlocked mode if you want to do the following:
      - Reduce the RETAIN_UNTIL_TIME value.
      - Remove the retention configuration. To remove the configuration, provide empty values for both the RETENTION_MODE and RETAIN_UNTIL_TIME fields.
    - If you omit both the RETENTION_MODE and RETAIN_UNTIL_TIME fields, the retention configuration remains unchanged.
  - --clear-all-object-custom-contexts: Delete all existing object contexts.

    The following example shows how to create a job that clears all object contexts for objects listed in manifest.csv:

    gcloud storage batch-operations jobs create my-job \
      --bucket=my-bucket \
      --manifest-location=gs://my-bucket/manifest.csv \
      --clear-all-object-custom-contexts

  - --clear-object-custom-contexts: Remove contexts with specific keys. You can also update specific contexts while removing keys by using both the --clear-object-custom-contexts flag and one of the following flags:
    - --update-object-custom-contexts: Provide a map of key-value pairs.

      The following example shows how to create a job that removes the context with key temp-id and updates or inserts contexts with keys project-id and cost-center for all objects listed in manifest.csv:

      gcloud storage batch-operations jobs create my-job \
        --bucket=my-bucket \
        --manifest-location=gs://my-bucket/manifest.csv \
        --clear-object-custom-contexts=temp-id \
        --update-object-custom-contexts=project-id=project-A,cost-center=engineering

    - --update-object-custom-contexts-file: Provide the path to a JSON or YAML file with key-value pairs.

      The following example shows how to create a job that processes objects listed in manifest.csv. The job does the following:

      - Removes all contexts with the temp-id key.
      - Updates existing contexts with the project-id and cost-center keys defined in the /tmp/context_updates.json file.

      gcloud storage batch-operations jobs create my-job \
        --bucket=my-bucket \
        --manifest-location=gs://my-bucket/manifest.csv \
        --clear-object-custom-contexts=temp-id \
        --update-object-custom-contexts-file=/tmp/context_updates.json

      Where /tmp/context_updates.json contains the following object contexts:

      {
        "project-id": {"value": "project-A"},
        "cost-center": {"value": "engineering"}
      }
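Before running the job, you can check the context-updates file locally. A sketch that writes the file in the shape shown above (a map of keys to objects with a value field) and verifies it parses as JSON:

```shell
# Write the context-updates file passed to --update-object-custom-contexts-file.
cat > context_updates.json <<'EOF'
{
  "project-id": {"value": "project-A"},
  "cost-center": {"value": "engineering"}
}
EOF

# Validate that the file is well-formed JSON before passing it to gcloud.
python3 -m json.tool context_updates.json > /dev/null && echo "valid JSON"
```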
Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
REST APIs
JSON API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Create a JSON file that contains the settings for the storage batch operations job. The following are common settings to include:

{
  "Description": "JOB_DESCRIPTION",
  "BucketList": {
    "Buckets": [
      {
        "Bucket": "BUCKET_NAME",
        "Manifest": {
          "manifest_location": "MANIFEST_LOCATION"
        },
        "PrefixList": {
          "include_object_prefixes": "OBJECT_PREFIXES"
        }
      }
    ]
  },
  "DeleteObject": {
    "permanent_object_deletion_enabled": OBJECT_DELETION_VALUE
  },
  "RewriteObject": {
    "kms_key": "KMS_KEY_VALUE"
  },
  "PutMetadata": {
    "METADATA_KEY": "METADATA_VALUE",
    ...,
    "objectRetention": {
      "retainUntilTime": "RETAIN_UNTIL_TIME",
      "mode": "RETENTION_MODE"
    }
  },
  "PutObjectHold": {
    "temporary_hold": TEMPORARY_HOLD_VALUE,
    "event_based_hold": EVENT_BASED_HOLD_VALUE
  },
  "updateObjectCustomContext": {
    "customContextUpdates": {
      "updates": {
        "CONTEXT_KEY": {
          "value": "CONTEXT_VALUE"
        }
      },
      "keysToClear": ["CONTEXT_KEY_TO_CLEAR"]
    },
    "clearAll": CLEAR_ALL_VALUE
  },
  "dryRun": DRY_RUN_VALUE
}
Where:

- JOB_NAME is the name of the storage batch operations job.
- JOB_DESCRIPTION is the description of the storage batch operations job.
- BUCKET_NAME is the name of the bucket that contains one or more objects you want to process.
- To specify the objects you want to process, use any one of the following attributes in the JSON file:
  - MANIFEST_LOCATION is the manifest location. For example, gs://bucket_name/path/object_name.csv.
  - OBJECT_PREFIXES is a comma-separated list containing one or more object prefixes. To match all objects, use an empty list.
Depending on the job you want to process, specify any one of the following options:

- Delete objects:

  "DeleteObject": {
    "permanent_object_deletion_enabled": OBJECT_DELETION_VALUE
  }

  Where OBJECT_DELETION_VALUE is TRUE to permanently delete objects.

- Update the customer-managed encryption key for objects:

  "RewriteObject": {
    "kms_key": "KMS_KEY_VALUE"
  }

  Where KMS_KEY_VALUE is the KMS key that you want the objects to use.

- Update object metadata:

  "PutMetadata": {
    "METADATA_KEY": "METADATA_VALUE",
    ...,
    "objectRetention": {
      "retainUntilTime": "RETAIN_UNTIL_TIME",
      "mode": "RETENTION_MODE"
    }
  }

  Where:

  - METADATA_KEY and METADATA_VALUE are the object's metadata key-value pair. You can specify one or more pairs.
  - RETAIN_UNTIL_TIME is the date and time, in RFC 3339 format, until which the object is retained. For example, 2025-10-09T10:30:00Z. To set the retention configuration on an object, you need to enable object retention on the bucket that contains the object.
  - RETENTION_MODE is the retention mode, either Unlocked or Locked.

  When you send a request to update the RETENTION_MODE and RETAIN_UNTIL_TIME fields, consider the following:

  - To update the object retention configuration, you must provide non-empty values for both the RETENTION_MODE and RETAIN_UNTIL_TIME fields; setting only one results in an INVALID_ARGUMENT error.
  - You can extend the RETAIN_UNTIL_TIME value for objects in either Unlocked or Locked mode.
  - The object retention must be in Unlocked mode if you want to do the following:
    - Reduce the RETAIN_UNTIL_TIME value.
    - Remove the retention configuration. To remove the configuration, provide empty values for both the RETENTION_MODE and RETAIN_UNTIL_TIME fields.
  - If you omit both the RETENTION_MODE and RETAIN_UNTIL_TIME fields, the retention configuration remains unchanged.

- Update object holds:

  "PutObjectHold": {
    "temporary_hold": TEMPORARY_HOLD_VALUE,
    "event_based_hold": EVENT_BASED_HOLD_VALUE
  }

  Where:

  - TEMPORARY_HOLD_VALUE enables or disables the temporary object hold. A value of 1 enables the hold, and a value of 2 disables the hold.
  - EVENT_BASED_HOLD_VALUE enables or disables the event-based object hold. A value of 1 enables the hold, and a value of 2 disables the hold.

- Update object contexts:

  "updateObjectCustomContext": {
    "customContextUpdates": {
      "updates": {
        "CONTEXT_KEY": {
          "value": "CONTEXT_VALUE"
        }
      },
      "keysToClear": ["CONTEXT_KEY_TO_CLEAR"]
    },
    "clearAll": CLEAR_ALL_VALUE
  }

  Where:

  - CONTEXT_KEY is the object context key to insert or update.
  - CONTEXT_VALUE is the object context value for the key.
  - CONTEXT_KEY_TO_CLEAR is the key to remove.
  - CLEAR_ALL_VALUE is set to true to delete all existing object contexts.

Additionally, DRY_RUN_VALUE is an optional boolean value. Set it to true to run the job in dry run mode. The default value is false.
Use cURL to call the JSON API with a POST storage batch operations job request:

curl -X POST --data-binary @JSON_FILE_NAME \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://storagebatchoperations.googleapis.com/v1/projects/PROJECT_ID/locations/global/jobs?job_id=JOB_NAME"

Where:

- JSON_FILE_NAME is the name of the JSON file.
- PROJECT_ID is the ID or number of the project. For example, my-project.
- JOB_NAME is the name of the storage batch operations job.
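As a concrete illustration, a manifest-driven delete job file (hypothetical bucket and manifest names) following the template above might look like this; validating it locally catches syntax errors before the cURL call:

```shell
# Hypothetical settings file for a permanent delete job driven by a manifest,
# run first in dry run mode.
cat > job.json <<'EOF'
{
  "Description": "delete stale temp objects",
  "BucketList": {
    "Buckets": [
      {
        "Bucket": "my-bucket",
        "Manifest": {
          "manifest_location": "gs://my-bucket/manifest.csv"
        }
      }
    ]
  },
  "DeleteObject": {
    "permanent_object_deletion_enabled": true
  },
  "dryRun": true
}
EOF

# Check the file parses as JSON before POSTing it with cURL.
python3 -m json.tool job.json > /dev/null && echo "job.json is valid"
```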
Get storage batch operations job details
This section describes how to get the storage batch operations job details.
To get the permissions that
you need to view a storage batch operations job,
ask your administrator to grant you the
Storage Admin (roles/storage.admin) IAM role on the project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
- In the list of buckets, click the name of the bucket associated with the operation.
- On the Bucket details page, click the Operations tab.
- In the list of operations, click the Operation ID of the job you want to view.
- The details page shows metrics for your job in the Overview tab, such as the number of objects discovered and processed, and any errors that occurred.
- In the Error summary table, review execution failure details or click View in Cloud Logging to view records.
- To view the configuration settings for the job, click the Configuration tab.
Command line
- In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
In your development environment, run the gcloud storage batch-operations jobs describe command:

gcloud storage batch-operations jobs describe JOB_ID

Where JOB_ID is the name of the storage batch operations job.

When you dry run a job, the output includes the following fields:

- totalObjectCount: Displays the number of objects that match your selection criteria.
- errorSummaries: Lists any errors found during the dry run, such as permission issues or invalid configurations.
- totalBytesFound: If you use object prefixes for object selection, then the job also shows the total size of the objects that will be affected.
If successful, the response for the dry run job looks similar to the following example:

bucketList:
  buckets:
  - bucket: my-bucket
    manifest:
      manifestLocation: gs://my-bucket/manifest.csv
completeTime: '2025-10-27T23:56:32Z'
counters:
  totalObjectCount: '4'
createTime: '2025-10-27T23:56:22.243528568Z'
dryRun: true
name: projects/my-project/locations/global/jobs/my-job
putMetadata:
  contentLanguage: en
state: SUCCEEDED

A successful job response omits the dryRun field and returns the following metrics in the counters field:

- Total objects found.
- Total bytes found when using object prefixes.
- Successful object transformations.
- Failed object transformations, if applicable.
- Object contexts created, if applicable.
- Object contexts deleted, if applicable.
- Object contexts updated, if applicable. This counter tracks updates made to existing context keys.

The response for an actual job run looks similar to the following example:

bucketList:
  buckets:
  - bucket: my-bucket
    manifest:
      manifestLocation: gs://my-bucket/manifest.csv
completeTime: '2025-10-31T20:19:42.357826655Z'
counters:
  succeededObjectCount: '4'
  totalObjectCount: '4'
createTime: '2025-10-31T20:19:22.016517077Z'
name: projects/my-project/locations/global/jobs/my-job
putMetadata:
  contentLanguage: en
state: SUCCEEDED
Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
REST APIs
JSON API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Use cURL to call the JSON API with a GET storage batch operations job request:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storagebatchoperations.googleapis.com/v1/projects/PROJECT_ID/locations/global/jobs/JOB_ID"

Where:

- PROJECT_ID is the ID or number of the project. For example, my-project.
- JOB_ID is the name of the storage batch operations job.
When you dry run a job, the output includes the following fields:

- totalObjectCount: Displays the number of objects that match your selection criteria.
- errorSummaries: Lists any errors found during the dry run, such as permission issues or invalid configurations.
- totalBytesFound: If you use object prefixes for object selection, then the job also shows the total size of the objects that will be affected.
If successful, the response for the dry run looks similar to the following example:

{
  "name": "projects/my-project/locations/global/jobs/my-job",
  "description": "dry-run-job",
  "deleteObject": {
    "permanent_object_deletion_enabled": true
  },
  "createTime": "2025-10-28T00:26:53.900882459Z",
  "completeTime": "2025-10-28T00:27:04.101663275Z",
  "counters": {
    "totalObjectCount": "5",
    "totalBytesFound": "203"
  },
  "state": "SUCCEEDED",
  "bucketList": {
    "buckets": [
      {
        "bucket": "my-bucket",
        "prefixList": {
          "includedObjectPrefixes": [
            ""
          ]
        }
      }
    ]
  },
  "dryRun": true
}
A successful job response omits the dryRun field and returns the following
metrics in the counters field:
- Total objects found.
- Total bytes found when using object prefixes.
- Successful object transformations.
- Failed object transformations, if applicable.
- Object contexts created, if applicable.
- Object contexts deleted, if applicable.
- Object contexts updated, if applicable. This counter tracks updates made to existing context keys.
The response for an actual job run looks similar to the following example:

{
  "name": "my-job",
  "description": "my-delete-objects-job",
  "deleteObject": {
    "permanent_object_deletion_enabled": true
  },
  "createTime": "2025-10-28T00:26:53.900882459Z",
  "completeTime": "2025-10-28T00:27:04.101663275Z",
  "counters": {
    "succeededObjectCount": "5",
    "totalObjectCount": "5",
    "totalBytesFound": "203"
  },
  "state": "SUCCEEDED",
  "bucketList": {
    "buckets": [
      {
        "bucket": "my-bucket",
        "prefixList": {
          "includedObjectPrefixes": [
            ""
          ]
        }
      }
    ]
  }
}
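When scripting against the API, you can pull the state and counters out of the response with a small parser. A sketch over a response like the example above, saved locally as a hypothetical response.json:

```shell
# Hypothetical saved copy of the JSON response from the GET request.
cat > response.json <<'EOF'
{
  "name": "my-job",
  "state": "SUCCEEDED",
  "counters": {
    "succeededObjectCount": "5",
    "totalObjectCount": "5",
    "totalBytesFound": "203"
  }
}
EOF

# Extract the job state and success counts; note the API returns counters
# as strings.
python3 - <<'EOF'
import json

with open("response.json") as f:
    job = json.load(f)

counters = job.get("counters", {})
print(job["state"],
      counters.get("succeededObjectCount", "0"),
      "of",
      counters.get("totalObjectCount", "0"))
EOF
```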
List storage batch operations jobs
This section describes how to list the storage batch operations jobs within a project.
To get the permissions that
you need to list storage batch operations jobs,
ask your administrator to grant you the
Storage Admin (roles/storage.admin) IAM role on the project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
- In the list of buckets, click the name of the bucket associated with the operation.
- On the Bucket details page, click the Operations tab. The Operations page shows a list of active running operations.
Command line
- In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
In your development environment, run the gcloud storage batch-operations jobs list command:

gcloud storage batch-operations jobs list
Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
REST APIs
JSON API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Use cURL to call the JSON API with a list storage batch operations jobs request:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storagebatchoperations.googleapis.com/v1/projects/PROJECT_ID/locations/global/jobs"

Where PROJECT_ID is the ID or number of the project. For example, my-project.
Cancel a storage batch operations job
This section describes how to cancel a storage batch operations job within a project.
To get the permissions that
you need to cancel a storage batch operations job,
ask your administrator to grant you the
Storage Admin (roles/storage.admin) IAM role on the project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
In the bucket list, click the name of the bucket associated with the storage batch operation that you want to cancel.
Click the Operations tab. This tab displays a list of batch operation jobs. You can only cancel jobs that are in progress.
In the list of operations, select one or multiple jobs that you want to cancel, and then click Cancel.
Command line
- In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
In your development environment, run the gcloud storage batch-operations jobs cancel command:

gcloud storage batch-operations jobs cancel JOB_ID

Where JOB_ID is the name of the storage batch operations job.
Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
REST APIs
JSON API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Use cURL to call the JSON API with a cancel storage batch operations job request:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storagebatchoperations.googleapis.com/v1/projects/PROJECT_ID/locations/global/jobs/JOB_ID:cancel"

Where:

- PROJECT_ID is the ID or number of the project. For example, my-project.
- JOB_ID is the name of the storage batch operations job.
Delete a storage batch operations job
This section describes how to delete a storage batch operations job.
To get the permissions that
you need to delete a storage batch operations job,
ask your administrator to grant you the
Storage Admin (roles/storage.admin) IAM role on the project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
In the bucket list, click the name of the bucket associated with the storage batch operation that you want to delete.
Click the Operations tab. This tab displays a list of batch operation jobs. You can delete only jobs that aren't running, such as jobs that succeeded, failed, or were canceled.
In the list of operations, select one or multiple jobs that you want to delete, and then click Delete.
Command line
- In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
In your development environment, run the gcloud storage batch-operations jobs delete command:

gcloud storage batch-operations jobs delete JOB_ID

Where JOB_ID is the name of the storage batch operations job.
Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
REST APIs
JSON API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Use cURL to call the JSON API with a DELETE storage batch operations job request:

curl -X DELETE \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storagebatchoperations.googleapis.com/v1/projects/PROJECT_ID/locations/global/jobs/JOB_ID"

Where:

- PROJECT_ID is the ID or number of the project. For example, my-project.
- JOB_ID is the name of the storage batch operations job.
Create a storage batch operations job using Storage Insights datasets
To create a storage batch operations job using Storage Insights datasets, complete the steps in the following sections.
To get the permissions that
you need to create a storage batch operations job,
ask your administrator to grant you the
Storage Admin (roles/storage.admin) IAM role on the project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
Create a manifest using Storage Insights datasets
You can create the manifest for your storage batch operations job by extracting data from BigQuery. To do so, you'll need to query the linked dataset, export the resulting data as a CSV file, and save it to a Cloud Storage bucket. The storage batch operations job can then use this CSV file as its manifest.
Running the following SQL query in BigQuery on a Storage Insights dataset view retrieves objects larger than 1 MiB whose names begin with Temp_Training:
EXPORT DATA OPTIONS(
  uri='URI',
  format='CSV',
  overwrite=OVERWRITE_VALUE,
  field_delimiter=','
) AS
SELECT bucket, name, generation
FROM DATASET_VIEW_NAME
WHERE bucket = 'BUCKET_NAME'
  AND name LIKE 'Temp_Training%'
  AND size > 1024 * 1024
  AND snapshotTime = 'SNAPSHOT_TIME'
Where:

- URI is the URI of the bucket that contains the manifest. For example, gs://bucket_name/path_to_csv_file/*.csv. When you use the *.csv wildcard, BigQuery exports the result to multiple CSV files.
- OVERWRITE_VALUE is a boolean value. If set to true, the export operation overwrites existing files at the specified location.
- DATASET_VIEW_NAME is the fully qualified name of the Storage Insights dataset view in PROJECT_ID.DATASET_ID.VIEW_NAME format. To find the name of your dataset, view the linked dataset. Where:
  - PROJECT_ID is the ID or number of the project. For example, my-project.
  - DATASET_ID is the name of the dataset. For example, objects-deletion-dataset.
  - VIEW_NAME is the name of the dataset view. For example, bucket_attributes_view.
- BUCKET_NAME is the name of the bucket. For example, my-bucket.
- SNAPSHOT_TIME is the snapshot time of the Storage Insights dataset view. For example, 2024-09-10T00:00:00Z.
Create a storage batch operations job using a manifest file
To create a storage batch operations job to process objects contained in the manifest, complete the following steps:
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
In the list of buckets, click the name of the bucket that contains the objects on which you want to perform batch operations.
The Bucket details page opens, with the Objects tab selected.
- Click Create batch operations.
- In the Select operation pane, choose the operation type:
- Manage object holds: Select Temporary hold or Event-based hold. For more information, see object holds.
- Update object metadata: To add object
metadata, do the following:
- To add custom metadata, complete the following steps:
- In the Key field, enter a key name.
- In the Value field, enter a value for that key.
- Optional: Click + Add item to add more key-value pairs.
- To update fixed-key metadata, complete the following steps:
- To expand the Update fixed-key metadata section, click the expander arrow.
- In the Select one or more metadata to update list, select metadata items to edit.
- To add custom metadata, complete the following steps:
- Update/Rotate encryption key: To use or update the encryption
key for objects, do the following:
- In the Select a Cloud KMS key list, select a customer-managed encryption key (CMEK).
- Optional: Select Switch project to pick a key from another project or select Enter key manually to fill details.
- Delete objects: To delete
objects, do the following:
- Check whether Object Versioning is enabled.
If Object Versioning is enabled, choose one of the following deletion options:
- Select Delete all versions of the objects to remove both live and noncurrent versions.
- Select Permanently delete live versions to remove only the live version.
If Object Versioning is not enabled, any objects selected for deletion are permanently deleted.
- Click Next.
- In the Name operation & specify objects pane, do the following:
- In the Name field, enter a name.
- Optional: In the Description field, enter a description.
- In the Specify objects section, select Upload lists of
objects using manifest CSV files, and then do the following:
- Upload your manifest CSV file to a bucket. This file must contain headers for Bucket name, Object key, and Generation number.
- In the Select manifest file mode list, choose one of the following options:
- If you select Select a manifest file from Cloud Storage, click Browse in the Select a manifest file from Cloud Storage field. In the Select object dialog that appears, navigate to your manifest CSV file, then click Select.
- If you select Select multiple manifest files using wildcard,
enter the file path in the Enter manifest file location using
wildcard field. For example,
bucket-name/folder/manifest_*.
- Click Create.
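For illustration, a manifest CSV like the one referenced in the steps above could be created locally before uploading. The bucket and object names below are hypothetical; the column headers follow the documented manifest format (bucket name, object key, generation number):

```shell
# Hypothetical manifest file; bucket and object names are examples only.
cat > manifest.csv <<'EOF'
bucket,name,generation
my-bucket,data/temp_training_001.csv,1712345678901234
my-bucket,data/temp_training_002.csv,1712345678905678
EOF

# Upload it to Cloud Storage before creating the job (requires gcloud auth):
# gcloud storage cp manifest.csv gs://my-bucket/manifest.csv
head -n 1 manifest.csv
```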
Command line
- In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
In your development environment, run the `gcloud storage batch-operations jobs create` command:

```
gcloud storage batch-operations jobs create JOB_ID \
  --bucket=SOURCE_BUCKET_NAME \
  --manifest-location=URI \
  JOB_TYPE_FLAG
```

Where:

- `JOB_ID` is the name of the storage batch operations job.
- `SOURCE_BUCKET_NAME` is the bucket that contains one or more objects you want to process. For example, `my-bucket`.
- `URI` is the URI to the bucket that contains the manifest. For example, `gs://bucket_name/path_to_csv_file/*.csv`. When you use the `*.csv` wildcard, BigQuery exports the result to multiple CSV files.
- `JOB_TYPE_FLAG` is one of the following flags, depending on the job type:
  - `--delete-object`: Delete one or more objects.
  - `--put-metadata`: Update object metadata. Object metadata is stored as key-value pairs. Specify the key-value pair for the metadata you want to modify. You can specify one or more key-value pairs as a list. You can also provide object retention configurations using the `--put-metadata` flag.
  - `--rewrite-object`: Update the customer-managed encryption keys for one or more objects.
  - `--put-object-event-based-hold`: Enable event-based object holds.
  - `--no-put-object-event-based-hold`: Disable event-based object holds.
  - `--put-object-temporary-hold`: Enable temporary object holds.
  - `--no-put-object-temporary-hold`: Disable temporary object holds.
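As a sketch, the placeholders above might be filled in as follows for a deletion job. The job name, bucket, and manifest path are hypothetical, and the script only echoes the command rather than running it, since creating a job requires gcloud authentication and an existing bucket:

```shell
# Hypothetical values; substitute your own job name, bucket, and manifest URI.
JOB_ID="delete-temp-objects"
SOURCE_BUCKET_NAME="my-bucket"
MANIFEST_URI="gs://my-bucket/manifest.csv"

# Echoed rather than executed; remove the echo to run it in an
# authenticated environment.
echo "gcloud storage batch-operations jobs create ${JOB_ID} \
  --bucket=${SOURCE_BUCKET_NAME} \
  --manifest-location=${MANIFEST_URI} \
  --delete-object"
```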
  - `--clear-all-object-custom-contexts`: Delete all existing object contexts. The following example shows how to create a job to clear all object contexts for objects listed in `manifest.csv`:

    ```
    gcloud storage batch-operations jobs create my-job \
      --bucket=my-bucket \
      --manifest-location=gs://my-bucket/manifest.csv \
      --clear-all-object-custom-contexts
    ```
  - `--clear-object-custom-contexts`: Remove contexts with specific keys. You can also update specific contexts while removing keys by using both the `--clear-object-custom-contexts` flag and one of the following flags:
    - `--update-object-custom-contexts`: Provide a map of key-value pairs. The following example shows how to create a job that removes the context with key `temp-id` and updates or inserts contexts with keys `project-id` and `cost-center` for all objects listed in `manifest.csv`:

      ```
      gcloud storage batch-operations jobs create my-job \
        --bucket=my-bucket \
        --manifest-location=gs://my-bucket/manifest.csv \
        --clear-object-custom-contexts=temp-id \
        --update-object-custom-contexts=project-id=project-A,cost-center=engineering
      ```
    - `--update-object-custom-contexts-file`: Provide the path to a JSON or YAML file with key-value pairs. The following example shows how to create a job to process objects defined in `manifest.csv`. The job does the following:
      - Removes all contexts with the `temp-id` key.
      - Updates existing contexts with the `project-id` and `cost-center` keys defined in the `/tmp/context_updates.json` file.

      ```
      gcloud storage batch-operations jobs create my-job \
        --bucket=my-bucket \
        --manifest-location=gs://my-bucket/manifest.csv \
        --clear-object-custom-contexts=temp-id \
        --update-object-custom-contexts-file=/tmp/context_updates.json
      ```

      Where `/tmp/context_updates.json` contains the following object contexts:

      ```
      {
        "project-id": {"value": "project-A"},
        "cost-center": {"value": "engineering"}
      }
      ```
Integration with VPC Service Controls
You can provide an additional layer of security for storage batch operations resources by using VPC Service Controls. When you use VPC Service Controls, you add projects to service perimeters that protect resources and services from requests that originate from outside of the perimeter. To learn more about VPC Service Controls service perimeter details for storage batch operations, see Supported products and limitations.
Use Cloud Audit Logs for storage batch operations jobs
Storage batch operations jobs record transformations on Cloud Storage objects in Cloud Audit Logs for Cloud Storage. You can use Cloud Audit Logs with Cloud Storage to track the object transformations that storage batch operations jobs perform. For information about enabling audit logs, see Enabling audit logs. In an audit log entry, the `callUserAgent` metadata field with the value `StorageBatchOperations` indicates a storage batch operations transformation.
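As a sketch, you could filter for these entries with `gcloud logging read`. The field path `protoPayload.metadata.callUserAgent` is an assumption based on the description above; verify it against your own audit log entries before relying on it:

```shell
# Assumed field path for the callUserAgent metadata field; confirm it in
# your own audit log entries.
FILTER='protoPayload.metadata.callUserAgent="StorageBatchOperations"'

# Requires gcloud authentication and enabled audit logs, so the command is
# echoed here rather than executed.
echo "gcloud logging read '${FILTER}' --limit=10"
```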
Next steps
- Learn about Storage Insights datasets