Create and manage batch operation jobs

This page describes how to create, view, list, cancel, and delete storage batch operations jobs. It also describes how to use Cloud Audit Logs with storage batch operations jobs.

Before you begin

To create and manage storage batch operations jobs, complete the steps in the following sections.

Configure Storage Intelligence

To create and manage storage batch operations jobs, configure Storage Intelligence on the bucket where you want to run the job.

Enable the API

Enable the storage batch operations API:

gcloud services enable storagebatchoperations.googleapis.com
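
To confirm that the API is enabled, you can optionally list the enabled services for your project and filter the output. The following check uses the config.name field, which corresponds to the service name column in the list output:

gcloud services list --enabled \
--filter="config.name=storagebatchoperations.googleapis.com"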

Create a manifest

If you want to use a manifest for object selection, create a manifest file. Using a manifest is one of the ways you can select objects to process in a storage batch operations job.
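
For illustration, a manifest file that selects two objects (using example bucket, object, and generation values) might look like the following, with one header row followed by one row per object:

bucket,name,generation
my-bucket,data/object1.txt,1700154117012345
my-bucket,data/object2.txt,1700154117012346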

Create a storage batch operations job

This section describes how to create a storage batch operations job.

To get the permissions that you need to create a storage batch operations job, ask your administrator to grant you the Storage Admin (roles/storage.admin) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, click the name of the bucket that contains the objects on which you want to perform batch operations.

    The Bucket details page opens, with the Objects tab selected.

  3. Click Create batch operations.
  4. In the Select operation pane, choose the operation type:
    • Manage object holds: Select Temporary hold or Event-based hold. For more information, see object holds.
    • Update object metadata: To add object metadata, do the following:
      • To add custom metadata, complete the following steps:
        1. In the Key field, enter a key name.
        2. In the Value field, enter a value for that key.
        3. Optional: Click + Add item to add more key-value pairs.
      • To update fixed-key metadata, complete the following steps:
        1. To expand the Update fixed-key metadata section, click the expander arrow.
        2. In the Select one or more metadata to update list, select metadata items to edit.
    • Update/Rotate encryption key: To use or update the encryption key for objects, do the following:
      1. In the Select a Cloud KMS key list, select a customer-managed encryption key (CMEK).
      2. Optional: Select Switch project to choose a key from another project, or select Enter key manually to enter the key details.
    • Delete objects: To delete objects, do the following:
      1. Check whether Object Versioning is enabled.
      2. If Object Versioning is enabled, choose one of the following deletion options:

        • Select Delete all versions of the objects to remove both live and noncurrent versions.
        • Select Permanently delete live versions to remove only the live version.

        If Object Versioning is not enabled, any objects selected for deletion are permanently deleted.

  5. Click Next.
  6. In the Name operation & specify objects pane, do the following:
    1. In the Name field, enter a name.
    2. Optional: In the Description field, enter a description.
    3. In the Specify objects section, define the criteria for selecting the objects to process. Choose one of the following options:
      • Select all objects: Includes all objects in the bucket.
      • Select objects by using prefix filters: To define the list of objects by using prefix filters, do the following:
        1. In the Enter Prefixes of the objects to be included field, enter a prefix.
        2. Optional: Click + Add prefix to specify additional prefixes.
      • Upload lists of objects using manifest CSV files: To use a manifest file for selecting objects, do the following:

        1. Upload your manifest CSV file to a bucket. This file must contain headers for Bucket name, Object key, and Generation number.
        2. In the Select manifest file mode list, choose one of the following options:
          • If you select Select a manifest file from Cloud Storage, click Browse in the Select a manifest file from Cloud Storage field. In the Select object dialog that appears, navigate to your manifest CSV file, then click Select.
          • If you select Select multiple manifest files using wildcard, enter the file path in the Enter manifest file location using wildcard field. For example, bucket-name/folder/manifest_*.
  7. Click Create.

Command line

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Use Google Cloud CLI version 516.0.0 or later.

  3. To set the default project, run the gcloud config set project command:

    gcloud config set project PROJECT_ID

    Where PROJECT_ID is the ID of your project.

  4. Optional: Run a dry run job. Before executing any job, we recommend that you run the job in dry run mode to verify the object selection criteria and check for any errors. The dry run does not modify any objects.

    In your development environment, run the gcloud storage batch-operations jobs create command with the --dry-run flag:

    gcloud storage batch-operations jobs create DRY_RUN_JOB_NAME \
    --bucket=BUCKET_NAME OBJECT_SELECTION_FLAG JOB_TYPE_FLAG \
    --dry-run

    The dry run uses the same parameters as the actual job. For details, see parameter descriptions.

    To view the results of the dry run, see Get storage batch operations job details.
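
    For example, the following dry run (using the example names my-dry-run-job and my-bucket) evaluates a job that would delete all objects whose names begin with the temp/ prefix, without modifying any objects:

    gcloud storage batch-operations jobs create my-dry-run-job \
    --bucket=my-bucket \
    --included-object-prefixes='temp/' \
    --delete-object \
    --dry-run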

  5. After a successful dry run, run the gcloud storage batch-operations jobs create command.

    gcloud storage batch-operations jobs create JOB_NAME \
    --bucket=BUCKET_NAME OBJECT_SELECTION_FLAG JOB_TYPE_FLAG

    Where the parameters are as follows:

    • DRY_RUN_JOB_NAME is the name of the storage batch operations dry run job.
    • JOB_NAME is the name of the storage batch operations job.

    • BUCKET_NAME is the name of the bucket that contains one or more objects you want to process.

    • OBJECT_SELECTION_FLAG is one of the following flags that you need to specify:

      • --included-object-prefixes: Specify one or more object prefixes. For example:

        • To match a single prefix, use: --included-object-prefixes='prefix1'.
        • To match multiple prefixes, use a comma-separated prefix list: --included-object-prefixes='prefix1,prefix2'.
        • To include all objects, use an empty prefix: --included-object-prefixes=''.
      • --manifest-location: Specify the manifest location. For example, gs://bucket_name/path/object_name.csv.

    • JOB_TYPE_FLAG is one of the following flags that you need to specify, depending on the job type.

      • --delete-object: Delete one or more objects.

        • If Object Versioning is enabled for the bucket, live objects become noncurrent, and noncurrent objects are skipped.

        • If Object Versioning is disabled for the bucket, the delete operation permanently deletes objects and skips noncurrent objects.

      • --enable-permanent-object-deletion: Permanently delete objects. Use this flag along with the --delete-object flag to permanently delete both live and noncurrent objects in a bucket, regardless of the bucket's object versioning configuration.

      • --rewrite-object: Update the customer-managed encryption keys for one or more objects.

      • --put-object-event-based-hold: Enable event-based object holds.

      • --no-put-object-event-based-hold: Disable event-based object holds.

      • --put-object-temporary-hold: Enable temporary object holds.

      • --no-put-object-temporary-hold: Disable temporary object holds.
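
        For example, the following command creates a job (using the example names my-hold-job and my-bucket) that places a temporary hold on every object whose name begins with invoices/:

        gcloud storage batch-operations jobs create my-hold-job \
        --bucket=my-bucket \
        --included-object-prefixes='invoices/' \
        --put-object-temporary-hold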

      • --put-metadata: Update object metadata. Specify the key-value pair for the object metadata you want to modify. You can specify one or more key-value pairs as a list.

        The following example shows how to create a job that updates the Content-Language metadata to en for all objects listed in manifest.csv:

        gcloud storage batch-operations jobs create my-job \
        --bucket=my-bucket \
        --manifest-location=gs://my-bucket/manifest.csv \
        --put-metadata=Content-Language=en

        You can also set object retention configurations by using the --put-metadata flag. To do so, specify the retention parameters in the Retain-Until and Retention-Mode fields. For example:

        gcloud storage batch-operations jobs create my-job \
        --bucket=my-bucket \
        --manifest-location=gs://my-bucket/manifest.csv \
        --put-metadata=Retain-Until=RETAIN_UNTIL_TIME,Retention-Mode=RETENTION_MODE

        Where:

        • RETAIN_UNTIL_TIME is the date and time, in RFC 3339 format, until which the object is retained. For example, 2025-10-09T10:30:00Z. To set the retention configuration on an object, you must enable object retention on the bucket that contains the object.

        • RETENTION_MODE is the retention mode, either Unlocked or Locked.

          When you send a request to update the RETENTION_MODE and RETAIN_UNTIL_TIME fields, consider the following:

          • To update the object retention configuration, you must provide non-empty values for both RETENTION_MODE and RETAIN_UNTIL_TIME fields; setting only one results in an INVALID_ARGUMENT error.
          • You can extend the RETAIN_UNTIL_TIME value for objects in either Unlocked or Locked mode.
          • The object retention must be in Unlocked mode if you want to do the following:
            • Reduce the RETAIN_UNTIL_TIME value.
            • Remove the retention configuration. To remove the configuration, you'll need to provide empty values for both RETENTION_MODE and RETAIN_UNTIL_TIME fields.
          • If you omit both RETENTION_MODE and RETAIN_UNTIL_TIME fields, the retention configuration remains unchanged.

      • --clear-all-object-custom-contexts: Delete all existing object contexts.

        The following example shows how to create a job to clear all object contexts for objects listed in manifest.csv:

        gcloud storage batch-operations jobs create my-job \
        --bucket=my-bucket \
        --manifest-location=gs://my-bucket/manifest.csv \
        --clear-all-object-custom-contexts
      • --clear-object-custom-contexts: Remove contexts with specific keys. You can also update specific contexts along with removing keys by using both the --clear-object-custom-contexts flag and one of the following flags:

        • --update-object-custom-contexts: Provide a map of key-value pairs.

          The following example shows how to create a job to remove the context with key temp-id and update or insert context with key project-id and cost-center for all objects listed in manifest.csv:

          gcloud storage batch-operations jobs create my-job \
          --bucket=my-bucket \
          --manifest-location=gs://my-bucket/manifest.csv \
          --clear-object-custom-contexts=temp-id \
          --update-object-custom-contexts=project-id=project-A,cost-center=engineering
        • --update-object-custom-contexts-file: Provide the path to a JSON or YAML file with key-value pairs.

          The following example shows how to create a job to process objects defined in manifest.csv. The job does the following:

          • Removes all contexts with the temp-id key.

          • Updates existing contexts with the project-id and cost-center keys defined in the /tmp/context_updates.json file.

          gcloud storage batch-operations jobs create my-job \
          --bucket=my-bucket \
          --manifest-location=gs://my-bucket/manifest.csv \
          --clear-object-custom-contexts=temp-id \
          --update-object-custom-contexts-file=/tmp/context_updates.json

          Where /tmp/context_updates.json contains the following object contexts:

          {
          "project-id": {"value": "project-A"},
          "cost-center": {"value": "engineering"}
          }

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

[](google::cloud::storagebatchoperations_v1::StorageBatchOperationsClient
       client,
   std::string const& project_id, std::string const& job_id,
   std::string const& target_bucket_name, std::string const& object_prefix) {
  auto const parent =
      std::string{"projects/"} + project_id + "/locations/global";
  namespace sbo = google::cloud::storagebatchoperations::v1;
  sbo::Job job;
  sbo::BucketList* bucket_list = job.mutable_bucket_list();
  sbo::BucketList::Bucket* bucket_config = bucket_list->add_buckets();
  bucket_config->set_bucket(target_bucket_name);
  sbo::PrefixList* prefix_list_config = bucket_config->mutable_prefix_list();
  prefix_list_config->add_included_object_prefixes(object_prefix);
  sbo::DeleteObject* delete_object_config = job.mutable_delete_object();
  delete_object_config->set_permanent_object_deletion_enabled(false);
  auto result = client.CreateJob(parent, job, job_id).get();
  if (!result) throw result.status();
  std::cout << "Created job: " << result->name() << "\n";
}

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\StorageBatchOperations\V1\Client\StorageBatchOperationsClient;
use Google\Cloud\StorageBatchOperations\V1\CreateJobRequest;
use Google\Cloud\StorageBatchOperations\V1\Job;
use Google\Cloud\StorageBatchOperations\V1\BucketList;
use Google\Cloud\StorageBatchOperations\V1\BucketList\Bucket;
use Google\Cloud\StorageBatchOperations\V1\PrefixList;
use Google\Cloud\StorageBatchOperations\V1\DeleteObject;

/**
 * Create a new batch job.
 *
 * @param string $projectId Your Google Cloud project ID.
 *        (e.g. 'my-project-id')
 * @param string $jobId A unique identifier for this job.
 *        (e.g. '94d60cc1-2d95-41c5-b6e3-ff66cd3532d5')
 * @param string $bucketName The name of your Cloud Storage bucket to operate on.
 *        (e.g. 'my-bucket')
 * @param string $objectPrefix The prefix of objects to include in the operation.
 *        (e.g. 'prefix1')
 */
function create_job(string $projectId, string $jobId, string $bucketName, string $objectPrefix): void
{
    // Create a client.
    $storageBatchOperationsClient = new StorageBatchOperationsClient();

    $parent = $storageBatchOperationsClient->locationName($projectId, 'global');

    $prefixListConfig = new PrefixList(['included_object_prefixes' => [$objectPrefix]]);
    $bucket = new Bucket(['bucket' => $bucketName, 'prefix_list' => $prefixListConfig]);
    $bucketList = new BucketList(['buckets' => [$bucket]]);

    $deleteObject = new DeleteObject(['permanent_object_deletion_enabled' => false]);

    $job = new Job(['bucket_list' => $bucketList, 'delete_object' => $deleteObject]);

    $request = new CreateJobRequest([
        'parent' => $parent,
        'job_id' => $jobId,
        'job' => $job,
    ]);
    $response = $storageBatchOperationsClient->createJob($request);

    printf('Created job: %s', $response->getName());
}

REST APIs

JSON API

  1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Create a JSON file that contains the settings for the storage batch operations job. The following are common settings to include:

    {
      "description": "JOB_DESCRIPTION",
      "bucketList": {
        "buckets": [
          {
            "bucket": "BUCKET_NAME",
            "manifest": {
              "manifestLocation": "MANIFEST_LOCATION"
            },
            "prefixList": {
              "includedObjectPrefixes": ["OBJECT_PREFIXES"]
            }
          }
        ]
      },
      "deleteObject": {
        "permanentObjectDeletionEnabled": OBJECT_DELETION_VALUE
      },
      "rewriteObject": {
        "kmsKey": "KMS_KEY_VALUE"
      },
      "putMetadata": {
        "METADATA_KEY": "METADATA_VALUE",
        ...,
        "objectRetention": {
          "retainUntilTime": "RETAIN_UNTIL_TIME",
          "mode": "RETENTION_MODE"
        }
      },
      "putObjectHold": {
        "temporaryHold": TEMPORARY_HOLD_VALUE,
        "eventBasedHold": EVENT_BASED_HOLD_VALUE
      },
      "updateObjectCustomContext": {
        "customContextUpdates": {
          "updates": {
            "CONTEXT_KEY": { "value": "CONTEXT_VALUE" }
          },
          "keysToClear": ["CONTEXT_KEY_TO_CLEAR"]
        },
        "clearAll": CLEAR_ALL_VALUE
      },
      "dryRun": DRY_RUN_VALUE
    }

    Where:

    • JOB_DESCRIPTION is the description of the storage batch operations job.

    • BUCKET_NAME is the name of the bucket that contains one or more objects you want to process.

    • To specify the objects you want to process, use any one of the following attributes in the JSON file:

      • MANIFEST_LOCATION is the manifest location. For example, gs://bucket_name/path/object_name.csv.

      • OBJECT_PREFIXES is the comma-separated list containing one or more object prefixes. To match all objects, use an empty list.

    • Depending on the job you want to process, specify any one of the following options:

      • Delete objects:

        "deleteObject":
        {
        "permanentObjectDeletionEnabled": OBJECT_DELETION_VALUE
        }

        Where OBJECT_DELETION_VALUE is a boolean value. Set it to true to permanently delete both live and noncurrent objects, regardless of the bucket's Object Versioning configuration.

      • Update the customer-managed encryption key for objects:

        "rewriteObject":
        {
        "kmsKey": "KMS_KEY_VALUE"
        }

        Where KMS_KEY_VALUE is the Cloud KMS key that you want to use for the objects.

      • Update object metadata:

        "putMetadata": {
          "METADATA_KEY": "METADATA_VALUE",
          ...,
          "objectRetention": {
            "retainUntilTime": "RETAIN_UNTIL_TIME",
            "mode": "RETENTION_MODE"
          }
        }

        Where:

        • METADATA_KEY/VALUE is the object's metadata key-value pair. You can specify one or more pairs.
        • RETAIN_UNTIL_TIME is the date and time, in RFC 3339 format, until which the object is retained. For example, 2025-10-09T10:30:00Z. To set the retention configuration on an object, you must enable object retention on the bucket that contains the object.
        • RETENTION_MODE is the retention mode, either Unlocked or Locked.

          When you send a request to update the RETENTION_MODE and RETAIN_UNTIL_TIME fields, consider the following:

          • To update the object retention configuration, you must provide non-empty values for both RETENTION_MODE and RETAIN_UNTIL_TIME fields; setting only one results in an INVALID_ARGUMENT error.
          • You can extend the RETAIN_UNTIL_TIME value for objects in either Unlocked or Locked mode.
          • The object retention must be in Unlocked mode if you want to do the following:
            • Reduce the RETAIN_UNTIL_TIME value.
            • Remove the retention configuration. To remove the configuration, you'll need to provide empty values for both RETENTION_MODE and RETAIN_UNTIL_TIME fields.
          • If you omit both RETENTION_MODE and RETAIN_UNTIL_TIME fields, the retention configuration remains unchanged.
      • Update object holds:

        "putObjectHold": {
        "temporaryHold": TEMPORARY_HOLD_VALUE,
        "eventBasedHold": EVENT_BASED_HOLD_VALUE
        }

        Where:

        • TEMPORARY_HOLD_VALUE enables or disables the temporary object hold. A value of 1 enables the hold, and a value of 2 disables the hold.

        • EVENT_BASED_HOLD_VALUE enables or disables the event-based object hold. A value of 1 enables the hold, and a value of 2 disables the hold.

      • Update object contexts:

        "updateObjectCustomContext": {
          "customContextUpdates": {
            "updates": {
              "CONTEXT_KEY": { "value": "CONTEXT_VALUE" }
            },
            "keysToClear": ["CONTEXT_KEY_TO_CLEAR"]
          },
          "clearAll": CLEAR_ALL_VALUE
        }

        Where:

        • CONTEXT_KEY is the object context key to insert or update.
        • CONTEXT_VALUE is the object context value for the key.
        • CONTEXT_KEY_TO_CLEAR is the key to remove.
        • CLEAR_ALL_VALUE is set to true to delete all existing object contexts.
    • DRY_RUN_VALUE is an optional boolean value. Set it to true to run the job in dry run mode. The default value is false.

  3. Use cURL to call the JSON API with a POST storage batch operations job request:

      curl -X POST --data-binary @JSON_FILE_NAME \
       -H "Authorization: Bearer $(gcloud auth print-access-token)" \
       -H "Content-Type: application/json" \
       "https://storagebatchoperations.googleapis.com/v1/projects/PROJECT_ID/locations/global/jobs?job_id=JOB_NAME"

      Where:

      • JSON_FILE_NAME is the name of the JSON file.
      • PROJECT_ID is the ID or number of the project. For example, my-project.
      • JOB_NAME is the name of the storage batch operations job.
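
      For example, the following sketch (using the example names my-bucket, my-cleanup-job, and my-project, and field names that follow the template above) defines a delete job that runs in dry run mode:

      {
        "description": "Dry run of temp object cleanup",
        "bucketList": {
          "buckets": [
            {
              "bucket": "my-bucket",
              "prefixList": {
                "includedObjectPrefixes": ["temp/"]
              }
            }
          ]
        },
        "deleteObject": {
          "permanentObjectDeletionEnabled": false
        },
        "dryRun": true
      }

      After saving this content as job_config.json, submit the job:

      curl -X POST --data-binary @job_config.json \
       -H "Authorization: Bearer $(gcloud auth print-access-token)" \
       -H "Content-Type: application/json" \
       "https://storagebatchoperations.googleapis.com/v1/projects/my-project/locations/global/jobs?job_id=my-cleanup-job"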

Get storage batch operations job details

This section describes how to get the storage batch operations job details.

To get the permissions that you need to view a storage batch operations job, ask your administrator to grant you the Storage Admin (roles/storage.admin) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, click the name of the bucket associated with the operation.
  3. On the Bucket details page, click the Operations tab.
  4. In the list of operations, click the Operation ID of the job you want to view.
  5. The details page shows metrics for your job in the Overview tab, such as objects discovered, processed, and any errors that occurred.
  6. In the Error summary table, review execution failure details or click View in Cloud Logging to view records.
  7. To view the configuration settings for the job, click the Configuration tab.

Command line

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. In your development environment, run the gcloud storage batch-operations jobs describe command.

    gcloud storage batch-operations jobs describe JOB_ID

    Where:

    JOB_ID is the name of the storage batch operations job.

    When you dry run a job, the output includes the following fields:

    • totalObjectCount: Displays the number of objects that match your selection criteria.
    • errorSummaries: Lists any errors found during the dry run, such as permission issues or invalid configurations.
    • totalBytesFound: If you use object prefixes for object selection, then the job also shows the total size of the objects that will be affected.

    If successful, the response for the dry run job looks similar to the following example:

      bucketList:
        buckets:
        - bucket: my-bucket
          manifest:
            manifestLocation: gs://my-bucket/manifest.csv
      completeTime: '2025-10-27T23:56:32Z'
      counters:
        totalObjectCount: '4'
      createTime: '2025-10-27T23:56:22.243528568Z'
      dryRun: true
      name: projects/my-project/locations/global/jobs/my-job
      putMetadata:
        contentLanguage: en
      state: SUCCEEDED
    

    A successful job response omits the dryRun field and returns the following metrics in the counters field:

    • Total objects found.
    • Total bytes found when using object prefixes.
    • Successful object transformations.
    • Failed object transformations, if applicable.
    • Object contexts created, if applicable.
    • Object contexts deleted, if applicable.
    • Object contexts updated, if applicable. This counter tracks updates made to existing context keys.

    The response for an actual job run looks similar to the following example:

      bucketList:
        buckets:
        - bucket: my-bucket
          manifest:
            manifestLocation: gs://my-bucket/manifest.csv
      completeTime: '2025-10-31T20:19:42.357826655Z'
      counters:
        succeededObjectCount: '4'
        totalObjectCount: '4'
      createTime: '2025-10-31T20:19:22.016517077Z'
      name: projects/my-project/locations/global/jobs/my-job
      putMetadata:
        contentLanguage: en
      state: SUCCEEDED
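
    To extract a single metric for scripting, you can use the gcloud CLI's standard --format flag. For example, assuming the field names shown in the output above, the following command prints only the total object count:

    gcloud storage batch-operations jobs describe my-job \
    --format="value(counters.totalObjectCount)"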
      

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

[](google::cloud::storagebatchoperations_v1::StorageBatchOperationsClient
       client,
   std::string const& project_id, std::string const& job_id) {
  auto const parent =
      std::string{"projects/"} + project_id + "/locations/global";
  auto const name = parent + "/jobs/" + job_id;
  auto job = client.GetJob(name);
  if (!job) throw job.status();
  std::cout << "Got job: " << job->name() << "\n";
}

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\StorageBatchOperations\V1\Client\StorageBatchOperationsClient;
use Google\Cloud\StorageBatchOperations\V1\GetJobRequest;

/**
 * Gets a batch job.
 *
 * @param string $projectId Your Google Cloud project ID.
 *        (e.g. 'my-project-id')
 * @param string $jobId A unique identifier for this job.
 *        (e.g. '94d60cc1-2d95-41c5-b6e3-ff66cd3532d5')
 */
function get_job(string $projectId, string $jobId): void
{
    // Create a client.
    $storageBatchOperationsClient = new StorageBatchOperationsClient();

    $parent = $storageBatchOperationsClient->locationName($projectId, 'global');
    $formattedName = $parent . '/jobs/' . $jobId;

    $request = new GetJobRequest([
        'name' => $formattedName,
    ]);

    $response = $storageBatchOperationsClient->getJob($request);

    printf('Got job: %s', $response->getName());
}

REST APIs

JSON API

  1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Use cURL to call the JSON API with a GET storage batch operations job request:

    curl -X GET \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://storagebatchoperations.googleapis.com/v1/projects/PROJECT_ID/locations/global/jobs/JOB_ID"

    Where:

    • PROJECT_ID is the ID or number of the project. For example, my-project.
    • JOB_ID is the name of the storage batch operations job.

    When you dry run a job, the output includes the following fields:

    • totalObjectCount: Displays the number of objects that match your selection criteria.
    • errorSummaries: Lists any errors found during the dry run, such as permission issues or invalid configurations.
    • totalBytesFound: If you use object prefixes for object selection, then the job also shows the total size of the objects that will be affected.

    If successful, the response for the dry run looks similar to the following example:

    {
      "name": "projects/my-project/locations/global/jobs/my-job",
      "description": "dry-run-job",
      "deleteObject": {
        "permanent_object_deletion_enabled": true
         },
      "createTime": "2025-10-28T00:26:53.900882459Z",
      "completeTime": "2025-10-28T00:27:04.101663275Z",
      "counters": {
          "totalObjectCount": "5",
          "totalBytesFound": "203"
        },
      "state": "SUCCEEDED",
      "bucketList": {
        "buckets": [
          {
            "bucket": "my-bucket",
            "prefixList": {
              "includedObjectPrefixes": [
                ""
              ]
            }
          }
        ]
      },
      "dryRun": true
    }
    

    A successful job response omits the dryRun field and returns the following metrics in the counters field:

    • Total objects found.
    • Total bytes found when using object prefixes.
    • Successful object transformations.
    • Failed object transformations, if applicable.
    • Object contexts created, if applicable.
    • Object contexts deleted, if applicable.
    • Object contexts updated, if applicable. This counter tracks updates made to existing context keys.

    The response for an actual job run looks similar to the following example:

    {
    "name": "projects/my-project/locations/global/jobs/my-job",
    "description": "my-delete-objects-job",
    "deleteObject": {
      "permanentObjectDeletionEnabled": true
    },
    "createTime": "2025-10-28T00:26:53.900882459Z",
    "completeTime": "2025-10-28T00:27:04.101663275Z",
    "counters": {
      "succeededObjectCount": "5",
      "totalObjectCount": "5",
      "totalBytesFound": "203"
    },
    "state": "SUCCEEDED",
    "bucketList": {
      "buckets": [
        {
          "bucket": "my-bucket",
          "prefixList": {
            "includedObjectPrefixes": [
              ""
            ]
          }
        }
      ]
    }
    }
    

List storage batch operations jobs

This section describes how to list the storage batch operations jobs within a project.

To get the permissions that you need to list storage batch operations jobs, ask your administrator to grant you the Storage Admin (roles/storage.admin) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, click the name of the bucket associated with the operation.
  3. On the Bucket details page, click the Operations tab. The tab displays a list of batch operation jobs.

Command line

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. In your development environment, run the gcloud storage batch-operations jobs list command.

    gcloud storage batch-operations jobs list
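
    You can trim the output with the gcloud CLI's standard list flags. For example, the following command limits the list to five jobs and prints only their resource names:

    gcloud storage batch-operations jobs list \
    --limit=5 \
    --format="value(name)"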

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

[](google::cloud::storagebatchoperations_v1::StorageBatchOperationsClient
       client,
   std::string const& project_id) {
  auto const parent =
      std::string{"projects/"} + project_id + "/locations/global";
  for (auto const& job : client.ListJobs(parent)) {
    if (!job) throw job.status();
    std::cout << job->name() << "\n";
  }
}

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\StorageBatchOperations\V1\Client\StorageBatchOperationsClient;
use Google\Cloud\StorageBatchOperations\V1\ListJobsRequest;

/**
 * List Jobs in a given project.
 *
 * @param string $projectId Your Google Cloud project ID.
 *        (e.g. 'my-project-id')
 */
function list_jobs(string $projectId): void
{
    // Create a client.
    $storageBatchOperationsClient = new StorageBatchOperationsClient();

    $parent = $storageBatchOperationsClient->locationName($projectId, 'global');

    $request = new ListJobsRequest([
        'parent' => $parent,
    ]);

    $jobs = $storageBatchOperationsClient->listJobs($request);

    foreach ($jobs as $job) {
        printf('Job name: %s' . PHP_EOL, $job->getName());
    }
}

REST APIs

JSON API

  1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Use cURL to call the JSON API with a LIST storage batch operations jobs request:

    curl -X GET \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://storagebatchoperations.googleapis.com/v1/projects/PROJECT_ID/locations/global/jobs"

    Where:

    PROJECT_ID is the ID or number of the project. For example, my-project.

Cancel a storage batch operations job

This section describes how to cancel a storage batch operations job within a project.

To get the permissions that you need to cancel a storage batch operations job, ask your administrator to grant you the Storage Admin (roles/storage.admin) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the bucket list, click the name of the bucket associated with the storage batch operation that you want to cancel.

  3. Click the Operations tab. This tab displays a list of batch operation jobs. You can only cancel jobs that are in progress.

  4. In the list of operations, select one or multiple jobs that you want to cancel, and then click Cancel.

Command line

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. In your development environment, run the gcloud storage batch-operations jobs cancel command.

    gcloud storage batch-operations jobs cancel JOB_ID

    Where:

    JOB_ID is the name of the storage batch operations job.

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

[](google::cloud::storagebatchoperations_v1::StorageBatchOperationsClient
       client,
   std::string const& project_id, std::string const& job_id) {
  auto const parent =
      std::string{"projects/"} + project_id + "/locations/global";
  auto const name = parent + "/jobs/" + job_id;
  auto response = client.CancelJob(name);
  if (!response) throw response.status();
  std::cout << "Cancelled job: " << name << "\n";
}

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\StorageBatchOperations\V1\Client\StorageBatchOperationsClient;
use Google\Cloud\StorageBatchOperations\V1\CancelJobRequest;

/**
 * Cancel a batch job.
 *
 * @param string $projectId Your Google Cloud project ID.
 *        (e.g. 'my-project-id')
 * @param string $jobId A unique identifier for this job.
 *        (e.g. '94d60cc1-2d95-41c5-b6e3-ff66cd3532d5')
 */
function cancel_job(string $projectId, string $jobId): void
{
    // Create a client.
    $storageBatchOperationsClient = new StorageBatchOperationsClient();

    $parent = $storageBatchOperationsClient->locationName($projectId, 'global');
    $formattedName = $parent . '/jobs/' . $jobId;

    $request = new CancelJobRequest([
        'name' => $formattedName,
    ]);

    $storageBatchOperationsClient->cancelJob($request);

    printf('Cancelled job: %s', $formattedName);
}

REST APIs

JSON API

  1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Use cURL to call the JSON API with a POST storage batch operations job cancel request:

    curl -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://storagebatchoperations.googleapis.com/v1/projects/PROJECT_ID/locations/global/jobs/JOB_ID:cancel"

    Where:

    • PROJECT_ID is the ID or number of the project. For example, my-project.

    • JOB_ID is the name of the storage batch operations job.

Delete a storage batch operations job

This section describes how to delete a storage batch operations job.

To get the permissions that you need to delete a storage batch operations job, ask your administrator to grant you the Storage Admin (roles/storage.admin) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the bucket list, click the name of the bucket associated with the storage batch operation that you want to delete.

  3. Click the Operations tab. This tab displays a list of batch operation jobs. You can delete only jobs that aren't running, such as jobs that succeeded, failed, or were canceled.

  4. In the list of operations, select one or multiple jobs that you want to delete, and then click Delete.

Command line

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. In your development environment, run the gcloud storage batch-operations jobs delete command.

    gcloud storage batch-operations jobs delete JOB_ID

    Where:

    JOB_ID is the name of the storage batch operations job.

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

[](google::cloud::storagebatchoperations_v1::StorageBatchOperationsClient
       client,
   std::string const& project_id, std::string const& job_id) {
  auto const parent =
      std::string{"projects/"} + project_id + "/locations/global";
  auto const name = parent + "/jobs/" + job_id;
  auto status = client.DeleteJob(name);
  if (!status.ok()) throw status;
  std::cout << "Deleted job: " << name << "\n";
}

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\StorageBatchOperations\V1\Client\StorageBatchOperationsClient;
use Google\Cloud\StorageBatchOperations\V1\DeleteJobRequest;

/**
 * Delete a batch job.
 *
 * @param string $projectId Your Google Cloud project ID.
 *        (e.g. 'my-project-id')
 * @param string $jobId A unique identifier for this job.
 *        (e.g. '94d60cc1-2d95-41c5-b6e3-ff66cd3532d5')
 */
function delete_job(string $projectId, string $jobId): void
{
    // Create a client.
    $storageBatchOperationsClient = new StorageBatchOperationsClient();

    $parent = $storageBatchOperationsClient->locationName($projectId, 'global');
    $formattedName = $parent . '/jobs/' . $jobId;

    $request = new DeleteJobRequest([
        'name' => $formattedName,
    ]);

    $storageBatchOperationsClient->deleteJob($request);

    printf('Deleted job: %s', $formattedName);
}

REST APIs

JSON API

  1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Use cURL to call the JSON API with a DELETE storage batch operations job request:

    curl -X DELETE \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://storagebatchoperations.googleapis.com/v1/projects/PROJECT_ID/locations/global/jobs/JOB_ID"

    Where:

    • PROJECT_ID is the ID or number of the project. For example, my-project.

    • JOB_ID is the name of the storage batch operations job.

Create a storage batch operations job using Storage Insights datasets

To create a storage batch operations job using Storage Insights datasets, complete the steps in the following sections.

To get the permissions that you need to create a storage batch operations job, ask your administrator to grant you the Storage Admin (roles/storage.admin) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.

Create a manifest using Storage Insights datasets

You can create the manifest for your storage batch operations job by extracting data from BigQuery. To do so, you'll need to query the linked dataset, export the resulting data as a CSV file, and save it to a Cloud Storage bucket. The storage batch operations job can then use this CSV file as its manifest.

Running the following SQL query in BigQuery on a Storage Insights dataset view exports a manifest of objects larger than 1 MiB whose names begin with Temp_Training:

  EXPORT DATA OPTIONS(
   uri='URI',
   format='CSV',
   overwrite=OVERWRITE_VALUE,
   field_delimiter=',') AS
  SELECT bucket, name, generation
  FROM DATASET_VIEW_NAME
  WHERE bucket = 'BUCKET_NAME'
  AND name LIKE 'Temp_Training%'
  AND size > 1024 * 1024
  AND snapshotTime = 'SNAPSHOT_TIME'
  

Where:

  • URI is the URI to the bucket that contains the manifest. For example, gs://bucket_name/path_to_csv_file/*.csv. When you use the *.csv wildcard, BigQuery exports the result to multiple CSV files.
  • OVERWRITE_VALUE is a boolean value. If set to true, the export operation overwrites existing files at the specified location.
  • DATASET_VIEW_NAME is the fully qualified name of the Storage Insights dataset view in PROJECT_ID.DATASET_ID.VIEW_NAME format. To find the name of your dataset, view the linked dataset.

    Where:

    • PROJECT_ID is the ID or number of the project. For example, my-project.
    • DATASET_ID is the name of the dataset. For example, objects-deletion-dataset.
    • VIEW_NAME is the name of the dataset view. For example, bucket_attributes_view.
  • BUCKET_NAME is the name of the bucket. For example, my-bucket.

  • SNAPSHOT_TIME is the snapshot time of the Storage Insights dataset view. For example, 2024-09-10T00:00:00Z.
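
After the export completes, you can optionally verify that BigQuery wrote the CSV files before using them as a manifest. For example, with the example URI from the preceding query:

  gcloud storage ls gs://bucket_name/path_to_csv_file/*.csv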

Create a storage batch operations job using a manifest file

To create a storage batch operations job to process objects contained in the manifest, complete the following steps:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, click the name of the bucket that contains the objects on which you want to perform batch operations.

    The Bucket details page opens, with the Objects tab selected.

  3. Click Create batch operations.
  4. In the Select operation pane, choose the operation type:
    • Manage object holds: Select Temporary hold or Event-based hold. For more information, see object holds.
    • Update object metadata: To add object metadata, do the following:
      • To add custom metadata, complete the following steps:
        1. In the Key field, enter a key name.
        2. In the Value field, enter a value for that key.
        3. Optional: Click + Add item to add more key-value pairs.
      • To update fixed-key metadata, complete the following steps:
        1. To expand the Update fixed-key metadata section, click the expander arrow.
        2. In the Select one or more metadata to update list, select metadata items to edit.
    • Update/Rotate encryption key: To use or update the encryption key for objects, do the following:
      1. In the Select a Cloud KMS key list, select a customer-managed encryption key (CMEK).
      2. Optional: Select Switch project to choose a key from another project, or select Enter key manually to enter the key details.
    • Delete objects: To delete objects, do the following:
      1. Check whether Object Versioning is enabled.
      2. If Object Versioning is enabled, choose one of the following deletion options:

        • Select Delete all versions of the objects to remove both live and noncurrent versions.
        • Select Permanently delete live versions to remove only the live version.

        If Object Versioning is not enabled, any objects selected for deletion are permanently deleted.

  5. Click Next.
  6. In the Name operation & specify objects pane, do the following:
    1. In the Name field, enter a name.
    2. Optional: In the Description field, enter a description.
    3. In the Specify objects section, select Upload lists of objects using manifest CSV files, and then do the following:

      1. Upload your manifest CSV file to a bucket. This file must contain headers for Bucket name, Object key, and Generation number.
      2. In the Select manifest file mode list, choose one of the following options:
        • If you select Select a manifest file from Cloud Storage, click Browse in the Select a manifest file from Cloud Storage field. In the Select object dialog that appears, navigate to your manifest CSV file, then click Select.
        • If you select Select multiple manifest files using wildcard, enter the file path in the Enter manifest file location using wildcard field. For example, bucket-name/folder/manifest_*.
  7. Click Create.

Command line

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. In your development environment, run the gcloud storage batch-operations jobs create command:

    gcloud storage batch-operations jobs create \
    JOB_ID \
    --bucket=SOURCE_BUCKET_NAME \
    --manifest-location=URI \
    JOB_TYPE_FLAG

    Where:

    • JOB_ID is the name of the storage batch operations job.

    • SOURCE_BUCKET_NAME is the bucket that contains one or more objects you want to process. For example, my-bucket.

    • URI is the URI to the bucket that contains the manifest. For example, gs://bucket_name/path_to_csv_file/*.csv. When you use the *.csv wildcard, BigQuery exports the result to multiple CSV files.

    • JOB_TYPE_FLAG is one of the following flags, depending on the job type.

      • --delete-object: Delete one or more objects.

      • --put-metadata: Update object metadata. Object metadata is stored as key-value pairs. Specify the key-value pair for the metadata you want to modify. You can specify one or more key-value pairs as a list. You can also provide object retention configurations using the --put-metadata flag.

      • --rewrite-object: Update the customer-managed encryption keys for one or more objects.

      • --put-object-event-based-hold: Enable event-based object holds.

      • --no-put-object-event-based-hold: Disable event-based object holds.

      • --put-object-temporary-hold: Enable temporary object holds.

      • --no-put-object-temporary-hold: Disable temporary object holds.

      • --clear-all-object-custom-contexts: Delete all existing object contexts.

        The following example shows how to create a job to clear all object contexts for objects listed in manifest.csv:

        gcloud storage batch-operations jobs create my-job \
        --bucket=my-bucket \
        --manifest-location=gs://my-bucket/manifest.csv \
        --clear-all-object-custom-contexts
      • --clear-object-custom-contexts: Remove contexts with specific keys. You can also update specific contexts along with removing keys by using both the --clear-object-custom-contexts flag and one of the following flags:

        • --update-object-custom-contexts: Provide a map of key-value pairs.

          The following example shows how to create a job to remove the context with key temp-id and update or insert context with key project-id and cost-center for all objects listed in manifest.csv:

          gcloud storage batch-operations jobs create my-job \
          --bucket=my-bucket \
          --manifest-location=gs://my-bucket/manifest.csv \
          --clear-object-custom-contexts=temp-id \
          --update-object-custom-contexts=project-id=project-A,cost-center=engineering
        • --update-object-custom-contexts-file: Provide the path to a JSON or YAML file with key-value pairs.

          The following example shows how to create a job to process objects defined in manifest.csv. The job does the following:

          • Removes all contexts with the temp-id key.

          • Updates existing contexts with the project-id and cost-center keys defined in the /tmp/context_updates.json file.

          gcloud storage batch-operations jobs create my-job \
          --bucket=my-bucket \
          --manifest-location=gs://my-bucket/manifest.csv \
          --clear-object-custom-contexts=temp-id \
          --update-object-custom-contexts-file=/tmp/context_updates.json

          Where /tmp/context_updates.json contains the following object contexts:

          {
          "project-id": {"value": "project-A"},
          "cost-center": {"value": "engineering"}
          }
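
    For example, the following command (using example names) creates a job that deletes the objects listed in the CSV manifest files exported by the BigQuery query from the previous section:

    gcloud storage batch-operations jobs create my-deletion-job \
    --bucket=my-bucket \
    --manifest-location=gs://my-bucket/path_to_csv_file/*.csv \
    --delete-object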

Integration with VPC Service Controls

You can provide an additional layer of security for storage batch operations resources by using VPC Service Controls. When you use VPC Service Controls, you add projects to service perimeters that protect resources and services from requests that originate from outside of the perimeter. To learn more about VPC Service Controls service perimeter details for storage batch operations, see Supported products and limitations.

Use Cloud Audit Logs for storage batch operations jobs

Storage batch operations jobs record transformations on Cloud Storage objects in Cloud Storage Cloud Audit Logs. You can use Cloud Audit Logs with Cloud Storage to track the object transformations that storage batch operations jobs perform. For information about enabling audit logs, see Enabling audit logs. In the audit log entry, the callUserAgent metadata field with the value StorageBatchOperations indicates a storage batch operations transformation.
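
For example, you can query for these entries with the Logging CLI. The following is a sketch rather than a definitive filter: it assumes that Data Access audit logs are enabled for the bucket and that the callUserAgent field appears under protoPayload.metadata in your log entries, so adjust the filter to match what you see in Cloud Logging:

gcloud logging read 'protoPayload.metadata.callUserAgent="StorageBatchOperations"' --limit=10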

Next steps