Migrate from Standard edition to Enterprise edition
To migrate data from a Firestore Standard edition database to a Firestore Enterprise edition database, we recommend using one of the following options:
- The import and export features. The data files produced by an export operation are compatible with both Standard edition and Enterprise edition.
- The firestore-to-firestore Dataflow template. The Dataflow service lets you build data pipelines, and the firestore-to-firestore template creates a batch pipeline between Firestore databases.
Import and export is the simpler option to run and has fewer configuration options.
The Dataflow template is more customizable. You can extend the template code to perform partial migrations or transform data. You can also control worker counts and size.
Both options support migrations across projects and regions.
Migrate data with export and import
To migrate data with export and import operations, see Export and import data. To move data to a database in another project, see Move data between projects.
Migrate data with the Dataflow template
Use the following instructions to migrate data with the firestore-to-firestore Dataflow template.
Before you begin
Before you start the data migration, make sure that point-in-time recovery (PITR) is enabled on the source database. The Dataflow job uses PITR to read data at a PITR timestamp. If PITR is disabled, the job fails if it runs longer than one hour.
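If PITR is not yet enabled, one way to turn it on is with the gcloud CLI. The following is a sketch; the --enable-pitr flag assumes a current gcloud release, and the placeholder values must be replaced with your own:

```
# Enable point-in-time recovery (PITR) on the source database.
gcloud firestore databases update \
  --database="SOURCE_DATABASE_ID" \
  --project="SOURCE_PROJECT_ID" \
  --enable-pitr
```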
Assign the required roles described in the next section.
Required roles
To migrate data from one database to another, assign the following roles. You might also be able to get the required permissions through custom roles or other predefined roles:
- To get the permissions that you need to create a new database and access Firestore data, ask your administrator to grant you the Cloud Datastore Owner (roles/datastore.owner) Identity and Access Management (IAM) role on your project.
- To give the Dataflow job read and write access to your Firestore databases, assign the Dataflow worker service account (for example, PROJECT_NUMBER-compute@developer.gserviceaccount.com) the Cloud Datastore User (roles/datastore.user) IAM role on your project. For more information about Dataflow security, see Dataflow security and permissions.
For more information about granting IAM roles, see Manage access to projects, folders, and organizations.
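As a sketch, the worker role can be granted with the gcloud CLI. The service account name below assumes the default Compute Engine service account; substitute your own project ID and number:

```
# Grant the Dataflow worker service account read/write access to Firestore.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/datastore.user"
```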
1. Create a new Firestore Enterprise edition database
To migrate data from a Standard edition database to an Enterprise edition database, you must first create the Enterprise edition destination database. See Create a database.
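One way to create the destination database is with the gcloud CLI. This is a sketch: the --edition flag is an assumption based on current gcloud releases that support Firestore Enterprise edition, and the placeholder values must be replaced with your own:

```
# Create a Firestore Enterprise edition database as the migration destination.
gcloud firestore databases create \
  --database="DESTINATION_DATABASE_ID" \
  --location="LOCATION" \
  --edition=enterprise \
  --project="DESTINATION_PROJECT_ID"
```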
2. Run the Dataflow firestore-to-firestore template
Configure and run your Dataflow job with the firestore-to-firestore template.
The template supports migrating the entire database or only specified collection groups.
Limitations
Consider the following limitations for the firestore-to-firestore Dataflow template:
- The source database must be a Standard edition database.
- The migration reads data at a specific read-time. We suggest enabling point-in-time recovery (PITR) in the source database. If PITR is not enabled, data expires after one hour and that might not be enough time for the data migration to complete. PITR extends data retention to seven days.
- Indexes are not migrated.
The Dataflow job doesn't migrate database configurations like time to live (TTL) policies, backups, PITR, and customer-managed encryption keys (CMEK).
You must configure these settings on the new database. To improve the speed of the data migration, wait until after the migration to configure TTL, backups, and PITR on the destination database.
The following examples demonstrate how to run the template using the Google Cloud CLI.
Migrate all data
To migrate all data, use the following command:
gcloud dataflow flex-template run "JOB_NAME" \
  --project "PROJECT" \
  --template-file-gcs-location gs://dataflow-templates-REGION_NAME/VERSION/flex/Cloud_Firestore_to_Firestore \
  --region REGION_NAME \
  --parameters "sourceProjectId=SOURCE_PROJECT_ID" \
  --parameters "sourceDatabaseId=SOURCE_DATABASE_ID" \
  --parameters "destinationProjectId=DESTINATION_PROJECT_ID" \
  --parameters "destinationDatabaseId=DESTINATION_DATABASE_ID" \
  --parameters "readTime=READ_TIME"
Replace the following:
- JOB_NAME: a name for the job.
- PROJECT: the ID of your Google Cloud project.
- REGION_NAME: the Google Cloud location where you want to run the Dataflow job. Use a location that is near your databases.
- VERSION: the version of the template that you want to use. You can use the following values:
  - latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like 2023-09-12-00_RC00, to use a specific version of the template, which can be found nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- SOURCE_PROJECT_ID: the ID of the source Google Cloud project that contains the Firestore Standard edition database.
- SOURCE_DATABASE_ID: the ID of the source Firestore database.
- DESTINATION_PROJECT_ID: the ID of the destination Google Cloud project for the new Firestore database.
- DESTINATION_DATABASE_ID: the ID of the destination Firestore database.
- READ_TIME: the timestamp to read data from the source database. Set to a timestamp in RFC 3339 format, at minute granularity, such as 2026-05-15T16:31:00.00Z. The earliest valid timestamp depends on your point-in-time recovery (PITR) settings. See Get earliest version time.
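If you want to use the current time as the read time, one way to generate an RFC 3339 timestamp at minute granularity is with GNU date. This is a sketch that assumes a Linux shell:

```shell
# Produce an RFC 3339 timestamp at minute granularity (seconds fixed to 00),
# suitable as the readTime parameter value.
READ_TIME="$(date -u +"%Y-%m-%dT%H:%M:00.00Z")"
echo "${READ_TIME}"
```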
Migrate specified collection groups
To migrate only certain collection groups, use the following command:
gcloud dataflow flex-template run "JOB_NAME" \
  --project "PROJECT" \
  --template-file-gcs-location gs://dataflow-templates-REGION_NAME/VERSION/flex/Cloud_Firestore_to_Firestore \
  --region REGION_NAME \
  --parameters "sourceProjectId=SOURCE_PROJECT_ID" \
  --parameters "sourceDatabaseId=SOURCE_DATABASE_ID" \
  --parameters "collectionGroupIds=COLLECTION_GROUP_IDS" \
  --parameters "destinationProjectId=DESTINATION_PROJECT_ID" \
  --parameters "destinationDatabaseId=DESTINATION_DATABASE_ID" \
  --parameters "readTime=READ_TIME"
Replace the following:
- JOB_NAME: a name for the job.
- PROJECT: the ID of your Google Cloud project.
- REGION_NAME: the Google Cloud location where you want to run the Dataflow job. Use a location that is near your databases.
- VERSION: the version of the template that you want to use. You can use the following values:
  - latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like 2023-09-12-00_RC00, to use a specific version of the template, which can be found nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- SOURCE_PROJECT_ID: the ID of the source Google Cloud project that contains the Firestore Standard edition database.
- SOURCE_DATABASE_ID: the ID of the source Firestore database.
- COLLECTION_GROUP_IDS: a comma-separated list of collection group IDs to migrate. Sub-collections are not included recursively. For example, if you specify the users collection group, then the migration won't include a messages sub-collection at /users/userid/messages unless you also specify the messages collection group.
- DESTINATION_PROJECT_ID: the ID of the destination Google Cloud project for the new Firestore database.
- DESTINATION_DATABASE_ID: the ID of the destination Firestore database.
- READ_TIME: the timestamp to read data from the source database. Set to a timestamp in RFC 3339 format, at minute granularity, such as 2026-05-15T16:31:00.00Z. The earliest valid timestamp depends on your point-in-time recovery (PITR) settings. See Get earliest version time.
3. Configure the database
The firestore-to-firestore job migrates only data.
Indexes and other database settings are not migrated. In addition to
migrating data, consider configuring the following on the new database:
Indexes: Firestore Enterprise edition databases don't strictly require indexes to run queries and don't create automatic indexes by default. See the following to create indexes for your queries:
- Firestore Enterprise edition index overview.
- Optimize query performance with indexes.
- You can use the Firebase CLI to export indexes and deploy them to the new database.
- Use Query Insights to identify queries you can optimize with an index.
TTL: Create TTL policies.
Backups: Set up backups.
PITR: Enable PITR.
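For the index step, one possible Firebase CLI flow is the following sketch. It assumes a recent firebase-tools release that supports the --database flag, and that your firebase.json points at firestore.indexes.json; adapt the placeholder project and database IDs:

```
# Export the index definitions from the source database.
firebase firestore:indexes --project SOURCE_PROJECT_ID \
  --database SOURCE_DATABASE_ID > firestore.indexes.json

# Deploy the exported index definitions to the destination project.
firebase deploy --only firestore:indexes --project DESTINATION_PROJECT_ID
```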
After configuring your database, you can continue testing your app with the new database. For a complete migration, update your applications to use the new database.
Troubleshooting
For large databases, the job might fail if it reads too much data at once. To address this, increase the maxNumWorkers value.
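One way to raise the worker ceiling is the gcloud --max-workers flag when launching the flex template, which corresponds to the maxNumWorkers pipeline option. This is a sketch; the worker count is illustrative and the placeholders must be replaced with your own values:

```
# Rerun the migration job with a higher maximum worker count.
gcloud dataflow flex-template run "JOB_NAME" \
  --project "PROJECT" \
  --template-file-gcs-location gs://dataflow-templates-REGION_NAME/VERSION/flex/Cloud_Firestore_to_Firestore \
  --region REGION_NAME \
  --max-workers 50 \
  --parameters "sourceProjectId=SOURCE_PROJECT_ID" \
  --parameters "sourceDatabaseId=SOURCE_DATABASE_ID" \
  --parameters "destinationProjectId=DESTINATION_PROJECT_ID" \
  --parameters "destinationDatabaseId=DESTINATION_DATABASE_ID" \
  --parameters "readTime=READ_TIME"
```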
What's next
- Learn about querying data with Pipeline operations.
- Learn how to optimize queries in Firestore Enterprise edition.
- Understand how an Enterprise edition database scales.