This page provides details on the limits that apply to using Cloud Logging.
Logging quotas and usage limits
The following table lists quotas and limits that apply to the usage of Cloud Logging. In general, you can request an increase for a quota. However, limits can't be changed.
Some entries are listed as per-project values, but a note indicates that the entry also applies to billing accounts, folders, and organizations and isn't hierarchical. For example, if you have multiple Google Cloud projects in an organization, then you could configure up to 200 sinks for each Google Cloud project; for that same organization, you could also configure up to 200 sinks at the organization level.
| Category | Maximum value of limit or initial quota value | Notes |
|---|---|---|
| Size of a LogEntry | 256 KB | This limit is approximate and is based on internal data sizes, not the actual REST API request size. Cannot be increased. |
| Size of an audit log entry | 512 KiB | Cannot be increased. |
| Number of labels | 64 per LogEntry | Cannot be increased. |
| Length of a LogEntry label key | 512 B | Cloud Logging truncates oversized label keys when their associated log entry is written. Cannot be increased. |
| Length of a LogEntry label value | 64 KB | Cloud Logging truncates oversized label values when their associated log entry is written. Cannot be increased. |
| Length of a Logging query language query | 20,000 characters | Cannot be increased. |
| Query fanout | 200 buckets | This limit is the maximum number of buckets that might contain log entries for a resource. For more information, see Query returns an error. Cannot be increased. For performance reasons, minimize the number of log buckets and create your log buckets in one region. If you frequently need to query log entries that originate in different resources, then consider routing log entries to a few centralized log buckets. |
| Number of sinks | 200 per Google Cloud project | Can be increased to 4,000. This quota also applies to billing accounts, folders, and organizations and isn't hierarchical. |
| Length of a sink inclusion filter | 20,000 characters | Cannot be increased. |
| Length of a sink exclusion filter | 20,000 characters | Cannot be increased. |
| Number of exclusion filters | 50 per sink | Cannot be increased. If you need more than 50 exclusion filters, then refine the inclusion filter. |
| Number of log buckets | 100 per Google Cloud project | Can be increased to 2,500. This quota is the maximum number of buckets that might contain log entries for a resource and it includes buckets that are pending deletion. This quota also applies to billing accounts, folders, and organizations and isn't hierarchical. If you increase the maximum number of log buckets, then you might have a degraded user experience because the default log scope can't include all log buckets in a project. |
| Number of custom indexed fields | 20 per log bucket | Cannot be increased. Cloud Logging supports full-text indexing. Therefore, we don't recommend that you define custom indexing. |
| Number of log views | 30 per log bucket | Can be increased to 1,000 log views on a single log bucket. If you increase the maximum number of log views, then you might have a degraded user experience because the default log scope can't include all log buckets in a project. |
| Oldest timestamp that can be stored in log buckets | Determined by the retention period of the log bucket. | For a custom log bucket with the default retention period, this value is 30 days in the past. The Logging API accepts log entries with older timestamps and those log entries are routed to sink destinations. However, if log entries with older timestamps are routed to a log bucket, then they aren't stored. |
| Future timestamp that can be stored in log buckets | Up to 1 day in the future | The Logging API rejects entries with timestamps more than 1 day in the future and returns an INVALID_ARGUMENT error. Cannot be increased. |
| Number of log scopes per resource | 100 | Can be increased to 10,000. |
| Number of log views and projects included in a log scope | 100 | We recommend that a log scope lists log views instead of projects, when possible. Listing log views results in reduced query time and increased clarity as to which log entries are being queried. |
| Number of projects included in a log scope | 5 | We recommend that a log scope lists log views instead of projects, when possible. The log views can be in different projects. |
| Maximum number of analytics views per Google Cloud project | 100 | Cannot be increased. This feature is in Public Preview. |
| Per Google Cloud project, the maximum number of analytics views per region | 50 | Cannot be increased. This feature is in Public Preview. |
| Per Google Cloud project, the maximum number of regions that can store analytics views | 10 | Cannot be increased. This feature is in Public Preview. |
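Because Cloud Logging truncates oversized label keys and values silently rather than rejecting the entry, it can be useful to validate labels on the client before writing. The following sketch checks the label limits from the table above; the helper name and error messages are illustrative, not part of any client library:

```python
# Client-side check of LogEntry label limits before writing.
# Limits from the table above: 64 labels, 512-byte keys, 64 KB values.
MAX_LABELS = 64
MAX_KEY_BYTES = 512
MAX_VALUE_BYTES = 64 * 1024

def check_labels(labels: dict) -> list:
    """Return a list of problems that would cause truncation or drops."""
    problems = []
    if len(labels) > MAX_LABELS:
        problems.append(f"{len(labels)} labels exceeds the {MAX_LABELS}-label limit")
    for key, value in labels.items():
        if len(key.encode("utf-8")) > MAX_KEY_BYTES:
            problems.append(f"label key {key[:20]!r}... exceeds {MAX_KEY_BYTES} bytes")
        if len(value.encode("utf-8")) > MAX_VALUE_BYTES:
            problems.append(f"value for key {key[:20]!r}... exceeds {MAX_VALUE_BYTES} bytes")
    return problems

print(check_labels({"env": "prod"}))   # → []
print(check_labels({"k" * 600: "x"}))  # → one oversized-key problem
```

Byte lengths are measured on the UTF-8 encoding, since multi-byte characters count toward the byte limits.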
Logging API quotas and limits
The following limits apply to your usage of the Logging API. You can request changes to your Logging API quotas and limits; for instructions, see Request changes to Cloud Logging API quota on this page.
To view your API quotas, go to the API dashboard, select an API, and then select Quotas.
| Category | Maximum value of limit or initial quota value | Notes |
|---|---|---|
| Lifetime of API page tokens | 24 hours | Cannot be increased. |
| Number of open live-tailing sessions | 10 per Google Cloud project | Cannot be increased. This limit also applies to billing accounts, folders, and organizations and isn't hierarchical. |
| Number of live-tailing entries returned | 60,000 per minute | Cannot be increased. |
| Number of restricted fields | 20 per bucket | Cannot be increased. |
| Size of a restricted field | 800 B | Cannot be increased. |
| Size of an entries.write request | 10 MB | Cannot be increased. |
| Rate of entries.write requests, by region | 4.8 GB per minute, per Google Cloud project, in the regions asia-east1, asia-northeast1, asia-southeast1, asia-south1, europe-west1, europe-west2, europe-west3, europe-west4, us-central1, us-east4, us-west1; 300 MB per minute, per Google Cloud project, in all remaining regions | You can request a quota increase. This quota also applies to billing accounts, folders, and organizations and isn't hierarchical. For more information about these quotas, see Per-region ingestion quotas. Using exclusion filters doesn't reduce the rate of write requests because log entries are excluded after the Logging API receives them. |
| Number of entries.list requests | 60 per minute, per Google Cloud project | Cannot be increased. This limit also applies to billing accounts, folders, and organizations and isn't hierarchical. To query large volumes of logs, consider using BigQuery APIs. For transferring large volumes of logs, consider using a log sink or Copy log entries. |
| Number of different resource names in a single entries.write command | 1,000 | Cannot be increased. |
| Control requests per minute | 600 | Cannot be increased. This quota applies to everything included in the daily control-request quota, plus API requests for deleting logs and managing log-based metrics. |
| Control requests per day | 1,000 per Google Cloud project | Cannot be increased. This quota applies to API requests for creating and updating exclusions and sinks. |
| Number of Google Cloud projects or other resource names in a single entries.list request | 100 | Cannot be increased. |
| Number of concurrent copy operations | 1 per Google Cloud project | Cannot be increased. This limit also applies to billing accounts, folders, and organizations and isn't hierarchical. |
| Rate of exports to Pub/Sub topics | 60 GB per minute per Google Cloud project, folder, or organization where the sink is defined | You can file a support request to raise your quota. Increased quota is granted based on Pub/Sub availability. If the rate of exports exceeds the quota, then the error is recorded in a log entry. The summary field indicates a sink configuration error and lists the associated error code. |
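The 10 MB limit on an entries.write request applies to the whole batch, so long entry lists must be split before sending. A rough client-side sketch, assuming entries are JSON-serializable dicts and using serialized JSON length as a stand-in for the request's wire size (the real protobuf size differs, so leave headroom):

```python
import json

MAX_REQUEST_BYTES = 10 * 1024 * 1024  # entries.write request limit

def batch_entries(entries, max_bytes=MAX_REQUEST_BYTES):
    """Split entries into batches whose estimated serialized size stays under max_bytes."""
    batches, current, current_size = [], [], 0
    for entry in entries:
        size = len(json.dumps(entry).encode("utf-8"))
        if current and current_size + size > max_bytes:
            batches.append(current)
            current, current_size = [], 0
        current.append(entry)
        current_size += size
    if current:
        batches.append(current)
    return batches

# Example with a tiny cap to show the splitting behavior:
batches = batch_entries([{"msg": "a" * 50}, {"msg": "b" * 50}, {"msg": "c" * 50}],
                        max_bytes=120)
```

Each batch can then be passed to a single write call. Note that a single entry larger than the cap still forms its own (oversized) batch; such entries must be truncated or dropped by the caller.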
Per-region ingestion quotas
To improve isolation and protect regional resources from ingestion overload, Cloud Logging quotas restrict, on a per-Google Cloud project basis, the volume of data written to an ingestion region.
The following table shows the default quota for each region:
| Ingestion region | Default value |
|---|---|
| asia-east1, asia-northeast1, asia-southeast1, asia-south1, europe-west1, europe-west2, europe-west3, europe-west4, us-central1, us-east4, us-west1 | 4.8 GB per minute, per Google Cloud project |
| All remaining regions | 300 MB per minute, per Google Cloud project |
The default values for the regional quotas exceed the ingestion volumes of most users. However, if your project's ingestion volume from a region was close to or above the default quota for that region during the six or more months before the quotas were enforced, then your initial quota includes an automatic, one-time increase. Therefore, your quotas might be higher than the defaults in the preceding table. For information about reviewing quotas, see Review your Logging quotas.
If you exceed a regional quota, your write requests to the Cloud Logging API might be rejected with a "resource exhausted" error message. For recommendations to help you avoid exceeding quotas, see Manage and monitor your Logging quotas.
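A common client-side mitigation for "resource exhausted" rejections is retry with exponential backoff. A generic sketch follows; the exception type and backoff schedule are illustrative stand-ins, not the client library's built-in retry policy:

```python
import time

class ResourceExhausted(Exception):
    """Stand-in for the quota-exceeded error returned by the API."""

def write_with_backoff(write_fn, retries=5, base_delay=1.0):
    """Call write_fn, retrying on ResourceExhausted with exponential backoff."""
    for attempt in range(retries):
        try:
            return write_fn()
        except ResourceExhausted:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Backoff only smooths short bursts; if writes consistently exceed the regional quota, request a quota increase or reduce the ingestion volume instead.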
Review your Cloud Logging API quotas
To review your Cloud Logging API quotas, do the following:
- In the Google Cloud console, go to the Quotas & System Limits page. If you use the search bar to find this page, then select the result whose subheading is IAM & Admin.
- Filter the list of quotas for the Cloud Logging API service.
Request changes to Cloud Logging API quota
You can request higher or lower Logging API limits using the Google Cloud console. For more information, see View and manage quotas.
If you get the error Edit is not allowed for this quota, you can contact Support to request changes to the quota. Note also that billing must be enabled on the Google Cloud project before you can select the checkboxes.
Manage and monitor your Cloud Logging API quotas
To prevent service disruptions caused by exceeding quotas, you can do the following:
- Use automatic quota adjustment to monitor your quota usage and request increases for you. For more information, see Quota adjuster.
- Create alerts on quotas to notify you about usage. For more information, see Set up quota alerts.
Optimize usage of entries.list
The expected usage of entries.list is to search for matching logs. This method isn't intended for high-volume retrieval of log entries. If you're regularly exhausting your entries.list quota, then consider the following options:
- Ensure that you are using the Cloud Logging API effectively. For more information, see Optimize usage of the API.
- If you know in advance that the log entries you want to analyze exceed the entries.list quota, then configure a log sink to export your logs to a supported destination.
- To analyze log entries outside of Logging, you can retroactively copy log entries that already exist in Logging to Cloud Storage buckets. When you copy logs to a Cloud Storage bucket, you can share log entries with auditors outside of Logging, and run scripts in Cloud Storage.
- To aggregate and analyze your log entries within Logging, store your log entries in a log bucket, and then upgrade that log bucket to use Log Analytics. For information about these steps, see Configure log buckets. Log Analytics lets you query your log entries by using BigQuery-standard SQL.
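Because entries.list is capped at 60 requests per minute and the quota can't be raised, a client that pages through results may want to throttle itself rather than burn through the quota and get rejected. A minimal sketch of a fixed-rate limiter (illustrative, not part of the client library; the injectable clock and sleep are there for testability):

```python
import time

class RateLimiter:
    """Spaces out calls so at most `per_minute` happen per minute."""
    def __init__(self, per_minute=60, clock=time.monotonic, sleep=time.sleep):
        self.interval = 60.0 / per_minute  # seconds between allowed calls
        self.clock, self.sleep = clock, sleep
        self.next_allowed = self.clock()

    def wait(self):
        """Block until the next call is allowed."""
        now = self.clock()
        if now < self.next_allowed:
            self.sleep(self.next_allowed - now)
        self.next_allowed = max(now, self.next_allowed) + self.interval

# Usage sketch: call limiter.wait() before each entries.list page fetch.
limiter = RateLimiter(per_minute=60)
```

At 60 per minute this allows one request per second, which keeps a long pagination loop inside the quota.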
Log-based metrics
The following limits apply to your usage of user-defined log-based metrics. With the exception of the number of metric descriptors, these limits are fixed; you can't increase or decrease them.
| Category | Maximum value |
|---|---|
| Number of labels | 10 per metric |
| Length of label value | 1,024 B |
| Length of label description | 800 B |
| Length of the filter1 | 20,000 characters |
| Length of metric descriptors | 8,000 B |
| Number of metric descriptors | 500 per Google Cloud project2 |
| Number of active time series3 | 30,000 per metric |
| Number of histogram buckets | 200 per custom distribution metric |
| Data retention | See Cloud Monitoring: Data retention |
1 Each log-based metric contains a filter. When a log entry matches the filter, the log entry is counted. Filters are defined by using the Logging query language.
2 This limit also applies to billing accounts, folders, and organizations and isn't hierarchical.
3 A time series is active if you have written data points to it within the last 24 hours.
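The 30,000 active-time-series limit is driven by label cardinality: each distinct combination of label values is a separate time series. A quick way to bound the worst case is to multiply the number of possible values per label. This is a back-of-the-envelope sketch with made-up label counts; real cardinality is usually lower because not every combination occurs:

```python
import math

ACTIVE_SERIES_LIMIT = 30_000  # per user-defined log-based metric

def worst_case_series(label_value_counts):
    """Upper bound on time series: product of distinct values per label."""
    return math.prod(label_value_counts)

# e.g. labels for severity (5 values), service (40), and region (20):
print(worst_case_series([5, 40, 20]))    # → 4000, comfortably under the limit
print(worst_case_series([10, 100, 50]))  # → 50000, over the limit
```

High-cardinality labels such as user IDs or request IDs make this product explode, so they're poor choices for metric labels.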
Audit logging
The maximum sizes of audit logs are shown in the following table. These values can help you estimate the space you need in your sink destinations.
| Audit log type | Maximum size |
|---|---|
| Admin Activity | 512 KiB |
| Data Access | 512 KiB |
| System Event | 512 KiB |
| Policy Denied | 512 KiB |
Logs retention periods
The following Cloud Logging retention periods apply to log buckets, regardless of which types of logs are included in the bucket or whether they were copied from another location. The retention information is as follows:
| Bucket | Default retention period | Custom retention |
|---|---|---|
| _Required | 400 days | Not configurable |
| _Default | 30 days | Configurable |
| User-defined | 30 days | Configurable |
For the _Default and user-defined log buckets, you can configure Cloud Logging to retain your logs for between 1 day and 3,650 days. For information on setting retention rules, see Configure custom retention.
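Custom retention must fall between 1 and 3,650 days, and the _Required bucket's 400-day retention can't be changed. A small sketch validating a requested retention value before submitting a bucket update (the function name is illustrative):

```python
MIN_RETENTION_DAYS = 1
MAX_RETENTION_DAYS = 3650
NON_CONFIGURABLE = {"_Required"}  # _Required is fixed at 400 days

def validate_retention(bucket: str, days: int) -> int:
    """Return days if the request is valid; raise ValueError otherwise."""
    if bucket in NON_CONFIGURABLE:
        raise ValueError(f"retention for {bucket} is not configurable")
    if not MIN_RETENTION_DAYS <= days <= MAX_RETENTION_DAYS:
        raise ValueError(
            f"retention must be between {MIN_RETENTION_DAYS} and "
            f"{MAX_RETENTION_DAYS} days, got {days}")
    return days

print(validate_retention("_Default", 90))  # → 90
```

Validating locally gives a clearer error message than the API rejection and avoids a wasted control request against the daily quota.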
Pricing
For pricing information, see the Google Cloud Observability pricing page. If you route log data to other Google Cloud services, then see the following documents: