Service limits

This page provides details on the limits that apply to Google Security Operations. You can request API limit increases by contacting Cloud Customer Care.

Backstory API quotas

The Backstory API quotas are enforced at the server level using an interceptor. Each service that integrates with the Backstory API must specify the appropriate quota server key to enable quota enforcement.

The following table lists the enforced quotas. The abbreviations are:

  • QPS - Queries Per Second
  • QPM - Queries Per Minute
  • QPD - Queries Per Day
Group          API                         Quota
Search         ListAlerts                  1 QPS
Search         ListCuratedRuleDetections   10 QPM
Search         ListEvents                  1 QPS
Search         ListIocs                    1 QPS
Search         ListIocDetails              1 QPS
Search         ListAssets                  5 QPS
Search         SearchRawLogs               1 QPM
Partner        CreateCustomer              10 QPD
Tools          RetrieveSampleLogs          10 QPM
Tools          ValidateCBNParser           1 QPS
Tools          ListCbnParsers              1 QPS
Tools          SubmitCbnParser             1 QPM
Tools          GetCbnParser                1 QPS
Tools          ListCbnParserHistory        1 QPS
Tools          ListCbnErrors               1 QPS
Rules          GetRule                     1 QPS
Rules          GetDetection                1 QPS
Rules          GetRetrohunt                1 QPS
Rules          ListRules                   1 QPS
Rules          ListRuleVersions            1 QPS
Rules          ListDetections              10 QPM
Rules          ListRetrohunts              1 QPS
Rules          CreateRule                  1 QPS
Rules          CreateRuleVersion           1 QPS
Rules          DeleteRule                  1 QPS
Rules          EnableLiveRule              1 QPS
Rules          DisableLiveRule             1 QPS
Rules          EnableAlerting              1 QPS
Rules          DisableAlerting             1 QPS
Rules          RunRetrohunt                1 QPS
Rules          CancelRetrohunt             1 QPS
Rules          WaitRetrohunt               1 QPS
Rules          StreamTestRule              3 QPM
Rules          ArchiveRule                 1 QPS
Rules          UnarchiveRule               1 QPS
Health         GetError                    1 QPS
Health         ListErrors                  1 QPS
ReferenceList  CreateReferenceList         1 QPS
ReferenceList  GetReferenceList            1 QPS
ReferenceList  ListReferenceLists          1 QPS
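Most of these endpoints allow only 1 QPS, so clients benefit from throttling their own calls rather than relying on server-side rejection. A minimal client-side sketch (the `RateLimiter` helper is hypothetical, not part of the Backstory API):

```python
import time

class RateLimiter:
    """Spaces out calls so they stay within a queries-per-second budget."""

    def __init__(self, qps: float):
        self.min_interval = 1.0 / qps  # minimum seconds between calls
        self.last_call = 0.0

    def wait(self):
        """Sleep just long enough to honor the configured rate."""
        now = time.monotonic()
        remaining = self.min_interval - (now - self.last_call)
        if remaining > 0:
            time.sleep(remaining)
        self.last_call = time.monotonic()

# Example: ListRules is limited to 1 QPS, so wait about 1s between calls.
limiter = RateLimiter(qps=1)
for _ in range(2):
    limiter.wait()
    # ... call ListRules here ...
```

The same helper can be reused per endpoint group with the quota values from the table above.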

Chronicle API and Google Cloud project quotas

You can view quotas for the Chronicle API and your Google Cloud project using the Google Cloud console. See View quotas in the Google Cloud console for more information.

Dashboard data sources supported

Dashboards include the following data sources, each with its own query time limit and YARA-L prefix:

Data source        Query time limit   YARA-L prefix   Schema
Cases and alerts   365 days           case            Fields
Case history       365 days           case_history    Fields
Detections         365 days           detection       Fields
Entity graph       365 days           graph           Fields
Events             90 days            no prefix       Fields
Ingestion metrics  365 days           ingestion       Fields
IOCs               365 days           ioc             Fields
Playbooks          365 days           playbook        Fields
Rules              No time limit      rules           Fields
Rule sets          365 days           ruleset         Fields

Dashboard search limit

For search, the quota is per user per hour, but for dashboards it is per Google SecOps instance. For more information about dashboards, see Dashboards.

Data ingestion burst limits

Data ingestion burst limits restrict the amount of data a customer can send to Google SecOps. These limits ensure fairness and prevent ingestion spikes from a single customer from causing issues for other customers. You can request an adjustment to your burst limit by opening a support ticket. To apply burst limits, Google SecOps uses the following classifications based on ingestion volume:

Burst limit   Annual equivalent data at maximum per-second burst limit
20 MBps       600 TB
88 MBps       2.8 PB
350 MBps      11 PB
886 MBps      28 PB
2.6 GBps      82 PB
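The annual equivalents in the table are approximately the per-second rate sustained for a full year; the table rounds the results. A quick sanity check:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def annual_petabytes(rate_mbps: float) -> float:
    """Annual data volume in PB for a rate sustained in MB per second."""
    return rate_mbps * 1e6 * SECONDS_PER_YEAR / 1e15

# 88 MBps sustained for a year is roughly 2.8 PB, matching the table.
print(round(annual_petabytes(88), 1))  # → 2.8
```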

The following guidelines apply to burst limits:

  • When your burst limit is reached, configure ingestion sources to buffer additional data. Don't configure them to drop data.

    • For pull-based ingestion, such as Google Cloud and API feeds, systems automatically buffer ingestion and require no further configuration.
    • For push-based ingestion methods, such as forwarders, webhooks, and API ingestion, configure the systems to automatically resend data when the burst limit is reached. For systems like Bindplane and Cribl, set up buffering to handle data overflow efficiently.
  • Before you reach your burst limit, you can increase it.

  • To determine if you are near your burst limit, see View burst limit usage.
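For push-based senders, the buffer-and-resend behavior described above can be sketched as a queue drained with exponential backoff (the `send` callback is hypothetical; real deployments would use the buffering features of tools like Bindplane or Cribl):

```python
import time
from collections import deque

def flush_with_backoff(buffer: deque, send, max_retries: int = 5,
                       base_delay: float = 1.0):
    """Drain buffered batches; on throttling, back off instead of dropping."""
    while buffer:
        batch = buffer[0]
        for attempt in range(max_retries):
            if send(batch):           # send() returns False when throttled
                buffer.popleft()      # discard only after a confirmed send
                break
            # Exponential backoff between resend attempts, capped at 30s.
            time.sleep(min(base_delay * 2 ** attempt, 30))
        else:
            return False  # still throttled; keep remaining data buffered
    return True
```

The key property is that data is never dropped: batches stay in the buffer until a send is confirmed.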

Data table limits

  • Maximum number of data tables for a Google SecOps account: 1,000.

  • Only the CSV file type is supported for uploads.

  • The limits on the number of in statements that reference a reference list in a query also apply to in statements that reference a data table.

  • Maximum number of in statements in a query: 10.

  • Maximum number of in statements in a query for String and Number data type columns: 7.

  • Maximum number of in statements with regular expression operators: 4.

  • Maximum number of in statements with CIDR operators: 2.

  • Maximum columns per data table: 1,000.

  • Maximum rows per data table: 10 million.

  • Maximum number of rows you can delete from a data table at one time: 49.

  • Maximum aggregate data volume across all data tables in an account: 1 TB.

  • Maximum number of data table rows displayed in the web interface (text and table editor views): 10,000.

  • Maximum number of rows when uploading a file through the web interface: 10 million.

  • Maximum file size when uploading a file through the API for data table creation: 10 GB.

  • Placeholders aren't allowed in the setup section.

  • Unmapped data table columns with the data type set to string can only be joined with string fields of a UDM event or UDM entity.

  • For CIDR or regular expression matching, use only unmapped data table columns with the data type set to cidr or regex.

  • Data table lookups: Regular expression wildcards aren't supported and search terms are limited to 100 characters.
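A client-side pre-check against the upload limits above can catch problems before an upload fails. A sketch (limit values taken from this page; the helper name is illustrative):

```python
import csv
import os

MAX_ROWS = 10_000_000            # web interface upload row limit
MAX_COLUMNS = 1_000              # columns per data table
MAX_FILE_BYTES = 10 * 1024 ** 3  # 10 GB API upload limit

def precheck_data_table_csv(path: str) -> list[str]:
    """Return a list of limit violations found in a CSV before upload."""
    problems = []
    if os.path.getsize(path) > MAX_FILE_BYTES:
        problems.append("file exceeds 10 GB")
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader, [])
        if len(header) > MAX_COLUMNS:
            problems.append("more than 1,000 columns")
        rows = sum(1 for _ in reader)
        if rows > MAX_ROWS:
            problems.append("more than 10 million rows")
    return problems
```

An empty return value means the file passed all of the checks above; it doesn't guarantee the upload itself succeeds.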

Joins

  • Fetching all event samples for detections isn't supported when using data table joins with events.

  • Unlike entities and UDM events, data tables don't support placeholders. This means you can't simultaneously:

    • Apply one set of filters to a data table and join it with a UDM entity.

    • Apply a different set of filters to the same data table while joining it with another UDM placeholder.

    For example, consider a data table named dt with three columns (my_hostname, org, and my_email) and the following rule:

    events:
        $e1.principal.hostname = %dt.my_hostname
        %dt.org = "hr"

        $e2.principal.email = %dt.my_email
        %dt.org != "hr"

All filters on a data table are applied first, and then the filtered rows from the data table are joined with UDM. In this case, the contradictory filters (%dt.org ="hr" and %dt.org !="hr") on the dt table result in an empty data table, which is then joined with both e1 and e2.
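The filter-then-join order can be illustrated outside YARA-L. A small Python sketch of why the contradictory filters empty the table before any join happens (sample rows are invented for illustration):

```python
# Hypothetical rows of the data table `dt` from the example above.
dt = [
    {"my_hostname": "host-a", "org": "hr",  "my_email": "a@example.com"},
    {"my_hostname": "host-b", "org": "eng", "my_email": "b@example.com"},
]

# All data table filters are applied first, together...
filtered = [r for r in dt if r["org"] == "hr" and r["org"] != "hr"]

# ...so the join input is empty, and both $e1 and $e2 join against nothing.
print(filtered)  # → []
```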

Use data tables with rules

The following limitations apply to data tables when used with rules.

Run frequency

Real-time run frequency isn't supported for rules with data tables.

Output to data tables

  • any and all modifiers aren't supported for repeated field columns in data tables.

  • Array indexing isn't supported for repeated field columns in data tables.

  • You can only export outcome variables to a data table. You can't export event path or data table columns directly.

  • Column lists must include the primary key columns for data tables.

  • You can have a maximum of 20 outcomes.

  • If a data table doesn't exist, a new table is created with the default string data type for all columns, following the order specified.

  • Only one rule can write to a data table at a time. If a rule tries to write to a data table that another rule is already writing to, the rule compilation fails.

  • There's no guarantee that a producer rule adds rows to a data table before a consumer rule of that data table starts running.

  • A single rule is limited in the number of outcome rows it can produce: a maximum of 10,000 rows applies across the results, persisted data, and data tables.

  • If a row with the same primary key already exists in the data table, its non-primary-key columns are replaced with the new values.

Entity enrichment from data tables

  • You can apply only one enrichment operation (either override, append, or exclude) to a single entity graph variable.

  • Each enrichment operation can use only one data table.

  • You can define a maximum of two enrichment operations of any type in the setup section of a YARA-L rule.

In the following example, an override operation is applied to the entity graph variable $g1 and an append operation is applied to the entity graph variable $g2.

    setup:
    graph_override($g1.graph.entity.user.userid = %table1.myids)
    graph_append [$g2, %table1]

In the preceding example, the same data table (table1) is used to enrich different entity graphs. You can also use different data tables to enrich different entity graphs, as follows:

    setup:
    graph_override($g1.graph.entity.user.userid = %table1.myids)
    graph_append [$g2, %table2]

Use data tables with Search

The following limitations apply to data tables when used with Search.

  • You can't run search queries on data tables using the Chronicle API. Queries are only supported through the web interface.

  • A single query execution can output a maximum of 1 million rows to a data table or 1 GB, whichever limit comes first.

  • Search output to a data table skips event rows if they exceed 5 MB.

  • Entity enrichment is not supported with Search.

  • Data tables are not supported for customer-managed encryption keys (CMEK) users.

  • Writes are limited to 6 per minute per customer.

  • API support is not available for Search-related data table operations.

  • Statistics queries aren't supported with data table joins.

  • Data table joins are only supported with UDM events and with other data tables, and not with entities.

    Supported: %datatable1.column1 = %datatable2.column1
    Not supported: graph.entity.hostname = %sample.test

  • You can't include a match variable in the export section of a statistics query.

    For example, the following is not supported:

  match:
      principal.hostname
  export:
      %sample.write_row(
          row: principal.hostname
      )

Ingestion rate

When the data ingestion rate for a tenant reaches a certain threshold, Google Security Operations dynamically adjusts the ingestion rate to ensure availability for new data feeds. The ingestion volume and the tenant's usage history determine the threshold. For information on the volume of data that a single customer can ingest into Google SecOps, see Burst limits.

  • Rate limit of 15,000 queries per second (QPS).

  • The maximum size for a single log line is 1 MB.

Reference list limits

A reference list is a generic list of values that you can use to analyze your data. For more information, see Reference lists.

String lists

String lists have the following limits:

  • Maximum list size: 6 MB
  • Maximum length of any single line: 5,000 characters

Regular expression lists

Regular expression lists have the following size limits:

  • Maximum list size: 0.1 MB
  • Maximum number of lines: 100
  • Maximum length of each line: 5,000 characters

CIDR lists

CIDR lists have the following size limits:

  • Maximum list size: 0.1 MB
  • Maximum number of lines: 150
  • Maximum length of each line: 5,000 characters
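The three sets of limits can be checked locally before syncing a list. A sketch using the sizes above (assuming decimal megabytes and UTF-8 encoding, which this page doesn't specify):

```python
# (max_bytes, max_lines, max_line_chars); None means no line-count cap.
LIMITS = {
    "string": (6_000_000, None, 5_000),
    "regex":  (100_000,   100,  5_000),
    "cidr":   (100_000,   150,  5_000),
}

def check_reference_list(lines: list[str], list_type: str) -> list[str]:
    """Return violations of the documented reference list limits."""
    max_bytes, max_lines, max_chars = LIMITS[list_type]
    problems = []
    # Count each line plus its trailing newline toward total size.
    total = sum(len(line.encode("utf-8")) + 1 for line in lines)
    if total > max_bytes:
        problems.append("list too large")
    if max_lines is not None and len(lines) > max_lines:
        problems.append("too many lines")
    if any(len(line) > max_chars for line in lines):
        problems.append("line too long")
    return problems
```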

Rule limits

Google Security Operations has the following limitations with regard to rule detections:

  • Each rule version has a limit of 10,000 detections per day. This limit resets at midnight UTC.

    For example, a rule version might generate 9,900 detections by 3 PM UTC on January 1. If all of these detections are recorded for January 1, the rule version generates only 100 more detections that day. On January 2, it can generate 10,000 new detections.

  • If the rule version is updated, the limit resets and the new rule version can again generate 10,000 detections that same day.

    For example, if a rule version produces 9,900 detections by 3 PM UTC on January 1, and all of these detections have a detection time on January 1, it generates only 100 more detections for that day. If the rule version is updated at 4 PM on January 1, the updated version can generate 10,000 detections with a detection time on January 1 until the end of the day. On January 2, the rule version can generate another 10,000 new detections.

  • The Rules Dashboard can display up to 50 MB of detection data. If the total size of the detections exceeds this limit, the interface shows a message indicating that the data is incomplete. This means the system generated more detections than the interface can display, but the detections still exist and are not lost.

  • Running a retrohunt after updating a reference list doesn't reset the existing detection limit or generate new detections. If the existing detection limit has already been reached, no new detections are generated.

  • Retrohunts limitations:

    • A maximum of 3 retrohunt jobs can run concurrently for each Google SecOps instance or tenant.
    • The combined text size of all rules must not exceed 1 MB.
    • If you run multiple retrohunts in parallel, the system allocates resources from the same Google SecOps instance. This can lead to slower performance or delays in job completion.
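The per-rule-version daily cap described above behaves like a counter keyed by rule version and UTC day. A sketch of that bookkeeping (function and variable names are illustrative, not part of any Google SecOps API):

```python
from collections import Counter
from datetime import datetime, timezone

DAILY_CAP = 10_000  # detections per rule version per UTC day

counts = Counter()

def record_detection(rule_version: str, when: datetime) -> bool:
    """Count a detection; False means the daily cap already blocked it.

    Updating a rule creates a new rule_version, which starts a fresh
    counter, matching the reset behavior described above. The day key
    is computed in UTC because the limit resets at midnight UTC.
    """
    day = when.astimezone(timezone.utc).date()
    key = (rule_version, day)
    if counts[key] >= DAILY_CAP:
        return False
    counts[key] += 1
    return True
```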

Search limits

When conducting searches, the following factors can limit the number of results returned:

  • Maximum search results: 1 million events. When a search matches more than 1 million events, only the first 1 million results are shown.

  • Use search settings to specify a lower limit: By default, Google SecOps limits the number of events displayed to 30,000. You can change this limit to any value between 1 and 1 million from the search settings on the Results page.

  • Search results displayed are limited to 10,000: If your search returns more than 10,000 results, the console displays only the first 10,000. This limitation doesn't alter the total number of returned events.