MCP Reference: chronicle.googleapis.com

A Model Context Protocol (MCP) server acts as a proxy between an external service and a Large Language Model (LLM) or AI application, providing the model with context, data, or capabilities. MCP servers connect AI applications to external systems such as databases and web services, translating their responses into a format that the AI application can understand.

Server Setup

You must enable MCP servers and set up authentication before use. For more information about using Google and Google Cloud remote MCP servers, see Google Cloud MCP servers overview.

Chronicle API provides tools for security analysts to investigate and mitigate threats.

Server Endpoints

An MCP service endpoint is the network address and communication interface (usually a URL) of the MCP server that an AI application (the Host for the MCP client) uses to establish a secure, standardized connection. It is the point of contact for the LLM to request context, call a tool, or access a resource. Google MCP endpoints can be global or regional.

The chronicle.googleapis.com MCP server uses regional endpoints:

  • For example, https://chronicle.northamerica-northeast2.rep.googleapis.com/mcp
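The regional endpoint is assembled from a fixed host pattern; a minimal sketch (the helper name is illustrative):

```python
def chronicle_mcp_endpoint(region: str) -> str:
    """Build the regional MCP endpoint URL for chronicle.googleapis.com."""
    return f"https://chronicle.{region}.rep.googleapis.com/mcp"

print(chronicle_mcp_endpoint("northamerica-northeast2"))
# → https://chronicle.northamerica-northeast2.rep.googleapis.com/mcp
```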

MCP Tools

An MCP tool is a function or executable capability that an MCP server exposes to an LLM or AI application so that it can perform an action in the real world.

The chronicle.googleapis.com MCP server has the following tools:

get_case

Retrieves a single case by its resource name.

Fetches all details for a specific case, including its properties (such as priority, stage, and assignee), as well as any associated tasks, tags, and products. This tool is fundamental for drilling down into a specific incident after identifying it through a list view or search.

Usage Guidelines: When using this tool, you should typically also call the list_case_alerts tool for the same case ID to fetch the associated alerts. You should then synthesize the information from both tools to provide a comprehensive response. The response should include case details, associated alerts, and a summary of comments if available.

Here are the details for Case ID [Case ID]:

Case Details:

Name: [Name]
Display Name: [Display Name]
Status: [Status]
Stage: [Stage]
Priority: [Priority]
Assignee: [Assignee]
Last Modifying User ID: [Last Modifying User ID]
Created Time: [Create Time]
Updated Time: [Update Time]
Alert Count: [Alert Count (calculate from list_case_alerts)]
Important: [Important]
Incident: [Incident]
Type: [Type]
Overflow Case: [Overflow Case]
Environment: [Environment]
Workflow Status: [Workflow Status]
Source: [Source]
Involved Suspicious Entity: [Involved Suspicious Entity]
Move Environment: [Move Environment]

Associated Alerts:

Alert ID: [Alert ID]
Display Name: [Alert Display Name]
Rule Generator: [Rule Generator]
Product: [Product]
Vendor: [Vendor]
Source System: [Source System (if available)]
Priority: [Priority]
Status: [Status]
Start Time: [Start Time (if available)]
End Time: [End Time (if available)]
Source URL: [Source URL (if available)]

Comments: There are [Number of comments] comments associated with this case, including notes about [Topic 1], [Topic 2], and entries like "[Comment Snippet]" by [Author].

Example: There are 23 comments associated with this case, including notes about Meta-Analysis Reports, Triage Reports, and entries like "OneMCP works!" by Soary Siem.
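The synthesis described in the usage guidelines can be sketched as follows, assuming a hypothetical `client` wrapper whose methods mirror the tool names (the actual invocation mechanism depends on the MCP host):

```python
def summarize_case(client, project_id, region, customer_id, case_id):
    """Fetch a case and its alerts, then merge them into one summary.

    `client` is a hypothetical wrapper whose methods mirror the MCP tool
    names; replace it with however your host invokes tools.
    """
    case = client.get_case(projectId=project_id, region=region,
                           customerId=customer_id, caseId=case_id)
    alerts = client.list_case_alerts(projectId=project_id, region=region,
                                     customerId=customer_id, caseId=case_id)
    return {
        "case": case,
        "alerts": alerts,
        # Alert Count is derived from list_case_alerts, not stored on the case
        "alert_count": len(alerts),
    }
```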

Workflow Integration:

  • Used when an analyst clicks on a case in a queue or dashboard to view its full details in an investigation UI.
  • Essential for automated playbooks that need to retrieve the current state of a case before taking action, such as enrichment or remediation.
  • Provides the necessary data to populate a case investigation screen, showing all relevant information in one place.
  • Can be used to verify the result of an update operation by fetching the case after the update has been applied.

Use Cases:

  • An analyst retrieves a case to begin an investigation and understand its context.
  • An automated system fetches a case to extract entities for further enrichment from other threat intelligence sources.
  • A manager views a case to review its progress, check the SLA status, and see the latest comments.
  • A reporting script fetches case details to generate a detailed incident report.

Example Usage:

  • get_case(projectId='123', region='us', customerId='abc', caseId='456')
  • get_case(projectId='123', region='us', customerId='abc', caseId='456', expand='tasks,tags')

Next Steps (using MCP-enabled tools):

  • Use 'list_case_alerts' to fetch the details of all alerts associated with this case.
  • Use 'list_case_comments' to see the full history of comments and actions taken on the case.
list_case_comments

Lists all case comments for a given case in Google SecOps.

Retrieves a paginated list of all comments associated with a specific SOAR case, allowing for a comprehensive overview of the investigation history. This tool is essential for understanding the timeline of a case, reviewing actions taken, and gathering context from analyst notes.

Workflow Integration:

  • Used to build a complete timeline of an investigation in a SOAR UI or report.
  • Essential for generating audit trails or summaries of case activity for compliance or review.
  • Enables analysts to programmatically search and filter through all comments to find relevant information, such as notes from a specific user or comments made during a certain time frame.
  • Provides the necessary context for automated playbooks to make decisions based on the history of a case.

Use Cases:

  • Generate a complete audit trail of all actions and notes for a specific case to understand the investigation process.
  • Find a specific comment by filtering based on the user who wrote it, its content, or other metadata.
  • Display a chronological history of comments on a case detail page in a custom security dashboard.
  • Automate the process of reviewing cases by searching for keywords in comments.

Filtering and Ordering:

  • The 'filter' parameter allows for precise searching within comments. You can filter on fields like 'user', 'comment' content, 'create_time', and more.
  • The 'order_by' parameter controls the sorting of the returned comments. You can sort by fields like 'create_time' or 'update_time' in ascending or descending order.

Example Usage:

  • list_case_comments(projectId='123', region='us', customerId='abc', caseId='456')
  • list_case_comments(projectId='123', region='us', customerId='abc', caseId='456', filter="user='user@example.com'", orderBy="update_time desc")

Next Steps (using MCP-enabled tools):

  • Iterate through the list of comments to extract key information or indicators.
  • Use 'create_case_comment' to add a new comment to the case based on your findings.
  • Use 'get_case_comment' with a comment's resource name to fetch its full details if needed.
list_cases

Lists all cases for a given Chronicle instance.

Retrieves a paginated list of all cases, allowing for a comprehensive overview of security incidents and investigations. This tool is essential for security operations, enabling analysts and managers to view, filter, and prioritize cases based on various criteria.

Workflow Integration:

  • Used to populate a case queue or dashboard in a security management UI, providing a real-time view of the incident landscape.
  • Essential for generating reports on case metrics, such as case volume, time to resolution, and analyst workload.
  • Enables automated systems to query for specific types of cases that may require automated enrichment or triage actions.
  • Provides the foundational data for building custom analytics and visualizations on top of Chronicle's case management system.

Use Cases:

  • An analyst lists all open cases assigned to them to prioritize their daily workload.
  • A SOC manager generates a report of all critical-priority cases created in the last week.
  • An automated playbook queries for all new cases related to a specific environment to begin an automated investigation.
  • Search for all cases with a specific tag to track a particular threat campaign.

Filtering and Ordering:

  • The 'filter' parameter allows for powerful, SQL-like queries to narrow down the list of cases. You can filter on fields like 'display_name', 'assignee', 'priority', 'stage', 'status', 'tags', 'products', 'environment', 'important', 'incident', 'description', 'CreateTime', 'UpdateTime', and more.
  • The Priority options are: ['PRIORITY_UNSPECIFIED', 'PRIORITY_INFO', 'PRIORITY_LOW', 'PRIORITY_MEDIUM', 'PRIORITY_HIGH', 'PRIORITY_CRITICAL'].
  • The Stage options are: ['Research', 'Improvement', 'Incident', 'Investigation', 'Assessment', 'Triage'].
  • The Status options are: ['OPENED', 'CLOSED'].
  • The 'CreateTime' and 'UpdateTime' fields are Unix timestamps in milliseconds and can be filtered using comparison operators (e.g., '>', '<', '>=', '<=').
  • The 'order_by' parameter controls the sorting of the returned cases. You can sort by fields like 'CreateTime', 'priority', or 'UpdateTime' in ascending or descending order.
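Because the CreateTime and UpdateTime filters take Unix timestamps in milliseconds, a small conversion helper is often useful; a sketch:

```python
from datetime import datetime, timezone

def to_unix_ms(dt: datetime) -> int:
    """Convert an aware datetime to the Unix-milliseconds form that the
    CreateTime/UpdateTime filter fields expect."""
    return int(dt.timestamp() * 1000)

# Filter for cases created after 2024-11-05 10:50 UTC:
cutoff = to_unix_ms(datetime(2024, 11, 5, 10, 50, tzinfo=timezone.utc))
case_filter = f"CreateTime > {cutoff}"
```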

Example Usage:

  • list_cases(projectId='123', region='us', customerId='abc', pageSize=10)
  • list_cases(projectId='123', region='us', customerId='abc', filter="priority='PRIORITY_CRITICAL'", orderBy="CreateTime desc")
  • list_cases(projectId='123', region='us', customerId='abc', filter="CreateTime > 1730801400000")

Next Steps (using MCP-enabled tools):

  • Iterate through the list of cases to perform bulk operations or analysis.
  • Use 'get_case' with a case's resource name to fetch its full details.
  • Use 'update_case' to modify the properties of a specific case, such as its assignee or stage.
  • Use 'list_case_comments' to retrieve the discussion and history for a particular case.
update_case

Updates an existing case in Google SecOps.

Modifies various properties of a specific case. Only the fields provided in the arguments will be updated. Note: It is not possible to change the status of a case to 'CLOSED' using this tool. This can only be done via the 'execute_bulk_close_case' tool.

Workflow Integration:

  • A core function for managing the lifecycle of a security case, used in both manual and automated workflows.
  • Integrates with UI actions like assigning a case, changing its status, or adding a description.
  • Essential for automated playbooks that need to update a case's status after performing an action, such as "Case moved to 'Remediation' after host was isolated."
  • Can be used to synchronize case status with external ticketing or project management systems.

Use Cases:

  • An analyst assigns a case to themselves or another team member.
  • A SOC manager escalates a case by changing its priority from "Medium" to "Critical".
  • A user adds a detailed description or updates the title of a case to better reflect the investigation's findings.
  • Add or modify tags and products associated with the case.

Example Usage:

  • update_case(projectId='123', region='us', customerId='abc', caseId='456', assignee='new_user@example.com', priority='PRIORITY_CRITICAL')
  • update_case(projectId='123', region='us', customerId='abc', caseId='789', stage='Investigation', description='Escalated for further investigation due to new IOCs.')
  • update_case(projectId='123', region='us', customerId='abc', caseId='101', important=True)

Next Steps (using MCP-enabled tools):

  • Use 'get_case' with the case's resource name to verify that the case has been updated correctly.
  • Use 'list_case_comments' to see if any comments were added as part of the update.
  • Use 'create_case_comment' to add a note explaining why the case was updated.
get_case_alert

Retrieves a single alert by its resource name.

Fetches all details for a specific alert within a case, including its properties (such as status, priority, and product), as well as optionally expanding to include related information like its SLA and involved entities. This tool is fundamental for drilling down into a specific alert after identifying it through a list view or search.

Workflow Integration:

  • Used when an analyst clicks on an alert within a case to view its full details in an investigation UI.
  • Essential for automated playbooks that need to retrieve the current state of an alert before taking action, such as enrichment or remediation.
  • Provides the necessary data to populate an alert investigation screen, showing all relevant information in one place.
  • Can be used to verify the result of an update operation by fetching the alert after the update has been applied.

Use Cases:

  • An analyst retrieves an alert to begin an investigation and understand its context, such as the rule that triggered it and the involved entities.
  • An automated system fetches an alert to extract entities (IPs, domains, hashes) for further enrichment from other threat intelligence sources.
  • A manager views an alert to review its details, check the SLA status, and see if a playbook has been run.
  • A reporting script fetches alert details to generate a detailed incident report.

Example Usage:

  • get_case_alert(projectId='123', region='us', customerId='abc', caseId='456', caseAlertId='789')
list_case_alerts

Lists all alerts within a case. This tool also provides alert_group_identifiers for each alert.

Workflow Integration:

  • Used when an analyst needs to see all alerts within a case, such as when investigating a specific case or reviewing the status of multiple alerts.
  • Essential for automated playbooks that need to check the status of multiple alerts before taking action.
  • Provides a comprehensive view of all alerts within a case, allowing for easy navigation and status monitoring.
  • Can be used to verify the result of an update operation by fetching the alert after the update has been applied.

Use Cases:

  • An analyst views all alerts within a case to see if any other alerts are firing for the same host or user.
  • A SOC manager reviews all alerts within a case to prioritize their investigation.
  • An automated playbook checks the status of multiple alerts before taking action.
  • A reporting script fetches all alerts within a case to generate a detailed incident report.

Example Usage:

  • list_case_alerts(projectId='123', region='us', customerId='abc', caseId='456')
  • list_case_alerts(projectId='123', region='us', customerId='abc', caseId='456', filter='status="OPEN"')
  • list_case_alerts(projectId='123', region='us', customerId='abc', caseId='456', orderBy='createTime desc')

Next Steps (using MCP-enabled tools):

  • Use 'get_case_alert' with the alert's resource name to retrieve its full details.
  • Use 'create_case_comment' to add a note to the parent case explaining why the alert status was changed.
  • Use 'update_case_alert' to change the status of an alert.
  • Use 'list_case_comments' to see if any comments were added as part of the update.
update_case_alert

Updates an existing case alert in Google SecOps.

Important Note:

  • This tool CANNOT be used to close the last open alert in a case. Attempting to change the status of the final open alert to 'CLOSE' will result in an error.
  • To close a case (which happens implicitly when its last alert is closed), use the execute_bulk_close_case tool, even when only a single case is involved.

Workflow Integration:

  • A core function for managing the lifecycle of an alert, used in both manual and automated workflows.
  • Integrates with UI actions for changing an alert's priority, status (e.g., open, closed), or closing it with details.
  • Essential for automated playbooks that need to programmatically update an alert's state, such as escalating priority based on new findings or closing an alert after remediation.
  • Can be used to synchronize alert status and details with external ticketing or project management systems.

Use Cases:

  • A playbook automatically updates an alert's status to 'CLOSE' and sets closure_details with a reason of 'NOT_MALICIOUS' after automated analysis.
  • An analyst manually changes the priority of an alert to 'HIGH' based on their assessment.
  • An analyst updates the status to 'OPEN' to indicate they are actively investigating.
  • A SOC manager updates the closure_details for a set of related alerts after a breach investigation is complete.

Example Usage:

  • update_case_alert(projectId='123', region='us', customerId='abc', caseId='456', caseAlertId='789', status='CLOSE', closureDetails={'reason': 'NOT_MALICIOUS', 'rootCause': 'Benign activity verified', 'comment': 'Verified benign.'})
  • update_case_alert(projectId='123', region='us', customerId='abc', caseId='456', caseAlertId='789', priority='HIGH')
  • update_case_alert(projectId='123', region='us', customerId='abc', caseId='456', caseAlertId='789', status='OPEN')

Next Steps (using MCP-enabled tools):

  • Use 'get_case_alert' with the alert's resource name to verify that the alert has been updated correctly.
  • Use 'create_case_comment' to add a note to the parent case explaining why the alert was changed.
  • Use 'list_case_alerts' to see the updated status of the alert in the context of other alerts in the case.
create_case_comment

Creates a new case comment in Google SecOps.

Adds a new, structured comment to an existing SOAR case, enabling analysts to log notes, updates, or decisions within an investigation. This is a critical function for maintaining a clear and auditable record of all activities related to a security case.

Workflow Integration:

  • A fundamental part of documenting an investigation and maintaining an audit trail for compliance and review.
  • Integrates seamlessly with UI actions, such as an "Add Comment" button on a case details page, allowing for manual entry of findings.
  • Allows for automated systems and playbooks to log their actions directly into a case, providing a unified timeline of both human and machine activities.
  • Can be used to trigger other automated workflows; for example, adding a comment with a specific tag could initiate a new playbook.

Use Cases:

  • An analyst adds a manual note about their findings after investigating an alert, such as "Confirmed phishing email from sender X."
  • An automated playbook adds a comment detailing an action it took, like "Successfully isolated host Y from the network."
  • A user attaches an artifact or file to the case with a descriptive comment, which can be done by providing attachment details within the comment.
  • A manager adds a comment to assign the case to a different analyst or to provide guidance on the next steps.

Example Usage:

  • create_case_comment(projectId='123', region='us', customerId='abc', caseId='456', comment='Investigated suspicious login from external IP. The IP has been added to the blocklist.')

Next Steps (using MCP-enabled tools):

  • Use 'list_case_comments' to see the newly created comment in the case's timeline along with other comments.
  • Use 'get_case_comment' with the returned resource name to retrieve its full details at a later time.
  • Use 'update_case' to change the status or other properties of the case based on the new comment.
execute_bulk_close_case

Closes one or more cases in bulk. This is the only tool that can change the status of a case to 'CLOSED'.

This tool allows for the efficient closure of multiple cases at once, which is useful for resolving incidents that have been fully investigated or for cleaning up old cases. It can also be used to close a single case.

Workflow Integration:

  • Used in automated playbooks to close cases after a successful remediation workflow.
  • Enables SOC managers or senior analysts to perform bulk cleanup of resolved or irrelevant cases from a queue.
  • Can be integrated into custom scripts for case management tasks, such as automatically closing cases that have been inactive for a certain period.

Use Cases:

  • A playbook automatically closes a set of related cases after the root cause has been addressed and all associated alerts have been triaged.
  • An analyst closes a single case after completing their investigation and documenting the findings.
  • A SOC manager selects multiple resolved cases from a dashboard and closes them in a single action.

Note: The ability to update custom fields via dynamic_parameters is intentionally omitted from this bulk operation, as the Google SecOps UI advises that custom fields are not updated during bulk closure and should be updated on a per-case basis.

Example Usage:

  • execute_bulk_close_case(projectId='123', region='us', customerId='abc', casesIds=[456], closeReason='NOT_MALICIOUS', rootCause='False positive identified', closeComment='Closing due to false positive.')
  • execute_bulk_close_case(projectId='123', region='us', customerId='abc', casesIds=[101, 102, 103], closeReason='MALICIOUS', rootCause='Phishing campaign identified and blocked.')

Next Steps (using MCP-enabled tools):

  • Use 'list_cases' with a filter for 'status="CLOSED"' to verify that the cases have been closed.
  • Use 'get_case' for one of the closed cases to check that the close reason, root cause, and comment have been correctly applied.
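The verification step can be sketched as a loop over the closed IDs, assuming a hypothetical `client` wrapper around get_case and a `status` field on the response:

```python
def verify_closed(client, case_ids, **scope):
    """Return the subset of case_ids that do NOT report status CLOSED.

    `client` is a hypothetical wrapper around the get_case tool; `scope`
    carries projectId/region/customerId. The `status` field name follows
    the Status options documented for list_cases.
    """
    still_open = []
    for case_id in case_ids:
        case = client.get_case(caseId=str(case_id), **scope)
        if case.get("status") != "CLOSED":
            still_open.append(case_id)
    return still_open
```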
execute_manual_action

This is the default tool to use when you are asked to perform an action for which there is no straightforward, built-in tool. The system is built with a wide array of integrations, and each integration exposes its own set of custom actions.

When a user asks you to perform an action that you don't immediately recognize, do not say that you can't do it. Instead, you should first query the available actions from the integrations to determine if the requested action is possible. To do this, use the list_integrations and list_integration_actions tools to discover available capabilities. If you find a relevant action, you can then run it using this execute_manual_action tool.

Important Note: Do not assume any of the values from the examples provided in this documentation. You should use the available MCP tools (like list_cases, list_case_alerts, list_integrations) to fetch the required IDs and identifiers if they are not provided by the user. If the necessary information cannot be found with other tools, you should ask the user to provide it.

Executes a specific action from a SOAR integration on a given case or alert.

This is a key tool for taking manual or automated response actions, such as blocking an IP, isolating a host, or enriching an entity with threat intelligence. It allows users to trigger capabilities from third-party tools directly within the Chronicle SOAR environment.

Workflow Integration:

  • A core component of both manual and automated response workflows in Chronicle SOAR.
  • Integrates with UI elements that allow an analyst to manually run an action on a case, alert, or entity.
  • Essential for playbooks that need to execute actions from third-party tools (e.g., EDR, firewall, threat intelligence platforms).
  • Enables the creation of custom response workflows by chaining together different actions to automate complex processes.

Use Cases:

  • An analyst manually runs a 'block_ip' action from a firewall integration on a malicious IP address found in a case.
  • A playbook automatically executes an 'isolate_host' action from an EDR integration when a critical malware alert is received.
  • A user runs a 'get_whois' action from a threat intelligence integration to enrich a suspicious domain entity.
  • An automated triage process executes a 'create_ticket' action to open a ticket in an external system like Jira or ServiceNow.

Important Note: Special Handling for Script-Based Actions

When executing actions from integrations (e.g. Siemplify or SiemplifyUtilities), the parameters should be structured in a specific way:

  1. actionProvider should be "Scripts". Do not use the integration name (e.g., "SiemplifyUtilities") as the provider.
  2. actionName should be prefixed with the integration name. The format is IntegrationName_ActionName. Example: For the "Ping" action in "SiemplifyUtilities", the actionName is "SiemplifyUtilities_Ping".
  3. The properties argument is required and should contain the following keys:
  • ScriptName: The full name of the script, which is the same as the prefixed actionName. Example: "SiemplifyUtilities_Ping"
  • IntegrationInstance: The unique identifier (GUID) for the integration instance. This should be retrieved by first calling list_integrations to find the integration ID, and then calling list_integration_instances with that ID to get the instance GUID. Example: "ec7ade21-27c1-458a-a1a5-417c4b56cb13"
  • ScriptParametersEntityFields: A JSON string representing the parameters for the script itself. If the action takes no parameters (like Ping), this should be an empty JSON object represented as a string: "{}". Example for Ping: "{}". Example for an action needing a comment: "{\"Comment\":\"My new comment\"}"
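The three rules above can be sketched as a small builder; using `json.dumps` guarantees the ScriptParametersEntityFields string is correctly escaped:

```python
import json

def script_action_args(integration, action, instance_guid, script_params=None):
    """Assemble the actionProvider/actionName/properties trio for a
    script-based action, per the three rules above."""
    action_name = f"{integration}_{action}"          # rule 2: prefixed name
    return {
        "actionProvider": "Scripts",                 # rule 1: always "Scripts"
        "actionName": action_name,
        "properties": {
            "ScriptName": action_name,               # same as actionName
            "IntegrationInstance": instance_guid,    # GUID from list_integration_instances
            # rule 3: parameters travel as a JSON *string*, "{}" when empty
            "ScriptParametersEntityFields": json.dumps(script_params or {}),
        },
    }

args = script_action_args("SiemplifyUtilities", "Ping",
                          "ec7ade21-27c1-458a-a1a5-417c4b56cb13")
```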

Parameter Gathering Workflow:

Before executing an action, you should ask the user if they can provide the required identifiers (case_id, alert_group_identifiers, IntegrationInstance GUID, etc.). If they cannot, you should use the following tools to find them.

1. How to get case_id:

  • Use the list_cases tool to find the ID of the target case. You can filter by display name, priority, status, and other fields to locate the correct one.

2. How to get alert_group_identifiers:

  • Use the list_case_alerts tool with the caseId from the previous step. The response will contain a list of alerts, each with an alertGroupIdentifiers field.

3. How to get IntegrationInstance for script-based actions:

The IntegrationInstance GUID is required in the properties dictionary for script-based actions (where actionProvider is 'Scripts'). To get this GUID:

  1. Call list_integrations filtering by Identifier (e.g., filter='Identifier="SiemplifyUtilities"') to find the integration.
  2. Extract the integration ID from the end of the name field in the result (e.g., 117a4d71-f60a-4a66-a8e0-f2e23a492b40).
  3. Call list_integration_instances using this integration ID as the integrationId parameter.
  4. Extract the instance GUID from the end of the name field of the desired instance in the list_integration_instances response (e.g., ec7ade21-27c1-4a58-a1a5-417c4b56cb13) and use this for the IntegrationInstance value.

4. Other Parameters:

  • For other parameters like actionProvider, actionName, properties, targetEntities, and scope, you may need to ask the user for the correct values if they are not available from other tools.
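Steps 2 and 4 of the GUID workflow above both take the identifier from the tail of a `name` field; a minimal sketch (the resource-name shape shown is an assumption):

```python
def id_from_resource_name(name: str) -> str:
    """Resource names end with the identifier after the final slash; both
    the integration ID (step 2) and the instance GUID (step 4) are
    extracted the same way."""
    return name.rsplit("/", 1)[-1]

guid = id_from_resource_name(
    "instances/ec7ade21-27c1-458a-a1a5-417c4b56cb13"  # hypothetical shape
)
```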

Example Usage:

  • execute_manual_action(projectId='123', region='us', customerId='abc', caseId=456, actionProvider='MyFirewallIntegration', actionName='block_ip', targetEntities=[{'identifier': '198.51.100.10', 'entity_type': 'IP'}], isPredefinedScope=True)
  • execute_manual_action(projectId='123', region='us', customerId='abc', caseId=456, actionProvider='MyTicketingSystem', actionName='create_ticket', properties={'summary': 'Suspicious activity detected on host X', 'priority': 'High'}, isPredefinedScope=False)
  • execute_manual_action(projectId='123', region='us', customerId='abc', caseId=4, actionProvider='Scripts', actionName='Siemplify_Case Comment', target_entities=[{'Identifier': 'VICTOR', 'EntityType': 'USERUNIQNAME'}], properties={'ScriptName': 'Siemplify_Case Comment', 'ScriptParametersEntityFields': '{\"Comment\":\"A new comment\"}', 'IntegrationInstance': '1cc25d02-4f1b-4575-9884-cdc06cb0384e'}, alertGroupIdentifiers=['Remote Failed loginmb3gaK8tSe1/yLj6eavhOmBZ4NsyC7c0Wf2WYku0sz8=_d2be7ac9-75d9-48df-831e-0a9794264cd6'], isPredefinedScope=False)
  • execute_manual_action(projectId='123', region='us', customerId='abc', caseId=4, actionProvider='Scripts', actionName='SiemplifyUtilities_Ping', properties={'ScriptName': 'SiemplifyUtilities_Ping', 'IntegrationInstance': 'ec7ade21-27c1-458a-a1a5-417c4b56cb13', 'ScriptParametersEntityFields': '{}'}, scope='All entities', alertGroupIdentifiers=['Remote Failed loginmb3gaK8tSe1/yLj6eavhOmBZ4NsyC7c0Wf2WYku0sz8=_d2be7ac9-75d9-48df-831e-0a9794264cd6'], isPredefinedScope=True)

Next Steps (using MCP-enabled tools):

  • Use 'get_action_result_by_id' with the returned result ID to check the status and get the full details of an asynchronous action.
  • Use 'list_case_comments' to see if the action added any comments to the case timeline.
  • Use 'create_case_comment' to manually add a note about the action that was taken.
get_connector_event

Retrieves a specific connector event associated with a case alert in Chronicle SIEM.

Provides detailed information about a single connector event, including its raw data.

Workflow Integration:

  • Used to drill down into a specific connector event from a list of events within a case alert.
  • Enables other systems to get the current state of a connector event before taking action.

Use Cases:

  • An analyst clicks on a connector event in the SOAR UI to view its full details.
  • An automated playbook fetches a connector event to extract specific indicators of compromise (IoCs).

Important Note:

  • The connector_event_id, case_id, and case_alert_id arguments should be the integer IDs of the respective entities.
  • If you have a non-integer identifier (e.g., a GUID or event identifier), use list_connector_events to get the integer IDs first.
  • Then use get_connector_event with the integer IDs.
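The GUID-to-integer resolution above can be sketched as a scan over list_connector_events, assuming a hypothetical `client` wrapper and `id`/`identifier` field names on each event:

```python
def find_connector_event_id(client, case_id, case_alert_id, event_identifier,
                            **scope):
    """Resolve a non-integer event identifier (e.g., a GUID) to the
    integer ID that get_connector_event expects.

    `client` is a hypothetical wrapper around list_connector_events; the
    `id` and `identifier` field names on each event are assumptions.
    """
    events = client.list_connector_events(caseId=case_id,
                                          caseAlertId=case_alert_id, **scope)
    for event in events:
        if event.get("identifier") == event_identifier:
            return event["id"]
    return None  # not found; fall back to asking the user
```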

Example Usage:

  • get_connector_event(projectId='123', region='us', customerId='abc', caseId='456', caseAlertId='789', connectorEventId='101112')
  • get_connector_event(projectId='123', region='us', customerId='abc', caseId='456', caseAlertId='789', connectorEventId='101112', expandEventJsonData=True)

Next Steps (using MCP-enabled tools):

  • Use 'list_connector_events' to see other connector events in the same case alert.
  • Suggest enabling 'expandEventJsonData' to get the full event details.
list_connector_events

Lists all connector events for a given case alert in Chronicle SIEM.

Retrieves a paginated list of all connector events associated with a specific SOAR case alert, allowing for a comprehensive overview of the events related to an investigation.

Workflow Integration:

  • Used to populate a list of connector events in the SOAR UI for a given case alert.
  • Essential for automated playbooks that need to iterate through all events in a case alert.
  • Enables an analyst to quickly see all related events when starting an investigation.

Use Cases:

  • Display all connector events on a case alert detail page.
  • A playbook iterates through all events to check for specific indicators.
  • Generate a report summarizing all events associated with a case alert.

Example Usage:

  • list_connector_events(projectId='123', region='us', customerId='abc', caseId='456', caseAlertId='789')
  • list_connector_events(projectId='123', region='us', customerId='abc', caseId='456', caseAlertId='789', expandEventJsonData=True)

Next Steps (using MCP-enabled tools):

  • Iterate through the list to get details on individual events using 'get_connector_event', potentially also with expandEventJsonData=true.
  • Suggest enabling 'expandEventJsonData' to get the full event details.
  • If 'eventJsonData' was expanded, parse the JSON content to extract specific fields like hostnames, user IDs, process names, hashes and others for further analysis or enrichment.
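Parsing an expanded eventJsonData payload might look like the following sketch; the field paths shown are illustrative, not a fixed schema:

```python
import json

def extract_fields(event_json_data: str, paths) -> dict:
    """Pull selected dotted-path fields out of an expanded eventJsonData
    payload, skipping paths that are absent."""
    data = json.loads(event_json_data)
    out = {}
    for path in paths:
        node = data
        for key in path.split("."):
            if not isinstance(node, dict) or key not in node:
                node = None
                break
            node = node[key]
        if node is not None:
            out[path] = node
    return out

raw = '{"principal": {"hostname": "host-1", "user": {"userid": "alice"}}}'
print(extract_fields(raw, ["principal.hostname", "principal.user.userid"]))
# → {'principal.hostname': 'host-1', 'principal.user.userid': 'alice'}
```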
create_data_table

Create a new data table in Chronicle SIEM by calling the CreateDataTable API.

Creates a structured data table that can be referenced in detection rules. The agent is responsible for defining the table schema via the column_info argument.

Agent Responsibilities:

  1. Construct column_info: The agent should provide the complete column_info list. Each element in the list is an object (dictionary) defining a column, and should match the Chronicle API's DataTableColumnInfo structure:
  • columnIndex (Integer, starting from 0)
  • originalColumn (String, the name of the column)
  • columnType (String, one of "STRING", "REGEX", "CIDR", "NUMBER"; mutually exclusive with mappedColumnPath)
  • mappedColumnPath (String, the UDM field path if mapping to an entity; mutually exclusive with columnType)
  • keyColumn (Optional boolean)
  • repeatedValues (Optional boolean)
  2. Example for a single item in column_info: {"columnIndex": 0, "originalColumn": "ip", "columnType": "CIDR"} or {"columnIndex": 1, "originalColumn": "user_agent", "mappedColumnPath": "network.http.user_agent"}
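Building column_info entries can be sketched with a helper that enforces the columnType/mappedColumnPath mutual exclusivity described above (make_column is a hypothetical name, not part of the API):

```python
def make_column(index, name, column_type=None, mapped_path=None,
                key_column=False, repeated=False):
    """Build one DataTableColumnInfo entry for create_data_table.

    columnType and mappedColumnPath are mutually exclusive, so exactly
    one of column_type or mapped_path must be supplied.
    """
    if (column_type is None) == (mapped_path is None):
        raise ValueError("provide exactly one of column_type or mapped_path")
    col = {"columnIndex": index, "originalColumn": name}
    if column_type is not None:
        col["columnType"] = column_type
    else:
        col["mappedColumnPath"] = mapped_path
    if key_column:
        col["keyColumn"] = True
    if repeated:
        col["repeatedValues"] = True
    return col

# Mirrors the two examples above:
column_info = [
    make_column(0, "ip", column_type="CIDR"),
    make_column(1, "user_agent", mapped_path="network.http.user_agent"),
]
print(column_info)
```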

Workflow Integration:

  • Use to store structured security data that enhances detection rule logic.
  • Essential for maintaining context data used in threat detection and investigation.
  • Enables dynamic rule behavior based on curated datasets without hardcoding values.
  • Supports threat intelligence integration by storing IOC lists and contextual data.

Use Cases:

  • Create tables of known malicious IP addresses with severity and description context.
  • Store asset inventories with criticality ratings for enhanced alert prioritization.
  • Maintain user role mappings for behavior-based detection rules.
  • Build threat intelligence feeds with IOC metadata for detection enhancement.
  • Create exception lists for reducing false positives in detection rules.

Column Types:

  • STRING: Text values
  • REGEX: Regular expression patterns
  • CIDR: IP address ranges (e.g., "192.168.1.0/24")
  • NUMBER: Numeric values

Example Usage:

  • create_data_table(name="suspicious_ips", description="Known suspicious IP addresses with context", columnInfo=[{"columnIndex": 0, "originalColumn": "ip_address", "columnType": "CIDR"}, {"columnIndex": 1, "originalColumn": "severity", "columnType": "STRING"}, {"columnIndex": 2, "originalColumn": "description", "columnType": "STRING"}, {"columnIndex": 3, "originalColumn": "is_active", "columnType": "STRING"}], projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Add rows using add_rows_to_data_table.
  • Reference the table in detection rules using the table name (e.g., data_table.suspicious_ips).
  • List table contents using list_data_table_rows to verify data integrity.
  • Update or remove specific rows using data table row management tools.
  • Use the table data to enhance detection logic and reduce false positives.
add_rows_to_data_table

Add rows to an existing data table in Chronicle SIEM.

Adds new rows to an existing data table, expanding the dataset available to detection rules. This is useful for maintaining and growing threat intelligence, asset inventories, and other contextual data used in security detection. The agent can often infer the correct row structure from a natural language prompt.

Agent Responsibilities:

  1. Format Rows: The rows argument should be a list of objects, where each object has a "values" key. The value for "values" should be a list of strings representing the data for each column in that row. The agent should intelligently convert user input into this structure. For example, if a user provides [['a', 'b'], ['c', 'd']], the agent should transform it to [{"values": ["a", "b"]}, {"values": ["c", "d"]}] before calling the tool.
  2. Handle Bad Request (400) Errors: If the API returns a 400 Bad Request error, the provided rows data is usually invalid: an incorrect number of values in an inner list (not matching the table's column count), data of the wrong type (e.g., "abc" for a CIDR column), or a mismatched schema. If this error occurs, the agent should inform the user, explain the likely cause, and provide a clear example of the correct rows format based on the target table's schema (using list_data_tables with view="DATA_TABLE_VIEW_FULL" to find the schema if it is unknown).
  3. Respect API Limits: The request should contain a maximum of 1000 rows, and the total size of the row data should be less than 4MB. The agent should handle batching for larger datasets.
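The row-wrapping and batching responsibilities above can be sketched in Python. This is a minimal sketch: to_row_objects and batch_rows are hypothetical helper names, and the 4MB payload limit is not checked here.

```python
def to_row_objects(raw_rows):
    """Wrap plain value lists into the {'values': [...]} objects the API expects."""
    return [{"values": [str(v) for v in row]} for row in raw_rows]

def batch_rows(rows, batch_size=1000):
    """Split rows into batches that respect the 1000-rows-per-request limit."""
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

# A user-supplied list of lists becomes the API's row-object form:
rows = to_row_objects([["10.0.0.1", "High"], ["10.0.0.2", "Low"]])
for batch in batch_rows(rows):
    # add_rows_to_data_table(tableName="suspicious_ips", rows=batch, ...)
    pass
print(rows)
```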

Workflow Integration:

  • Use to continuously update data tables with new threat intelligence or asset information.
  • Essential for maintaining current and accurate contextual data for detection rules.
  • Enables automated data table updates as part of threat intelligence feeds.
  • Supports operational workflows that add new entities or update security contexts.

Use Cases:

  • Add newly discovered malicious IP addresses to threat intelligence tables.
  • Update asset inventories with new systems or changed criticality ratings.
  • Expand user role mappings as organizational structure changes.
  • Add new IOCs from threat intelligence feeds to detection enhancement tables.
  • Populate exception lists to reduce false positives in detection rules.

Data Consistency:

  • Ensure new rows match the table's column schema and data types.
  • Validate data quality to maintain detection rule effectiveness.
  • Consider deduplication to avoid redundant entries in the table.

Example Usage:

  • add_rows_to_data_table(tableName="suspicious_ips", rows=[{"values": ["172.16.0.1", "Low", "Unusual outbound connection", "true"]}, {"values": ["192.168.2.200", "Critical", "Data exfiltration attempt", "true"]}], projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Verify the rows were added correctly using list_data_table_rows.
  • Test detection rules that reference the updated table to ensure they work as expected.
  • Monitor detection rule performance to assess the impact of the new data.
  • Consider setting up automated processes to regularly update the table.
  • Document the data sources and update procedures for operational teams.
list_data_tables

List data tables in Chronicle SIEM.

Retrieves a list of data tables available in the Chronicle SIEM instance. This is useful for discovering available tables, auditing their configuration, and managing security context data.

Agent Responsibilities:

  • Parse the JSON response to extract the list from the dataTables key.
  • Handle the nextPageToken for pagination to retrieve subsequent pages if they exist.
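The pagination responsibility can be sketched as a loop over nextPageToken; call_tool stands in for the actual MCP list_data_tables invocation, and the response keys follow the description above:

```python
def fetch_all_data_tables(call_tool):
    """Collect every data table across pages by following nextPageToken."""
    tables, token = [], None
    while True:
        response = call_tool(pageToken=token) if token else call_tool()
        tables.extend(response.get("dataTables", []))
        token = response.get("nextPageToken")
        if not token:
            return tables

# A fake two-page backend to show the loop terminating:
pages = {None: {"dataTables": [{"name": "t1"}], "nextPageToken": "p2"},
         "p2": {"dataTables": [{"name": "t2"}]}}
print(fetch_all_data_tables(lambda pageToken=None: pages[pageToken]))
```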

Workflow Integration:

  • Use to verify data table contents after creation or updates.
  • Essential for auditing data quality and consistency in security context tables.
  • Helps understand available data when developing or troubleshooting detection rules.
  • Supports data governance by providing visibility into managed security datasets.

Use Cases:

  • Review threat intelligence data before creating detection rules.
  • Verify that asset inventory data is current and accurate.
  • Audit user role mappings for consistency and completeness.
  • Troubleshoot detection rule issues by examining referenced table data.
  • Generate reports on security context data for compliance or operational reviews.

Example Usage:

  • list_data_tables(projectId="my-project", customerId="my-customer", region="us", pageSize=50)
  • list_data_tables(projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Use list_data_table_rows to inspect the contents of a specific table.
  • Use create_data_table to add new tables.
  • Use the nextPageToken to fetch more tables if available.
  • Add more rows using add_rows_to_data_table if the table needs additional data.
  • Delete specific rows using delete_data_table_rows if outdated or incorrect data is found.
  • Reference the table data in detection rules to enhance security monitoring.
  • Export the data for analysis or integration with other security tools.
  • Set up regular reviews to maintain data quality and relevance.
list_data_table_rows

List rows in a data table in Chronicle SIEM.

Retrieves and displays the contents of a data table, showing all rows and their data. This is useful for reviewing table contents and verifying data integrity.

Workflow Integration:

  • Use to verify data table contents after creation or updates.
  • Essential for auditing data quality and consistency in security context tables.
  • Helps understand available data when developing or troubleshooting detection rules.

Use Cases:

  • Review threat intelligence data before creating detection rules.
  • Verify that asset inventory data is current and accurate.
  • Audit user role mappings for consistency and completeness.

Example Usage:

  • list_data_table_rows(tableName="suspicious_ips", projectId="my-project", customerId="my-customer", region="us")

Next Steps:

  • Add more rows using add_rows_to_data_table.
  • Delete rows using delete_data_table_row.
delete_data_table_row

Delete a specific row from a data table in Chronicle SIEM.

Removes a single row from a data table based on its row ID. This action cannot be undone. This is useful for maintaining data quality by removing outdated, incorrect, or no-longer-relevant entries from tables used in detection rules. To delete multiple rows, this tool should be called for each row ID.

Agent Responsibilities:

  1. Row ID Lookup: If the row ID is not provided, the agent should use the list_data_table_rows tool to find the row_id for the specific row to delete.
  2. Handle 'Not Found' Errors (Idempotency): This tool WILL return an error if the specified row_id does not exist (e.g., a 404 Not Found error). The agent should intercept this specific error and treat it as a SUCCESS. The desired state (the row being absent) is met. The agent should report to the user that the row was not found.
  3. Handle Other Errors: If the deletion fails for any other reason (e.g., permission denied, invalid table name), the agent should return a clear error message to the user.
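The idempotent-delete handling above can be sketched as follows; NotFoundError and fake_delete are stand-ins for the real 404 error and the delete_data_table_row call:

```python
class NotFoundError(Exception):
    """Stand-in for the 404 the API raises when the row does not exist."""

def delete_row_idempotent(delete_call, row_id):
    """Treat a 404 from the delete call as success: the row is absent either way."""
    try:
        delete_call(row_id)
        return "deleted"
    except NotFoundError:
        return "already absent"  # report to the user, but do not fail

def fake_delete(row_id):
    # Pretend only row_12345 exists in the table.
    if row_id != "row_12345":
        raise NotFoundError(row_id)

print(delete_row_idempotent(fake_delete, "row_12345"))  # deleted
print(delete_row_idempotent(fake_delete, "row_99999"))  # already absent
```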

Workflow Integration:

  • Use to maintain data quality by removing obsolete or incorrect entries.
  • Essential for keeping threat intelligence and context data current and accurate.
  • Supports data lifecycle management for security-relevant datasets.
  • Enables correction of data entry errors or removal of false positive triggers.

Use Cases:

  • Remove IP addresses that are no longer considered suspicious.
  • Delete outdated asset inventory entries for decommissioned systems.
  • Remove user role mappings for employees who have left the organization.
  • Clean up threat intelligence data that has been invalidated or superseded.
  • Remove exception list entries that are no longer needed.

Safety Considerations:

  • Ensure row IDs are correct before deletion as this operation cannot be undone.
  • Consider the impact on existing detection rules that reference the deleted data.
  • Coordinate deletions with detection rule updates if necessary.
  • Maintain backups or logs of deleted data for audit purposes.

Example Usage:

  • delete_data_table_row(tableName="suspicious_ips", rowId="row_12345", projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Verify the deletions using list_data_table_rows to confirm rows were removed.
  • Test detection rules that reference the table to ensure they still work correctly.
  • Add replacement data using add_rows_to_data_table if new entries are needed.
  • Document the reason for deletions for audit and operational tracking.
  • Review and update any documentation that references the deleted data.
generate_threat_detection_opportunity

Generates a Threat Detection Opportunity (TDO) for a given threat, which can be a GTI campaign or a new threat described by the user from any external source.

It returns a Threat Detection Opportunity (TDO): a structured description of the threat containing MITRE information, observed contextual IOCs (atomics), and the procedures used by the attacker, along with a list of log types.

Workflow Integration:

  • This is typically the FIRST tool an agent should call for a user-supplied threat for detection engineering workflows.
  • The generated Threat Detection Opportunity (TDO) provides all the necessary information for subsequent tools to generate synthetic logs for a TDO, then evaluate existing rule coverage for those logs, and finally to create a new YL2 rule if coverage is insufficient.
  • Use the Threat Detection Opportunity (TDO) as input for subsequent tools that generate coverage analysis or new YL2 rules.

Security Note: The output Threat Detection Opportunity (TDO) is generated from user-supplied input via an LLM. It should be treated as untrusted. When using the TDO as input for subsequent tools, especially those generating or modifying security artifacts like YL2 rules, ensure there is a strict validation process or a human-in-the-loop review to prevent potential denial of service or security blind spots caused by malicious inputs.

Use Cases:

  • Determine rule coverage for a threat.
  • Generate a new YL2 rule for a threat.
  • Generate synthetic logs and enriched UDM events.
generate_synthetic_events

Generates synthetic events (both raw logs and enriched UDM) for a given Threat Detection Opportunity (TDO).

This tool leverages an LLM to create high-fidelity, realistic security log chains that simulate the threat described in the TDO.

Example: For a TDO describing "Lateral movement via WinRM," the tool might generate a chain of logs starting with wsmprovhost.exe spawning powershell.exe, which then executes an encoded command to download a second-stage payload. Each log in the chain will share a common hostname and username, with PIDs and PPIDs correctly linked to show the execution flow.

Workflow Integration:

  • This tool is typically called after generate_threat_detection_opportunity.
  • The generated synthetic events serve as the ground truth "malicious activity" for evaluating rule coverage.
  • Provides the necessary data for the evaluate_rule_coverage tool to determine if existing rules would have detected the threat.

Use Cases:

  • Simulate Attacker Behavior: Generate realistic log sequences based on specific TTPs (e.g., process injection, credential dumping, lateral movement) to understand how they appear in different log sources.
  • Verify Detection Coverage: Use the synthetic logs to test existing YARA-L rules or other detection logic against a wide variety of threat scenarios without needing manual lab reproduction.
  • Detection Strategy Validation: Ensure that detection strategies (like "Registry modification" or "Command line patterns") are correctly captured in the resulting logs.
  • Data Exploration: Provide analysts with concrete examples of what a specific campaign's activity might look like in their environment's log format (e.g., SentinelOne, CrowdStrike).

Example Usage:

  • generate_synthetic_events(projectId='my-project', region='us', customerId='customer-uuid', threatDetectionOpportunity=tdo)
evaluate_rule_coverage

Evaluates rule coverage for a given set of synthetic UDM events by checking if any existing managed rules match them.

This tool is essential for determining if a threat scenario, represented by synthetic UDM events, is already covered by existing detection content. It runs the provided UDM events against the active rule set and returns any matches, helping analysts identify coverage gaps or confirm protection.

Workflow Integration:

  • This tool is typically called AFTER generating synthetic logs for a given Threat Detection Opportunity (TDO).
  • The results of this tool inform the decision to create a new YARA-L rule. If coverage is sufficient, no further action may be needed; if coverage is absent or weak, rule generation tools should be used next.
  • Provides the necessary validation for automated detection engineering pipelines to prove coverage before and after rule deployment.

Use Cases:

  • Verify if an existing rule set detects a newly described threat or TTP.
  • Identify which specific rules are triggered by a set of synthetic attack events.
  • Validate the efficacy of a new rule draft by comparing its coverage against synthetic data.
  • Generate a coverage report that maps threat scenarios to existing detections.

Example Usage:

  • evaluate_rule_coverage(projectId='my-project', region='us', customerId='my-instance', udmsJson=[ '{"metadata": {"event_timestamp": "2023-10-27T10:00:00Z"}, "principal": {"user": {"userid": "bob"}}}' ])
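Since udmsJson takes a list of JSON strings rather than objects, a small serialization helper (to_udms_json is an illustrative name) avoids passing malformed input:

```python
import json

def to_udms_json(udm_events):
    """Serialize UDM event dicts into the list of JSON strings udmsJson expects."""
    return [json.dumps(event) for event in udm_events]

# Mirrors the example usage above:
udms = to_udms_json([{"metadata": {"event_timestamp": "2023-10-27T10:00:00Z"},
                      "principal": {"user": {"userid": "bob"}}}])
print(udms[0])
```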
generate_rules

Generates one or more YARA-L (YL2) rules based on the provided Threat Detection Opportunity (TDO).

Creates draft detection rules and initial metadata (name, description, MITRE ATT&CK mapping) from a structured threat description. This tool is essential for closing coverage gaps when an emerging threat is identified but not adequately detected by existing rules.

Workflow Integration:

  • This tool is typically called AFTER generate_threat_detection_opportunity and if a subsequent coverage analysis identifies a gap.
  • The resulting rules can be validated against synthetic UDM events if provided in the request.
  • Generated rules are intended to be reviewed by a detection engineer before deployment.

Use Cases:

  • Generate a new YARA-L rule for a provided Threat Detection Opportunity (TDO).
  • Create detection logic for a specific TTP (Tactics, Techniques, and Procedures) identified in threat intelligence.

Example Rule:

rule suspicious_powershell_execution {
  meta:
    description = "Detects suspicious powershell execution with encoded command line arguments"
    mitre_attack_tactic = "Execution"
    mitre_attack_technique = "Command and Scripting Interpreter: PowerShell"
  events:
    $e.metadata.event_type = "PROCESS_LAUNCH"
    $e.target.process.command_line = /powershell.*(-e|-enc|-encodedcommand).*/i
  condition:
    $e
}

Example Usage:

  • generate_rules(projectId='my-project', customerId='my-customer', region='us', threatDetectionOpportunity=my_tdo)
summarize_entity

Look up an entity (IP, domain, hash, user, etc.) in Chronicle SIEM for enrichment.

Provides a comprehensive summary of an entity's activity based on historical log data within Chronicle over a specified time period. This tool queries Chronicle SIEM's SummarizeEntitiesFromQuery API. Chronicle automatically attempts to detect the entity type from the UDM query provided.

Agent Responsibilities:

  1. Construct UDM Query: The agent should create a valid UDM query string for the query argument. This query should filter for the specific entity instance. See example UDM queries below.
  2. Provide Time Range: The agent should provide the start_time and end_time arguments as ISO 8601 formatted strings (e.g., YYYY-MM-DDTHH:MM:SSZ).

UDM Query Examples for Common Entity Types:

  • IP Address: principal.ip = "IP_VALUE" OR target.ip = "IP_VALUE"
  • Domain: target.hostname = "DOMAIN_VALUE"
  • User: principal.user.userid = "USER_VALUE" OR target.user.userid = "USER_VALUE"
  • Email: principal.user.email_addresses = /EMAIL_VALUE/ OR target.user.email_addresses = /EMAIL_VALUE/
  • SHA256 Hash: target.file.sha256 = "SHA256_VALUE"
  • MD5 Hash: target.file.md5 = "MD5_VALUE"
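The entity-type templates above can be captured in a small lookup helper (UDM_TEMPLATES and build_udm_query are illustrative names, not part of the tool):

```python
# Templates mirror the UDM query examples above; all use exact string matches.
UDM_TEMPLATES = {
    "IP": 'principal.ip = "{v}" OR target.ip = "{v}"',
    "DOMAIN": 'target.hostname = "{v}"',
    "USER": 'principal.user.userid = "{v}" OR target.user.userid = "{v}"',
    "SHA256": 'target.file.sha256 = "{v}"',
    "MD5": 'target.file.md5 = "{v}"',
}

def build_udm_query(entity_type, value):
    """Render a UDM query for summarize_entity from an entity type and value."""
    return UDM_TEMPLATES[entity_type].format(v=value)

print(build_udm_query("IP", "192.0.2.10"))
```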

Limitations:

  • This tool only calls SummarizeEntitiesFromQuery. It does not perform follow-up calls to get detailed alerts, prevalence, etc.

Workflow Integration:

  • Use this tool after identifying key entities (IPs, domains, users, hashes) from any source (e.g., an alert, a SOAR case, threat intelligence report, cloud posture finding).
  • Provides historical context and activity summary for an entity directly from SIEM logs.
  • Complements information available in other security platforms (SOAR, EDR, Cloud Security) by offering a log-centric perspective.

Use Cases:

  • Quickly understand the context and prevalence of indicators (e.g., '192.168.1.1', 'evil.com', 'user@example.com', 'hashvalue') by examining SIEM log data.
  • Reveal historical context, broader relationships, or activity patterns potentially missed by other tools.
  • Enrich entities identified in alerts, cases, or reports with SIEM-derived context.

Example Usage:

  • summarize_entity(query='principal.ip = "IP_VALUE"', startTime="2025-10-20T10:00:00Z", endTime="2025-10-22T10:00:00Z", projectId="my-project", customerId="my-customer", region="us")
  • summarize_entity(query='target.hostname = "DOMAIN_VALUE"', startTime="2025-09-22T00:00:00Z", endTime="2025-09-29T00:00:00Z", projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Analyze the summary for suspicious patterns or relationships.
  • If more detailed event logs are needed, use a tool to search SIEM events (like udm_search) targeting this entity's value.
  • Correlate findings with data from other security tools (e.g., EDR IoAs, network alerts, cloud posture findings, user risk scores) via their respective MCP tools.
  • Document findings in a relevant case management or ticketing system using an appropriate MCP tool.
get_involved_entity

Retrieves a specific involved entity associated with a case alert in Chronicle SIEM.

Provides detailed information about a single involved entity.

Workflow Integration:

  • Used to drill down into a specific entity from a list of entities within a case alert.
  • Enables other systems to get the current state of an entity before taking action.

Use Cases:

  • An analyst clicks on an entity in the SOAR UI to view its full details.
  • An automated playbook fetches an entity to extract specific indicators of compromise (IoCs).

Involved Entity Details:

Name: [Name]
ID: [ID]
Type: [Type]
Suspicious: [Suspicious]
Internal: [Internal]
Threat Source: [Threat Source]
Operating System: [Operating System]
Network Title: [Network Title]
Network Priority: [Network Priority]
Attacker: [Attacker]
Pivot: [Pivot]
Environment: [Environment]
Manually Created: [Manually Created]
Additional Properties: [Additional Properties]
Source System URI: [Source System URI]
Enriched: [Enriched]
Artifact: [Artifact]
Vulnerable: [Vulnerable]
Entity URI: [Entity URI]
Fields: [Fields]
Alert Identifier: [Alert Identifier]
Case ID: [Case ID]
Identifier: [Identifier]

Example Usage:

  • get_involved_entity(projectId='123', region='us', customerId='abc', caseId='456', caseAlertId='789', involvedEntityId='101112')

Next Steps (using MCP-enabled tools):

  • Use 'list_involved_entities' to see other entities in the same case alert.
list_involved_entities

Lists all involved entities for a given case alert in Chronicle SIEM.

Retrieves a paginated list of all entities associated with a specific SOAR case alert.

Workflow Integration:

  • Used to populate a list of entities in the SOAR UI for a given case alert.
  • Enables an analyst to quickly see all related entities when starting an investigation.

Use Cases:

  • Display all involved entities on a case alert detail page.
  • A playbook iterates through all entities to check for specific indicators.

Example Usage:

  • list_involved_entities(projectId='123', region='us', customerId='abc', caseId='1', caseAlertId='456')
search_entity

Search for entities within the SOAR platform.

Identifies the entity type and retrieves relevant data associated with a specified indicator. This tool is useful for finding entities matching specific attributes.

Workflow Integration:

  • Use to find entities based on a specific indicator string.
  • Essential for locating entities to perform further actions on.

Use Cases:

  • Find an entity by its IP address or domain name.
  • Check if an indicator exists in the system.

Example Usage:

  • search_entity(projectId="my-project", customerId="my-customer", region="us", indicator="1.2.3.4")

Next Steps:

  • Use get_entity (if available) or other entity tools to get more details.
translate_udm_query

Translates a natural language question or statement into a Chronicle UDM search query.

Use this tool to convert a human-readable search description into the UDM query syntax required by the udm_search tool. This tool calls the Chronicle API AiService.TranslateUDMQuery.

Agent Responsibilities:

  • Provide the natural language text to be translated in the 'text' argument.
  • Parse the raw JSON response.
  • Extract the UDM query string from the 'query' field.
  • Extract any suggested time range from the 'time_range' field (which contains 'startTime' and 'endTime').
  • Check the 'message' field for any warnings or errors from the translation service.
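Parsing the translation response can be sketched as follows (a sketch only: parse_translation is a hypothetical helper, and the field names follow the response description above):

```python
def parse_translation(response):
    """Extract the query, time range, and warnings from a TranslateUDMQuery response."""
    query = response.get("query")
    time_range = response.get("time_range") or {}
    message = response.get("message")
    if not query:
        # A missing query means translation failed; surface the service message.
        raise ValueError(f"translation failed: {message or 'no query returned'}")
    return query, time_range.get("startTime"), time_range.get("endTime"), message

resp = {"query": 'principal.ip = "192.0.2.10"',
        "time_range": {"startTime": "2025-10-21T00:00:00Z",
                       "endTime": "2025-10-22T00:00:00Z"}}
print(parse_translation(resp))
```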

Example Usage:

  • translate_udm_query(text="Show me all network traffic from IP 192.0.2.10 last Tuesday", projectId="my-project", customerId="my-customer", region="us")
  • translate_udm_query(text="Find events for user 'testuser'", projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Use the output 'query' and 'time_range' as inputs to the udm_search tool to execute the search.
  • If the 'query' is null or the 'message' indicates issues, refine the natural language 'text' and try again.
import_logs

Ingest raw logs directly into Chronicle SIEM.

Allows ingestion of raw log data in various formats (JSON, XML, CEF, etc.) into Chronicle for parsing and normalization into UDM format. Supports both single log and batch ingestion.

Agent Responsibilities:

  1. Obtain forwarder_id: The agent should provide a valid forwarder_id, using forwarder management tools to look one up if needed.
  2. Timestamp Formatting: Ensure any provided timestamps are in the correct ISO 8601 format.
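The timestamp-formatting responsibility can be sketched with a small normalizer (to_iso8601 is an illustrative helper, not part of the tool):

```python
from datetime import datetime, timezone

def to_iso8601(ts):
    """Normalize a datetime or epoch-seconds value to the ISO 8601 form import_logs expects."""
    if isinstance(ts, (int, float)):
        ts = datetime.fromtimestamp(ts, tz=timezone.utc)
    return ts.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(to_iso8601(0))  # 1970-01-01T00:00:00Z
```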

Workflow Integration:

  • Use this tool to feed external log sources directly into Chronicle for analysis.
  • Ingested logs are automatically parsed using Chronicle's configured parsers for the specified log type.
  • Parsed logs become searchable through UDM queries and can trigger detection rules.
  • Essential for integrating custom applications, legacy systems, or non-standard log sources with Chronicle.

Use Cases:

  • Ingest OKTA authentication logs for user behavior analysis.
  • Feed custom application logs into Chronicle for security monitoring.
  • Batch ingest historical logs during initial Chronicle deployment.
  • Import logs from external SIEM or log management systems.
  • Ingest Windows Event logs in XML format for endpoint monitoring.

Example Usage:

  • import_logs(logType="OKTA", projectId="my-project", customerId="my-customer", region="us", forwarderId="b1a2d3c4-....", logs=[okta_log])
  • import_logs(logType="WINEVTLOG_XML", logs=["<Event>...</Event>", "<Event>...</Event>"], projectId="my-project", customerId="my-customer", region="us", forwarderId="b1a2d3c4-....")

Next Steps (using MCP-enabled tools):

  • Verify ingestion success by searching for the ingested logs using udm_search.
  • Monitor for any parsing errors or failed ingestion through Chronicle's ingestion status APIs.
  • Create or update detection rules to analyze the newly ingested log types.
  • Set up alerting for important events found in the ingested logs.
  • Use entity lookup tools to analyze indicators found in the ingested data.
list_feeds

List all feeds configured in Chronicle.

Retrieves a list of all feeds that are configured in the Chronicle instance, providing details such as feed name, status, log type, and source type.

Workflow Integration:

  • Use to discover existing feeds and their status.
  • Essential for auditing data ingestion configurations.

Use Cases:

  • Check which feeds are currently active.
  • Find a specific feed to update or delete.
  • Audit feeds for a specific log type.

Example Usage:

  • list_feeds(projectId="my-project", customerId="my-customer", region="us")
get_feed

Get detailed information about a specific feed.

Retrieves complete configuration details for a specified feed by its ID, including connection settings, log type, state, and metadata.

Workflow Integration:

  • Use to inspect the configuration of a specific feed.
  • Essential for troubleshooting feed issues or verifying settings.

Use Cases:

  • Check the detailed configuration of a failing feed.
  • Verify the source settings for a specific log type.

Example Usage:

  • get_feed(feedId="feed_12345", projectId="my-project", customerId="my-customer", region="us")
enable_feed

Enable an inactive feed in Chronicle.

Activates a feed that is currently in the INACTIVE state, allowing it to resume data ingestion.

Workflow Integration:

  • Use to restart ingestion for a feed that was previously disabled.
  • Essential for restoring data flow after maintenance or troubleshooting.

Use Cases:

  • Re-enable a feed after fixing configuration issues.
  • Resume ingestion for a paused feed.

Example Usage:

  • enable_feed(feedId="feed_12345", projectId="my-project", customerId="my-customer", region="us")
disable_feed

Disable an active feed in Chronicle.

Stops data ingestion for a feed by setting its state to INACTIVE. The feed configuration remains but no new data will be processed.

Workflow Integration:

  • Use to pause ingestion for a feed.
  • Essential for stopping data flow during maintenance or troubleshooting.

Use Cases:

  • Pause a feed that is generating errors.
  • Stop ingestion for a retired data source.

Example Usage:

  • disable_feed(feedId="feed_12345", projectId="my-project", customerId="my-customer", region="us")
delete_feed

Delete a feed from Chronicle.

Permanently removes a feed from Chronicle. This action cannot be undone and will stop any data ingestion from this feed.

Workflow Integration:

  • Use to permanently remove a feed configuration.
  • Essential for cleaning up unused or obsolete feeds.

Use Cases:

  • Remove a feed for a decommissioned data source.
  • Clean up test feeds.

Example Usage:

  • delete_feed(feedId="feed_12345", projectId="my-project", customerId="my-customer", region="us")
create_feed

Create a new feed in Chronicle.

Creates a new feed configuration for ingesting security data.

Agent Responsibilities:

  • Construct the feed object structure required by the API.
  • The feed object should contain displayName and details.
  • details should contain logType, feedSourceType (enum), and the specific settings for that source type (e.g., s3Settings, httpSettings).
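Constructing the nested feed object can be sketched for the S3 case; make_s3_feed is a hypothetical helper, and the field names mirror the Example Usage below rather than an exhaustive schema:

```python
def make_s3_feed(display_name, log_type, bucket, region, access_key, secret_key):
    """Assemble the nested feed object create_feed expects for an Amazon S3 source."""
    return {
        "displayName": display_name,
        "details": {
            "logType": log_type,
            "feedSourceType": "AMAZON_S3",
            "amazonS3Settings": {
                "bucket": bucket,
                "region": region,
                "authentication": {
                    "accessKeyId": access_key,
                    "secretAccessKey": secret_key,
                },
            },
        },
    }

feed = make_s3_feed("My S3 Feed", "AWS_CLOUDTRAIL", "my-bucket", "US_EAST_1", "AKIA...", "...")
print(feed["details"]["feedSourceType"])
```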

Workflow Integration:

  • Use to set up new data ingestion pipelines.

Example Usage:

  • create_feed(projectId="my-project", customerId="my-customer", region="us", feed={"displayName": "My S3 Feed", "details": {"logType": "AWS_CLOUDTRAIL", "feedSourceType": "AMAZON_S3", "amazonS3Settings": {"bucket": "my-bucket", "region": "US_EAST_1", "authentication": {"accessKeyId": "...", "secretAccessKey": "..."}}}})
update_feed

Update an existing feed in Chronicle.

Modifies the configuration of an existing feed.

Agent Responsibilities:

  • Provide the feed object with the fields to be updated.
  • Provide the updateMask specifying which fields to update (comma-separated).
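Deriving the feed patch and its updateMask from one dict keeps the two consistent; build_update is an illustrative helper, not part of the tool:

```python
def build_update(changes):
    """Turn a dict of changed fields into the (feed patch, updateMask) pair."""
    return dict(changes), ",".join(sorted(changes))

feed_patch, update_mask = build_update({"displayName": "Updated Feed Name"})
# update_feed(feedId="feed_12345", feed=feed_patch, updateMask=update_mask, ...)
print(update_mask)  # displayName
```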

Example Usage:

  • update_feed(projectId="my-project", customerId="my-customer", region="us", feedId="feed_12345", feed={"displayName": "Updated Feed Name"}, updateMask="displayName")
generate_feed_secret

Generate authentication secret for a feed.

Generates a new secret for HTTPS push feeds that do not support JWT tokens. This replaces any existing secret.

Workflow Integration:

  • Use to generate or rotate secrets for push-based feeds.
  • Essential for managing credentials for certain feed types.

Use Cases:

  • Generate a secret for a new HTTPS push feed.
  • Rotate a compromised secret.

Example Usage:

  • generate_feed_secret(feedId="feed_12345", projectId="my-project", customerId="my-customer", region="us")
list_integrations

Lists all SOAR Integrations for a given Chronicle instance. The name field in the response contains the resource name of the integration, ending with the integration ID (e.g., projects/.../integrations/{integration_id}). This integration ID is required to use the list_integration_actions and list_integration_instances tools.

Retrieves a paginated list of all configured integrations, which connect Chronicle SOAR to third-party tools and services. This is useful for discovering what capabilities are available for automation, enrichment, and response actions within playbooks.

Workflow Integration:

  • Used to populate a UI with a list of available integrations for an analyst to review and manage.
  • Enables automated systems to discover and verify that required integrations are present before executing a playbook that depends on them.
  • Essential for auditing and managing the inventory of all third-party connections in the SOAR platform.

Use Cases:

  • A security analyst lists available integrations to understand what tools can be used for an investigation (e.g., endpoint protection, threat intelligence feeds).
  • A SOAR engineer reviews the list of all integrations to identify any that need to be updated, configured, or retired.
  • An automated script queries for a specific integration by name to ensure it is installed before running a playbook that uses its actions.

Filtering and Ordering:

  • The 'order_by' parameter controls the sorting of the returned integrations by fields like 'DisplayName', 'Version', or 'Custom'.

Example Usage:

  • list_integrations(projectId='123', region='us', customerId='abc')
  • list_integrations(projectId='123', region='us', customerId='abc', filter='Identifier="SiemplifyUtilities"')

Next Steps (using MCP-enabled tools):

  • Use the integration ID from an integration's name with list_integration_instances to list its configured instances and find the IntegrationInstance GUID needed for execute_manual_action.
  • Use 'list_integration_actions' to discover the specific actions (e.g., 'block_ip', 'get_user_details') available for a particular integration.
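
Because the integration ID is simply the final segment of the name field, extracting it is a one-liner. A minimal sketch, assuming the resource-name format shown above (the helper name is illustrative):

```python
def extract_integration_id(resource_name: str) -> str:
    """Return the integration ID, i.e. the last path segment of a
    resource name ending in .../integrations/{integration_id}."""
    return resource_name.rstrip("/").split("/")[-1]

# Hypothetical resource name for illustration:
name = "projects/123/locations/us/instances/abc/integrations/SiemplifyUtilities"
integration_id = extract_integration_id(name)  # "SiemplifyUtilities"
```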
list_integration_actions

Lists all the actions for a given SOAR Integration. You can also list actions across all integrations by passing "-" as the integration_id.

Retrieves a paginated list of all available actions for a specific integration. Actions are the specific, executable functions that an integration provides, such as 'block_ip', 'get_user_details', or 'analyze_url'. This is useful for discovering the capabilities of a particular integration and what automated or manual steps can be taken.

Workflow Integration:

  • Populates a UI with a list of available actions for an analyst to choose from when building a playbook or taking manual action.
  • Enables automated systems to discover and validate the actions that can be executed through a specific integration before attempting to run them.
  • Essential for playbook development and for understanding the available automated capabilities of each integrated tool.

Use Cases:

  • A SOAR engineer lists the actions for a newly installed EDR integration to understand what it can do.
  • A security analyst, working on a case, lists the actions for the EDR integration to see if there's an action to 'isolate_host'.
  • An automated script queries the available actions to ensure an action like 'suspend_user' exists before attempting to use it in a playbook.

Example Usage:

  • list_integration_actions(projectId='123', region='us', customerId='abc', integrationId='my-edr-integration')
  • list_integration_actions(projectId='123', region='us', customerId='abc', integrationId='-')

Next Steps (using MCP-enabled tools):

  • Use 'get_integration_action' with an action's resource name to fetch its full details, including the script.
  • Use 'execute_manual_action' to run one of the discovered actions on a case or alert. Note: manual actions can ONLY be executed on open alerts (not closed ones).
list_integration_instances

Lists all configured instances for a given SOAR Integration. You can also list instances across all integrations by passing "-" as the integration_id. Each instance returned contains a name field which is its resource name (e.g., projects/.../integrations/.../instances/{instance_guid}). The {instance_guid} at the end of this name is the IntegrationInstance GUID required by the execute_manual_action tool when running script-based actions.

Retrieves a paginated list of all configured integration instances, which are specific configurations of an integration. This is useful for discovering the specific instances of an integration that are available for use in playbooks and manual actions.

Workflow Integration:

  • Used to populate a UI with a list of available integration instances for an analyst to choose from.
  • Enables automated systems to discover and verify that required integration instances are present before executing a playbook that depends on them.
  • Essential for auditing and managing the inventory of all third-party connections in the SOAR platform.

Use Cases:

  • A security analyst lists available integration instances to find the correct instance to use for a specific task.
  • A SOAR engineer reviews the list of all integration instances to identify any that need to be updated, configured, or retired.
  • An automated script queries for a specific integration instance by name to ensure it is installed before running a playbook that uses its actions.

Example Usage:

  • list_integration_instances(projectId='123', region='us', customerId='abc', integrationId='my-integration')
  • list_integration_instances(projectId='123', region='us', customerId='abc', integrationId='-')

Next Steps (using MCP-enabled tools):

  • Use the GUID from the name field of an instance as the IntegrationInstance value in the properties dictionary when calling execute_manual_action for script-based actions.
get_ioc_match

Get Indicators of Compromise (IoCs) matches from Chronicle SIEM.

Retrieves IoCs (e.g., malicious IPs, domains, hashes) from configured threat intelligence feeds that have been observed matching events in Chronicle logs within the specified time window.

Agent Responsibilities:

  1. Time Range Calculation: The agent should provide the start_time and end_time arguments as ISO 8601 formatted strings (e.g., YYYY-MM-DDTHH:MM:SSZ) to define the search window.
  2. Response Parsing: The agent should parse the raw JSON response to extract details from the 'matches' list. Each item in the list represents an IoCDiscoveryInfo object.
  3. Data Extraction: From each match, extract relevant fields like 'artifactIndicator', 'sources', 'firstSeenTimestamp', 'lastSeenTimestamp'.
  4. Output Formatting: Format the extracted details into a human-readable summary.
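
The four steps above can be sketched in Python. The field names follow the IoCDiscoveryInfo description; raw_response is assumed to be the complete JSON text returned by the tool, and the 24-hour window is illustrative:

```python
import json
from datetime import datetime, timedelta, timezone

# Step 1: a 24-hour search window as ISO 8601 strings.
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)
start_time = start.strftime("%Y-%m-%dT%H:%M:%SZ")
end_time = end.strftime("%Y-%m-%dT%H:%M:%SZ")

# Steps 2-4: parse the response and format one summary line per match.
def summarize_ioc_matches(raw_response: str) -> list[str]:
    data = json.loads(raw_response)
    lines = []
    for match in data.get("matches", []):
        lines.append(
            f"{match.get('artifactIndicator')} | "
            f"sources: {', '.join(match.get('sources', []))} | "
            f"first seen: {match.get('firstSeenTimestamp', 'n/a')} | "
            f"last seen: {match.get('lastSeenTimestamp', 'n/a')}"
        )
    return lines
```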

Workflow Integration:

  • Use this to proactively identify potential threats based on IoC matches within SIEM data, potentially before specific detection rules trigger or cases are created in other systems.
  • Can provide early warning signs or context during investigations initiated from alerts or intelligence originating from any connected security tool (SIEM, EDR, TI platforms, etc.).
  • Complements rule-based alerts by showing matches against known bad indicators from threat intelligence feeds integrated with the SIEM.

Use Cases:

  • Monitor for recent sightings of known malicious indicators within SIEM logs.
  • Identify assets that may have interacted with known bad infrastructure or files, based on log evidence.
  • Supplement investigations by checking if involved entities match known IoCs curated by threat intelligence sources.

Next Steps (using MCP-enabled tools):

  • Investigate the assets or events associated with the matched IoCs using udm_search.
  • Use entity lookup tools to get broader context on the matched IoC value (IP, domain, hash).
  • Use SIEM event search tools to find the specific events in logs that triggered the IoC match.
  • Check if related cases exist in your case management/SOAR system or create one if the match indicates a significant threat.
  • Correlate IoC match details with findings from other security tools (EDR, Network, Cloud) via their MCP tools.
list_playbook_instances

Lists all execution instances of playbooks for a given case or alert.

Retrieves a historical list of all playbooks that have been run on a specific case or alert, showing their status and outcomes. This is useful for understanding what automated actions have already been taken, for auditing purposes, or for debugging playbook executions.

Workflow Integration:

  • Used to display the execution history of playbooks in a UI for an analyst to review.
  • Enables automated systems to verify if a certain playbook has already been executed on a case, preventing duplicate runs.
  • Provides crucial data for auditing security operations and for troubleshooting failing or unexpectedly behaving playbooks.

Use Cases:

  • A security analyst reviews the list of run playbooks on a case to get up to speed on the investigation.
  • A SOAR engineer checks the execution history of a playbook for a particular alert to diagnose a failure.
  • An automated rule queries the playbook instances to decide whether to escalate a case or run another playbook.

Example Usage:

  • list_playbook_instances(projectId='123', region='us', customerId='abc', case_id=12345, alertGroupIdentifier='alert-group-xyz-789')

Next Steps (using MCP-enabled tools):

  • Use 'get_playbook_instance_details' to retrieve the full execution log and step-by-step results for a specific playbook run.
  • Use 'list_playbooks' to discover other playbooks that are available to be run.
list_playbooks

Lists all available playbooks for a given Chronicle instance.

Retrieves a list of all configured playbooks (automated workflows), allowing users to see the available automated response and investigation capabilities. This is useful for discovering what playbooks can be run on cases or alerts.

Workflow Integration:

  • Used to populate a UI with a list of available playbooks for manual execution on a case or alert.
  • Enables automated systems to discover and select appropriate playbooks to run based on incident criteria.
  • Essential for auditing and managing the inventory of automated workflows in the SOAR platform.

Use Cases:

  • A security analyst lists available playbooks to decide which one to run on a newly created case.
  • A SOAR engineer reviews the list of all playbooks to identify any that need to be updated or retired.
  • An automated script queries for playbooks of a specific type (e.g., 'REGULAR') to perform bulk operations.

Example Usage:

  • list_playbooks(projectId='123', region='us', customerId='abc', playbookTypes=['REGULAR', 'NESTED'])
  • list_playbooks(projectId='123', region='us', customerId='abc', playbookTypes=['REGULAR'])

Next Steps (using MCP-enabled tools):

  • Use 'list_playbook_instances' to see instances of a specific playbook that have been run.
  • Use other playbook tools to get more details or execute a playbook.
list_security_alerts

List security alerts directly from Chronicle SIEM.

Retrieves a list of recent security alerts generated within Chronicle, based on detection rules or other alert sources configured in the SIEM.

Agent Responsibilities:

  • Time Range Calculation: The agent should provide the start_time and end_time arguments as ISO 8601 formatted strings (e.g., YYYY-MM-DDTHH:MM:SSZ) to define the search window.
  • Response Parsing: The API returns a stream of JSON objects. The agent should handle this stream, typically by concatenating and parsing the complete JSON response. The alerts are found within the alerts.alerts array in the response object. Each element in this array is an alert.
  • Data Extraction: From each alert object, extract relevant fields such as: detection[0].ruleName or ruleName, createdTime, feedbackSummary.status or status, feedbackSummary.severityDisplay or severity, caseName (if available).
  • Output Formatting: Format the extracted details into a human-readable summary.
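
A sketch of the parsing and extraction steps, assuming the streamed response has already been concatenated into one JSON string and the field names match the description above:

```python
import json

def summarize_alerts(raw_response: str) -> list[str]:
    """Extract rule name, creation time, status, and severity
    from each alert in the alerts.alerts array."""
    data = json.loads(raw_response)
    summaries = []
    for alert in data.get("alerts", {}).get("alerts", []):
        detections = alert.get("detection", [])
        rule = (detections[0].get("ruleName") if detections
                else alert.get("ruleName", "unknown rule"))
        feedback = alert.get("feedbackSummary", {})
        summaries.append(
            f"{rule} | created: {alert.get('createdTime', 'n/a')} | "
            f"status: {feedback.get('status') or alert.get('status', 'n/a')} | "
            f"severity: {feedback.get('severityDisplay') or alert.get('severity', 'n/a')}"
        )
    return summaries
```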

Workflow Integration:

  • Use this for direct monitoring of SIEM alert activity, potentially identifying issues before they are ingested or processed by other platforms (e.g., SOAR).
  • Can be used as an initial step to get a sense of recent high-priority events directly from the source SIEM.
  • Contrast this with tools that list alerts associated with a specific case in a case management or SOAR system.

Use Cases:

  • Get a quick overview of recent, non-closed alerts generated by the SIEM.
  • Monitor for specific high-severity alerts or rule triggers.
  • Check for SIEM alerts that might not have corresponding cases yet in other systems.

Example Usage:

  • list_security_alerts(projectId="my-project", customerId="my-customer", region="us", startTime="2025-10-01T00:00:00Z", endTime="2025-10-02T00:00:00Z", maxAlerts=50, statusFilter='feedbackSummary.status != "CLOSED"')

Next Steps (using MCP-enabled tools):

  • Analyze the returned alerts for priority and relevance.
  • For high-priority alerts, check if a corresponding case exists using list_cases with a filter on alert ID or name.
  • If no case exists, consider creating one using a case management tool.
  • Use entity lookup tools (like summarize_entity) on indicators found within the alert details (e.g., IPs, domains, users, hashes).
  • Use UDM Search tools (like udm_search) to find related raw logs or events around the time of the alert.
  • Correlate alert information with findings from other security tools (EDR, Cloud Posture, TI) via their MCP tools.
  • Get more details on a specific rule firing by using list_rules or get_rule_detections.
get_security_alert

Get a specific security alert by its ID directly from Chronicle SIEM.

Retrieves a specific alert's details including rule name, creation time, status, and severity.

Agent Responsibilities:

  • Provide the exact alert_id.
  • Parse the returned JSON to extract relevant details.

Workflow Integration:

  • Use to get detailed information about a specific alert found via list_security_alerts or referenced in a case.
  • Essential for alert triage and investigation.

Use Cases:

  • Investigate the details of a high-severity alert.
  • Retrieve the full context of an alert for reporting or case creation.

Example Usage:

  • get_security_alert(alertId="de_12345678-1234-1234-1234-1234567890ab", projectId="my-project", customerId="my-customer", region="us", includeDetections=true)

Next Steps:

  • Update the alert status using update_security_alert.
  • Create a case based on this alert.
update_security_alert

Update security alert attributes directly in Chronicle SIEM.

Modifies specific fields of an existing security alert (status, severity, verdict, comments) based on its ID.

Agent Responsibilities:

  • Provide the alert_id.
  • Provide at least one field to update.

Workflow Integration:

  • Use to triage alerts (e.g., change status to "CLOSED", add a verdict).
  • Essential for alert lifecycle management.

Use Cases:

  • Close an alert after investigation.
  • Mark an alert as a False Positive.
  • Add analyst comments to an alert.
  • Change the severity or priority of an alert.

Example Usage:

  • update_security_alert(alertId="de_12345678...", projectId="...", customerId="...", region="...", status="CLOSED", verdict="false_positive", comment="Determined to be a test event.")

Next Steps:

  • Verify the update using get_security_alert.
get_parser

Get details of a specific parser in Chronicle.

Retrieves the configuration and metadata for a specific parser, including its current state, parser code, and other properties. Useful for reviewing existing parsers or copying configurations for new parsers.

Agent Responsibilities:

  • Provide the necessary IDs to construct the parser resource name.
  • Parse the raw JSON response to extract parser details.
  • The agent should not present the raw JSON. Instead, it should format the output as a human-readable summary of the parser's metadata (e.g., state, log type, creation time).
  • The parser script (in the code field) can be very long and is not useful in most cases. The agent should only display the script if the user specifically asks for it.

Workflow Integration:

  • Use to review existing parser configurations before modifications.
  • Essential for troubleshooting parsing issues by examining the current parser logic.
  • Helps understand how specific log types are being processed in Chronicle.
  • Useful for copying parser configurations as templates for new parsers.

Use Cases:

  • Review parser code to understand how logs are being transformed.
  • Troubleshoot parsing issues by examining the current configuration.
  • Copy existing parser configurations as starting points for new parsers.
  • Audit parser configurations for compliance or security reviews.
  • Understand the parsing logic for specific log types during investigations.

Example Usage:

  • get_parser(logType="OKTA", parserId="pa_12345678-1234-1234-1234-123456789012", projectId="my-project", customerId="my-customer", region="us")
  • get_parser(logType="OKTA", parserId="pa_12345678-1234-1234-1234-123456789012@a1b2c3d4", projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Modify the parser configuration if needed and create an updated version using create_parser.
  • Test the parser code from the response using run_parser.
  • Use the configuration as a template for creating parsers for similar log types.
  • Activate or deactivate the parser based on your requirements.
run_parser

Run a parser against sample logs to test parsing logic.

Tests parser configuration against sample log entries to validate parsing logic before deployment. This is essential for ensuring parsers work correctly with your specific logs.

Workflow Integration:

  • Essential testing step before creating or activating parsers in production.
  • Use during parser development to iteratively refine parsing logic.
  • Validate parser behavior with real log samples from your environment.
  • Verify that parsing produces the expected UDM fields and values.

Use Cases:

  • Test new parser configurations with representative log samples.
  • Validate parser changes before deploying to production.
  • Troubleshoot parsing issues by examining parser output step-by-step.
  • Verify that parser handles edge cases and varied log formats correctly.
  • Understand how specific log fields are mapped to UDM structure.

Parser Testing Best Practices:

  • Use diverse log samples that represent different scenarios and edge cases.
  • Include both typical and edge-case log formats in your test samples.
  • Verify that critical fields are correctly parsed and mapped to appropriate UDM fields.
  • Test with logs that might cause parsing failures to ensure robust error handling.

Input Constraints:

  • Maximum of 1000 sample logs.
  • Maximum size per log entry: 10 MB.
  • Maximum total size of all logs: 50 MB.
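
These limits can be checked client-side before calling run_parser; a minimal sketch (byte counts assume UTF-8 encoding; the helper name is illustrative):

```python
MAX_LOGS = 1000
MAX_LOG_BYTES = 10 * 1024 * 1024    # 10 MB per log entry
MAX_TOTAL_BYTES = 50 * 1024 * 1024  # 50 MB across all logs

def check_sample_logs(sample_logs: list[str]) -> None:
    """Raise ValueError if the samples exceed the documented limits."""
    if len(sample_logs) > MAX_LOGS:
        raise ValueError(f"too many sample logs: {len(sample_logs)} > {MAX_LOGS}")
    total = 0
    for i, log in enumerate(sample_logs):
        size = len(log.encode("utf-8"))
        if size > MAX_LOG_BYTES:
            raise ValueError(f"sample log {i} is {size} bytes, over the 10 MB limit")
        total += size
    if total > MAX_TOTAL_BYTES:
        raise ValueError(f"total sample size {total} bytes exceeds the 50 MB limit")
```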

Example Usage:

Define parser code and sample logs in plain text:

  • parser_text = 'filter { json { source => "message" } ... }'
  • sample_logs = ['{"message": "ERROR: Failed authentication attempt", "timestamp": "2024-02-09T10:30:00Z"}']

  • run_parser(logType="WINEVTLOG_XML", parserCode=parser_text, sampleLogs=sample_logs, projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Analyze the parsing results to ensure UDM events are generated correctly.
  • Refine the parser code based on the test results and retest as needed.
  • Create the parser using create_parser once testing is successful.
  • Activate the parser using activate_parser to put it into production.
  • Ingest real logs using import_logs and verify parsing works in production.
activate_parser

Activate a parser for a specific log type in Chronicle.

Activates a parser, making it the active parser for the specified log type. Once activated, the parser will be used to process all incoming logs of that type. Only one parser can be active for each log type at a time.

Workflow Integration:

  • Use after creating and testing a parser to make it operational.
  • Essential step for putting new or updated parsers into production.
  • Enables the parser to process incoming logs and generate searchable UDM events.
  • Required before logs of the specified type can be properly parsed and analyzed.

Use Cases:

  • Activate a newly created parser after successful testing.
  • Switch to an updated parser version with improved parsing logic.
  • Restore a previously working parser after troubleshooting parsing issues.
  • Deploy parser changes as part of log ingestion pipeline updates.

Example Usage:

  • activate_parser(logType="CUSTOM_APP", parserId="pa_12345678-1234-1234-1234-123456789012", projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Verify the parser is active using get_parser.
  • Ingest test logs using import_logs to verify the parser is working correctly.
  • Monitor parsing success rates and troubleshoot any issues.
  • Search for parsed events using udm_search to confirm proper UDM conversion.
  • Create detection rules that leverage the newly parsed UDM fields.
  • Set up monitoring for the log type to ensure continued parsing success.
create_parser

Create a new parser for a specific log type in Chronicle.

Creates a custom parser using Chronicle's parser configuration language to transform raw logs into Chronicle's Unified Data Model (UDM) format. The tool automatically handles the required Base64 encoding of the parser code.

Agent Responsibilities:

  • Provide the parser_code argument as a plain text string.

Workflow Integration:

  • Use when you need to ingest custom log formats that Chronicle doesn't natively support.
  • Essential for integrating custom applications, proprietary systems, or modified log formats.
  • Enables normalization of diverse log sources into a consistent UDM structure for analysis.
  • Prerequisite for meaningful analysis of custom log sources through Chronicle's detection capabilities.

Use Cases:

  • Create parsers for custom application logs with unique formats.
  • Parse proprietary security tool outputs into UDM format.
  • Handle modified versions of standard log formats that existing parsers can't process.
  • Transform legacy log formats for Chronicle ingestion during SIEM migrations.
  • Parse structured data from APIs or databases into security events.

Example Usage:

Define the parser code string:

  • parser_text = 'filter { json { source => "message" } ... }'

  • create_parser(logType="CUSTOM_APP", parserCode=parser_text, projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Test the parser using run_parser with sample log data.
  • Activate the parser using activate_parser once testing is complete.
  • Ingest logs using ingest_raw_log with the specified log_type.
  • Monitor parsing success and adjust the parser configuration if needed.
  • Create detection rules that leverage the parsed UDM fields.
deactivate_parser

Deactivate a parser for a specific log type in Chronicle.

Deactivates a parser, stopping it from processing incoming logs of the specified type. After deactivation, logs of this type will not be parsed until another parser is activated or the same parser is reactivated.

Workflow Integration:

  • Use when you need to temporarily stop parsing for a specific log type.
  • Essential for troubleshooting parsing issues by stopping problematic parsers.
  • Useful before deploying updated parser versions to prevent conflicts.
  • Helps manage parser lifecycle during development and testing phases.

Use Cases:

  • Temporarily stop parsing while troubleshooting issues with the current parser.
  • Deactivate a parser before activating an updated version.
  • Stop parsing for log types that are no longer needed or relevant.
  • Prevent parsing during maintenance windows or system changes.
  • Disable problematic parsers that are causing ingestion errors.

Warning: After deactivation, incoming logs of this type will not be parsed into UDM format and may not be searchable or usable for detection until a parser is reactivated.

Example Usage:

  • deactivate_parser(logType="CUSTOM_APP", parserId="pa_12345678-1234-1234-1234-123456789012", projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Verify the parser's status using get_parser.
  • Activate an updated parser version if this was part of a parser update process.
  • Monitor log ingestion to ensure no critical parsing is stopped unintentionally.
  • Test and validate any replacement parser before activating it.
  • Document the reason for deactivation for operational tracking.
list_parsers

List all parsers for a given log type, returning only metadata.

Retrieves a list of parser metadata for a specific log type, or for all log types if "-" is specified. This tool is useful for getting an overview of existing parsers and their states without fetching the full parser code.

Agent Responsibilities:

  • The response is a JSON object. The agent should access the parsers key to get a list of parser objects.
  • Each object in the list contains parser metadata such as name, log_type, state, create_time, etc.
  • The name field contains the full resource name, from which the parser ID can be extracted.
  • If the response contains a next_page_token, it indicates that more results are available. The agent should use this token in a subsequent call to retrieve the next page.
  • The agent should not present the raw JSON. Instead, it should format the output as a human-readable list, for example, using a table or a bulleted list.
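
The pagination and ID-extraction responsibilities above can be sketched as follows; call_list_parsers stands in for whatever mechanism the agent uses to invoke the tool and is an assumption, not part of the API:

```python
def list_all_parsers(call_list_parsers, **params) -> list[dict]:
    """Collect parser metadata across all pages by following next_page_token."""
    parsers, page_token = [], None
    while True:
        response = call_list_parsers(**params, page_token=page_token)
        parsers.extend(response.get("parsers", []))
        page_token = response.get("next_page_token")
        if not page_token:
            return parsers

def parser_id(resource_name: str) -> str:
    """The parser ID is the last segment of the parser's resource name."""
    return resource_name.rstrip("/").split("/")[-1]
```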

Workflow Integration:

  • Use to discover existing parsers for a specific log type.
  • Helpful for finding a parser ID to use with other tools like get_parser or activate_parser.
  • Use to audit which parsers exist for a customer and their current states (ACTIVE, INACTIVE, etc.).

Example Usage:

  • list_parsers(projectId="my-project", customerId="my-customer", region="us", logType="OKTA")
  • list_parsers(projectId="my-project", customerId="my-customer", region="us")
list_log_types

List all log types available for a customer.

Retrieves a list of all available log types for a specific customer, which is useful for discovering what log sources are configured.

Agent Responsibilities:

  • The response is a JSON object. The agent should access the log_types key to get a list of log type objects.
  • Each object in the list contains details about a log type, such as name and display_name.
  • The name field contains the full resource name, from which the log type identifier can be extracted.
  • If the response contains a next_page_token, it indicates that more results are available. The agent should use this token in a subsequent call to retrieve the next page.
  • The agent should not present the raw JSON. Instead, it should format the output as a human-readable list, for example, using a table or a bulleted list.

Workflow Integration:

  • Use to discover the available log types for a customer before creating a new parser or feed.
  • Helpful for validating that a log_type string is correct before using it in other tools.

Example Usage:

  • list_log_types(projectId="my-project", customerId="my-customer", region="us")
  • list_log_types(projectId="my-project", customerId="my-customer", region="us", filter="display_name:OKTA")
list_rules

List security detection rules configured in Chronicle SIEM, with support for pagination and filtering.

Retrieves the definitions of detection rules currently active or configured within the Chronicle SIEM instance.

Workflow Integration:

  • Useful for understanding the detection capabilities currently deployed in the SIEM.
  • Can help identify the specific rule that generated a SIEM alert (obtained via SIEM alert tools or from case management/SOAR system details).
  • Provides context for rule tuning, development, or understanding alert logic.

Use Cases:

  • Review the logic or scope of a specific detection rule identified from an alert.
  • Audit the set of active detection rules within the SIEM.
  • Understand which rules might be relevant to a particular threat scenario or TTP.
  • Filter rules based on reference lists, data tables, or display name.

Example Usage:

  • list_rules(projectId="my-project", customerId="my-customer", region="us", filter='display_name:"suspicious"')
  • list_rules(projectId="my-project", customerId="my-customer", region="us", filter='data_tables:"projects/my-project/locations/us/instances/my-customer/dataTables/my_table"')

Next Steps (using MCP-enabled tools):

  • Analyze the rule definition (e.g., the YARA-L code) to understand its trigger conditions.
  • Correlate rule details with specific alerts retrieved from the SIEM or case management system.
  • Use insights for rule optimization, false positive analysis, or developing related detections.
  • Document relevant rule information in associated cases using a case management tool.
  • Use 'get_rule' to fetch the full details of a specific rule.
list_rule_errors

Lists execution errors for a specific Chronicle SIEM rule.

Helps troubleshoot rules that are not generating detections as expected or are failing during execution.

Agent Responsibilities:

  • Parse the JSON response to extract the list from the ruleExecutionErrors key.
  • Handle the nextPageToken for pagination if more results exist.

Workflow Integration:

  • Rule Troubleshooting: If a rule is not producing expected detections or alerts, check for execution errors.
  • Rule Development: After deploying a new or modified rule, check for errors to ensure it's syntactically correct and running properly.
  • SIEM Health Monitoring: Periodically check for rules with high error counts to maintain SIEM operational health.

Use Cases:

  • Investigate why a specific rule (e.g., "ru_...") has not generated any detections.
  • Check for errors after modifying and saving a YARA-L rule.
  • Get details of compilation or runtime errors for a given rule version.

Example Usage:

  • list_rule_errors(ruleId="ru_12345678-1234-1234-1234-1234567890ab", projectId="my-project", customerId="my-customer", region="us")
  • list_rule_errors(ruleId="ru_12345678-1234-1234-1234-1234567890ab@v_abcdef_123456", projectId="my-project", customerId="my-customer", region="us", pageSize=10)
  • list_rule_errors(ruleId="ru_12345678-1234-1234-1234-1234567890ab@-", projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Review Rule Code: If errors are found, retrieve the rule definition using get_rule with the specific rule_id and revision.
  • Validate Rule Syntax: Use validate_rule to check for syntax issues in the rule text.
  • Modify Rule: Correct the rule based on error messages and update it using create_rule (as there's no update function).
  • Re-test: Use test_rule to verify the fix.
create_rule

Create a new detection rule in Chronicle SIEM.

Creates a new YARA-L 2.0 detection rule in Chronicle that can generate alerts when the rule conditions are met by ingested events. Rules are the core mechanism for automated threat detection and response in Chronicle.

Workflow Integration:

  • Essential for implementing custom detection logic based on your organization's security requirements.
  • Use after analyzing events, entities, or threat intelligence to codify detection patterns.
  • Complements existing detection capabilities by addressing specific use cases or threat scenarios.
  • Enables automated detection of TTPs, IOCs, or behavioral patterns identified through investigations.

Use Cases:

  • Create rules to detect specific attack patterns discovered during threat hunting.
  • Implement custom detection logic for proprietary applications or unique network configurations.
  • Detect compliance violations or policy breaches specific to your organization.
  • Create behavioral detection rules based on user or entity activity patterns.
  • Implement detection for specific threat intelligence indicators relevant to your environment.

Rule Development Best Practices:

  • Start with a clear understanding of what you want to detect and the data sources available.
  • Use precise conditions to minimize false positives while maintaining detection efficacy.
  • Include appropriate metadata (description, author, severity, MITRE ATT&CK mappings).
  • Test rules thoroughly using test_rule before deploying to production.
  • Consider the rule's performance impact on Chronicle's processing capabilities.

Example Usage:

  • create_rule(ruleText=rule_text, projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Validate the rule syntax using validate_rule.
  • Use list_rule_errors to check for any runtime errors.
  • Test the rule using test_rule with historical data to validate its effectiveness.
  • Monitor the rule's performance and adjust thresholds or conditions as needed.
  • Review generated alerts using udm_search to assess rule quality.
  • Document the rule's purpose and expected behavior for operational teams.
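
Under the hood, an MCP client issues tool calls such as create_rule as JSON-RPC 2.0 tools/call requests. The following Python sketch assembles such a payload; the project identifiers and the YARA-L 2.0 rule body are illustrative assumptions, not a tested detection.

```python
import json

# Illustrative YARA-L 2.0 rule text (an assumption, not a tested rule).
RULE_TEXT = """rule suspicious_powershell {
  meta:
    author = "security-team"
    severity = "Medium"
  events:
    $e.metadata.event_type = "PROCESS_LAUNCH"
    $e.principal.process.command_line = /powershell/ nocase
  condition:
    $e
}"""

def build_create_rule_request(rule_text, project_id, customer_id, region, request_id=1):
    """Assemble a JSON-RPC 2.0 tools/call payload for create_rule."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "create_rule",
            "arguments": {
                "ruleText": rule_text,
                "projectId": project_id,
                "customerId": customer_id,
                "region": region,
            },
        },
    }

payload = build_create_rule_request(RULE_TEXT, "my-project", "my-customer", "us")
print(json.dumps(payload)[:40])
```

The assembled dict can then be POSTed to the regional MCP endpoint shown earlier in this page.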
get_rule

Get the definition and metadata of a specific Chronicle SIEM detection rule.

Retrieves the full details of a rule, including its YARA-L code, metadata, revision history, and deployment status.

Workflow Integration:

  • Use to inspect the logic of a rule that generated an alert.
  • Essential for understanding rule behavior before modifying or disabling it.
  • Retrieve rule text for version comparison or backup.

Use Cases:

  • Get the YARA-L code for a rule ID found in a SIEM alert.
  • Review rule metadata like author, severity, and creation date.
  • Check the compilation status and diagnostics of a rule.

Example Usage:

  • get_rule(projectId="my-project", customerId="my-customer", region="us", ruleId="ru_12345678-1234-1234-1234-1234567890ab")
  • get_rule(projectId="my-project", customerId="my-customer", region="us", ruleId="ru_12345678-1234-1234-1234-1234567890ab@v_abcdef_123456", view="BASIC")
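
The second example above addresses a specific revision with an @v_… suffix. When an agent needs to compare or log rule versions, a small helper can separate the base ID from the revision selector; this sketch assumes everything after the first @ is the selector.

```python
def split_rule_id(rule_id):
    """Split a Chronicle rule ID into (base_id, revision_selector).

    Assumes everything after the first '@' is the revision selector
    (e.g., 'v_abcdef_123456' or '-'); the selector is None when the ID
    carries no suffix.
    """
    base, sep, revision = rule_id.partition("@")
    return base, (revision if sep else None)

print(split_rule_id("ru_12345678-1234-1234-1234-1234567890ab@v_abcdef_123456"))
```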
validate_rule

Validate YARA-L 2.0 rule text syntax and compilation in Chronicle SIEM.

Verifies the syntax and compilation of a YARA-L 2.0 detection rule without creating or deploying it. This tool checks for syntax errors, compilation issues, and other problems that would prevent the rule from functioning correctly when deployed.

Workflow Integration:

  • Essential validation step during rule development before creating or updating rules.
  • Use to catch syntax errors and compilation issues early in the development process.
  • Helps ensure rule quality and reduces deployment failures in production environments.
  • Can be integrated into CI/CD pipelines for automated rule validation.

Use Cases:

  • Validate new YARA-L rule syntax before attempting to create the rule in Chronicle.
  • Check existing rule modifications for syntax errors before deployment.
  • Troubleshoot rule compilation issues during development or debugging.
  • Verify rule syntax as part of automated testing or quality assurance processes.
  • Validate rule text copied from external sources or documentation.

Agent Responsibilities:

  • Provide the complete YARA-L rule text to be validated.
  • Parse the JSON response to check the 'success' field and examine any messages in 'compilationDiagnostics'.

Example Usage:

  • validate_rule(ruleText=rule_text, projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • If validation succeeds, use 'test_rule' to test the rule against historical data.
  • If validation fails, review the messages in 'compilationDiagnostics' and fix syntax errors in the rule_text.
  • Once validated and tested, use 'create_rule' to deploy the rule to Chronicle.
  • Use 'list_rule_errors' after deployment to monitor for runtime issues.
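
A minimal sketch of the response handling described above, assuming the response is a JSON object with a boolean success field and an optional compilationDiagnostics list whose items carry a message key:

```python
import json

def summarize_validation(response_text):
    """Summarize a validate_rule response.

    Assumes the response is a JSON object with a boolean 'success' field
    and an optional 'compilationDiagnostics' list whose items may carry
    a 'message' key, per the agent responsibilities above.
    """
    result = json.loads(response_text)
    if result.get("success"):
        return "valid", []
    diagnostics = result.get("compilationDiagnostics", [])
    return "invalid", [d.get("message", "") for d in diagnostics]

status, messages = summarize_validation(
    '{"success": false, "compilationDiagnostics": '
    '[{"message": "parsing: unexpected token"}]}'
)
print(status, messages)
```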
list_rule_detections

Retrieves historical detections generated by a specific Chronicle SIEM rule by calling the LegacySearchDetections API.

This tool fetches detections based on a rule ID, allowing for investigation and analysis of rule performance and threat activity. The agent is responsible for any time calculations and parsing the JSON response.

Agent Responsibilities:

  • Provide start_time and end_time in ISO 8601 format (e.g., YYYY-MM-DDTHH:MM:SSZ) to filter by time.
  • Parse the JSON response to extract the list from the detections key.
  • Handle the nextPageToken for pagination if more results exist.
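
The responsibilities above amount to a simple pagination loop. In this sketch, call_tool stands in for an actual MCP tools/call invocation, the response shape (detections, nextPageToken) follows the description above, and pageToken as the request-side parameter name is an assumption.

```python
from datetime import datetime, timezone

def iso8601(dt):
    """Format a datetime in the YYYY-MM-DDTHH:MM:SSZ form the tool expects."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

def collect_detections(call_tool, base_args):
    """Drain every page of list_rule_detections results.

    call_tool stands in for an MCP tools/call invocation (an assumption);
    it must return a parsed JSON dict containing 'detections' and, when
    more results exist, a 'nextPageToken'.
    """
    detections, token = [], None
    while True:
        # 'pageToken' as the request parameter name is an assumption.
        args = dict(base_args, **({"pageToken": token} if token else {}))
        page = call_tool("list_rule_detections", args)
        detections.extend(page.get("detections", []))
        token = page.get("nextPageToken")
        if not token:
            return detections

# Stubbed two-page response for demonstration:
pages = iter([
    {"detections": [{"id": "d1"}], "nextPageToken": "t1"},
    {"detections": [{"id": "d2"}]},
])
all_detections = collect_detections(lambda name, args: next(pages), {"ruleId": "ru_abc"})
print(len(all_detections))  # 2
```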

Workflow Integration:

  • Alert Triage: When an alert is generated by a rule, use this tool to retrieve all historical detections for that rule to understand the context and frequency.
  • Rule Effectiveness Analysis: Analyze the volume, timestamps, and details of detections to determine if a rule is too noisy, too quiet, or performing as expected.
  • Threat Hunting: If a rule is designed to detect a specific TTP or indicator, use this tool to find all instances where the rule has matched historical data.
  • Incident Scoping: During an incident, if a particular rule is relevant, retrieve its detections to help identify the scope and timeline of related events.
  • Compliance Reporting: Gather detections for specific rules related to compliance mandates over a certain period.

Use Cases:

  • Retrieve all detections for a rule ID obtained from a SIEM alert or a case management system.
  • Filter detections by their alert state (e.g., "ALERTING", "NOT_ALERTING") to focus on actionable events.
  • Paginate through a large number of detections if a rule is particularly verbose.
  • Monitor the output of a newly deployed or recently modified detection rule.
  • Investigate past occurrences of a threat detected by a specific rule.
  • Assess the alert to determine the likelihood of maliciousness.

Example Usage:

  • list_rule_detections(ruleId="ru_12345678-1234-1234-1234-1234567890ab", projectId="my-project", customerId="my-customer", region="us", pageSize=10)
  • list_rule_detections(ruleId="ru_12345678-1234-1234-1234-1234567890ab@v_abcdef_123456", projectId="my-project", customerId="my-customer", region="us", alertState="ALERTING", startTime="2025-10-01T00:00:00Z", endTime="2025-10-02T00:00:00Z", listBasis="CREATED_TIME")
  • list_rule_detections(ruleId="ru_12345678-1234-1234-1234-1234567890ab@-", projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Analyze the UDM events within each detection.
  • Enrich indicators using threat intelligence tools.
  • Check for rule execution errors using list_rule_errors.
  • Correlate with alerts using get_alerts.
  • Document findings in a case using create_case_comment.
create_reference_list

Create a new reference list in Chronicle SIEM.

Creates a reference list containing a collection of values that can be referenced in detection rules. Reference lists are useful for maintaining lists of known entities like IP addresses, domains, usernames, or other indicators that enhance detection logic.

Workflow Integration:

  • Use to create curated lists of security-relevant entities for detection enhancement.
  • Essential for maintaining allowlists, blocklists, or other categorized entity collections.
  • Enables dynamic detection rule behavior without hardcoding values in rule logic.
  • Supports threat intelligence integration by storing IOC lists in a searchable format.

Use Cases:

  • Create lists of trusted domains or IP ranges to reduce false positives.
  • Maintain lists of privileged user accounts for monitoring access patterns.
  • Store lists of malicious file hashes for detection and blocking.
  • Build collections of known bad domains from threat intelligence feeds.
  • Create regex patterns for detecting specific attack signatures or behaviors.

Syntax Types:

  • STRING: Exact string matching (default)
  • CIDR: IP address ranges and CIDR blocks
  • REGEX: Regular expression patterns for flexible matching
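
Because an entry that does not fit the declared syntax type is of little use in a rule, a client-side pre-flight check can catch obvious mistakes before the list is created. This is only a rough sketch; the server's own validation rules are not specified here.

```python
import ipaddress
import re

def check_entries(entries, syntax_type="STRING"):
    """Return entries that fail a rough client-side check for the declared
    syntax type. Server-side validation may be stricter; this is only a
    pre-flight sanity check."""
    bad = []
    for entry in entries:
        try:
            if syntax_type == "CIDR":
                ipaddress.ip_network(entry, strict=False)
            elif syntax_type == "REGEX":
                re.compile(entry)
            # STRING entries are exact matches and accepted as-is.
        except (ValueError, re.error):
            bad.append(entry)
    return bad

print(check_entries(["10.0.0.0/8", "not-a-network"], "CIDR"))  # ['not-a-network']
```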

Example Usage:

  • create_reference_list(name="admin_accounts", description="Administrative user accounts for privilege monitoring", entries=["admin", "administrator", "root", "system", "service"], projectId="my-project", customerId="my-customer", region="us", syntaxType="STRING")
  • create_reference_list(name="trusted_networks", description="Internal network ranges that are considered trusted", entries=["10.0.0.0/8", "192.168.0.0/16", "172.16.0.0/12"], projectId="my-project", customerId="my-customer", region="us", syntaxType="CIDR")

Next Steps (using MCP-enabled tools):

  • Reference the list in detection rules using the list name (e.g., reference_list.admin_accounts).
  • Update the list using update_reference_list as your data changes.
  • Retrieve the list contents using get_reference_list to verify entries.
  • Create detection rules that leverage the list for enhanced threat detection.
  • Set up automated processes to maintain the list with current threat intelligence.
get_reference_list

Get details and contents of a reference list in Chronicle SIEM.

Retrieves the metadata and optionally the full contents of a reference list. This is useful for reviewing list contents, verifying data integrity, and understanding what data is available for detection rules.

Workflow Integration:

  • Use to verify reference list contents before creating or modifying detection rules.
  • Essential for auditing data quality and consistency in security reference data.
  • Helps understand available data when troubleshooting detection rule issues.
  • Supports data governance by providing visibility into managed security datasets.

Use Cases:

  • Review threat intelligence lists before implementing new detection rules.
  • Verify that allowlists or blocklists contain the expected entries.
  • Audit reference list contents for compliance or security reviews.
  • Troubleshoot detection rule issues by examining referenced list data.
  • Generate reports on security reference data for operational documentation.

Example Usage:

  • get_reference_list(name="admin_accounts", projectId="my-project", customerId="my-customer", region="us", view="REFERENCE_LIST_VIEW_FULL")
  • get_reference_list(name="threat_ip_addresses", projectId="my-project", customerId="my-customer", region="us", view="REFERENCE_LIST_VIEW_BASIC")

Next Steps (using MCP-enabled tools):

  • Update the list using update_reference_list if changes are needed.
  • Reference the list data in detection rules to enhance security monitoring.
  • Compare with external threat intelligence sources to identify updates needed.
  • Document the list contents and update procedures for operational teams.
  • Set up regular reviews to maintain data quality and relevance.
update_reference_list

Update an existing reference list in Chronicle SIEM.

Updates the contents or description of an existing reference list. This is useful for maintaining current threat intelligence, updating allowlists/blocklists, or modifying reference data as your security requirements evolve.

Workflow Integration:

  • Use to keep reference lists current with the latest threat intelligence or policy changes.
  • Essential for maintaining accurate security reference data used in detection rules.
  • Enables automated reference list updates as part of threat intelligence feeds.
  • Supports operational workflows that modify security policies or allowlists.

Use Cases:

  • Update threat intelligence lists with newly discovered IOCs.
  • Modify allowlists to include new trusted domains or IP ranges.
  • Remove outdated or invalid entries from reference lists.
  • Update user lists as organizational structure changes.
  • Refresh regex patterns to improve detection accuracy.

Update Behavior:

  • If entries are provided, they completely replace the existing entries.
  • If description is provided, it updates the reference list description.
  • At least one of entries or description should be provided.
  • An updateMask is automatically generated based on the arguments supplied.
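
The mask-generation behavior can be pictured roughly as follows; the field names mirror the tool's parameters, though the server's exact mask paths are an assumption in this sketch.

```python
def build_update_mask(entries=None, description=None):
    """Derive an updateMask from the supplied arguments.

    The field names mirror the tool's parameters; the server's exact
    mask paths are an assumption in this sketch.
    """
    fields = []
    if entries is not None:
        fields.append("entries")
    if description is not None:
        fields.append("description")
    if not fields:
        raise ValueError("provide at least one of entries or description")
    return ",".join(fields)

print(build_update_mask(entries=["admin", "root"], description="Updated"))
```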

Example Usage:

  • update_reference_list(name="admin_accounts", entries=["admin", "administrator", "root", "system", "service", "superuser"], projectId="my-project", customerId="my-customer", region="us")
  • update_reference_list(name="admin_accounts", description="Updated administrative user accounts for enhanced privilege monitoring", projectId="my-project", customerId="my-customer", region="us")
  • update_reference_list(name="trusted_networks", entries=["10.0.0.0/8", "192.168.0.0/16", "172.16.0.0/12", "203.0.113.0/24"], description="Updated trusted network ranges including new office location", projectId="my-project", customerId="my-customer", region="us")

Next Steps (using MCP-enabled tools):

  • Verify the updates using get_reference_list to confirm changes were applied correctly.
  • Test detection rules that reference the updated list to ensure they work as expected.
  • Monitor detection rule performance to assess the impact of the changes.
  • Document the reason for updates for audit and operational tracking.
  • Communicate significant changes to teams that rely on the reference list.
get_alert_latest_investigation

Retrieves the most recent Triage Agent investigation for a specific alert ID.

Workflow Integration:

  • Specific Alert Context: When an agent or user is focused on a particular alert, this tool directly fetches the associated investigation.
  • Status Check: Quickly determine if an alert has been investigated and get the result.
  • Avoiding Duplicates: Useful before considering triggering a new investigation for the same alert.

Use Cases:

  • User: "What is the latest investigation result for alert 'alert-123'?"
  • Agent needs to display the summary of the investigation for a specific alert in the UI.
  • Automated workflow needs to check the outcome of the latest investigation on an alert before proceeding.

Example Usage:

  • get_alert_latest_investigation(projectId='123', region='us', customerId='abc', alertId='alert-123')

Next Steps (using MCP-enabled tools):

  • The chat agent will parse the returned 'Investigation' object to answer user questions about this specific alert's investigation.
  • Depending on the findings, next steps, and the user's follow-up questions, further actions might involve:
  • 'get_case': If the investigation is linked to a case.
  • UDM search tools: To pivot on entities or indicators from the investigation findings.
  • 'trigger_investigation': If the latest investigation failed or is outdated and a fresh one is warranted.
  • Examine the NextSteps list to run 'SEARCHABLE' queries with UDM search tools or display 'MANUAL' actions to the user.
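
Routing the NextSteps list as described above might look like the following sketch; the step shape (a type field of 'SEARCHABLE' or 'MANUAL') is illustrative, not a documented schema.

```python
def route_next_steps(next_steps):
    """Split an investigation's NextSteps into queries to run and manual
    actions to surface. Assumes each step is a dict with a 'type' of
    'SEARCHABLE' or 'MANUAL', which is an illustrative shape only."""
    searchable = [s for s in next_steps if s.get("type") == "SEARCHABLE"]
    manual = [s for s in next_steps if s.get("type") == "MANUAL"]
    return searchable, manual

queries, actions = route_next_steps([
    {"type": "SEARCHABLE", "query": 'principal.ip = "203.0.113.5"'},
    {"type": "MANUAL", "action": "Contact the asset owner"},
])
print(len(queries), len(actions))  # 1 1
```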
get_investigation_by_id

Retrieves a single, complete agent-generated investigation report (e.g., from the SecOps Triage Agent) by its full resource name. This tool is primarily used to power chat-based interactions, allowing users to ask natural language questions about the investigation.

By fetching the detailed investigation data, a chat agent can understand the context and provide informed answers to queries like "What was the outcome of this investigation?", "Summarize the findings.", "What steps did the agent take?", or "Is this a true positive?".

The tool returns all details for a specific investigation, including:

  • Core attributes: Summary, verdict (e.g., TRUE_POSITIVE, FALSE_POSITIVE), confidence score, status, and entities.
  • Associated Subjects: IDs of any alerts or cases linked to this investigation.
  • Detailed Steps: The complete list of step-by-step actions taken by the agent during the investigation.

Workflow Integration:

  • Conversational Interfaces (Primary): The main purpose is to serve as the data backend for a chat agent (e.g., within Google SecOps or other MCP clients like Claude desktop). The chat agent calls this tool to get the structured report, then uses an LLM to answer user questions about its content.
  • Foundation for Custom Agents: Enables specialized agents to retrieve and use the output of the SecOps Triage Agent.
  • UI Display: Provides data for detailed investigation report views.
  • Automation & Playbooks: Supports automated workflows requiring investigation results.

Use Cases:

  • Chat Q&A: A user asks the chat assistant: "What did the triage agent find for investigation 'agent_alert123_investigation456'?" The assistant calls this tool to fetch the details and generate a summary.
  • Chat Summarization: User: "Summarize the key findings."
  • Chat Step Review: User: "What actions did the agent perform?"
  • Chat Verdict Check: User: "Was 'agent_alert123_investigation456' a true positive?"
  • A custom-built Phishing Analysis agent calls get_investigation_by_id to fetch the SecOps Triage Agent's report.
  • An analyst manually reviews the full report details in the UI.

Example Usage:

  • get_investigation_by_id(projectId='123', region='us', customerId='abc', investigationId='agent_alert123_investigation456')

Next Steps (using MCP-enabled tools):

  • The chat agent will parse the returned 'Investigation' object to answer user questions.
  • Depending on the findings, next steps, and the user's follow-up questions, further actions might involve:
  • 'get_case': If the user asks for more details about a linked case.
  • UDM search tools: If the user wants to pivot on entities or indicators mentioned in the findings.
  • Examine the NextSteps list to run 'SEARCHABLE' queries with UDM search tools or display 'MANUAL' actions to the user.
trigger_investigation

Manually starts a new SecOps Triage Agent investigation for a specific alert.

This tool is used to initiate an on-demand investigation by the agent for the given alert ID. It's useful when automatic investigation didn't run or when a user wants to re-investigate. This can be invoked directly by users or through a chat agent interpreting a user's request.

Workflow Integration:

  • Conversational Interfaces: A chat agent can call this tool when a user asks to investigate a specific alert (e.g., "Run an investigation on alert 123").
  • Manual Escalation: A user can trigger this from a UI button on an alert details page.

Use Cases:

  • A user asks the chat agent: "Please investigate alert 'abc-123'".
  • An analyst clicks a "Run Investigation" button for an alert that wasn't automatically processed.
  • Running a new investigation on an alert where circumstances have changed.

Example Usage:

  • trigger_investigation(projectId='123', region='us', customerId='abc', alertId='alert-xyz-789')

Next Steps (using MCP-enabled tools):

  • Immediately after triggering, the returned Investigation object will have a unique name/ID.
  • Use 'get_investigation_by_id' with the new investigation name/ID to poll for status updates and view results once completed. A chat agent should inform the user that the investigation has started and provide the ID.
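
The trigger-then-poll pattern can be sketched as follows. Here fetch stands in for the call that retrieves the investigation by ID, and the status values checked ('RUNNING', 'PENDING') are illustrative assumptions; consult the returned Investigation object for the actual field names.

```python
import time

def wait_for_investigation(fetch, investigation_id, poll_seconds=0, max_polls=10):
    """Poll for an investigation until it leaves a running state.

    fetch stands in for a call that retrieves the investigation by ID.
    The status values 'RUNNING' and 'PENDING' are illustrative
    assumptions about the returned Investigation object.
    """
    for _ in range(max_polls):
        investigation = fetch(investigation_id)
        if investigation.get("status") not in ("RUNNING", "PENDING"):
            return investigation
        time.sleep(poll_seconds)
    return None  # still not finished after max_polls attempts

# Stubbed backend that completes on the second poll:
states = iter([{"status": "RUNNING"},
               {"status": "COMPLETED", "verdict": "FALSE_POSITIVE"}])
done = wait_for_investigation(lambda _id: next(states), "inv-123")
print(done["status"])  # COMPLETED
```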
get_agent_settings

Retrieves the current configuration settings for the SecOps Investigation Agent within a specific SecOps instance.

This tool allows users or other agents to inspect the behavior of the automated investigation agent, such as whether it's enabled, how long it waits before starting an investigation, and any filters controlling which alerts it processes.

Workflow Integration:

  • Conversational Interfaces: Enables a chat agent to answer user questions about the Investigation Agent's current setup (e.g., "Is auto-investigation turned on?", "What is the current alert filter for the investigation agent?").
  • Pre-check for Updates: Useful to call before attempting an update via update_agent_settings to see the current state of the investigation agent.
  • Auditing & Verification: Allows administrators to verify that the investigation agent is configured as expected.

Use Cases:

  • Chat Q&A: User: "Is the Investigation Agent active?"
  • Chat Q&A: User: "What's the delay before the investigation agent starts an auto-investigation?"
  • Chat Q&A: User: "Are there any filters applied to the Investigation Agent?"
  • An administrator checks the settings before making changes to the investigation agent's behavior.
  • An automated script fetches the settings to ensure compliance with security policies.

Example Usage:

  • get_agent_settings(projectId='123', region='us', customerId='abc')

Next Steps (using MCP-enabled tools):

  • Based on the returned settings, the user/agent might decide to:
  • Trigger a manual investigation using trigger_investigation if auto-investigation is disabled or the alert was filtered out.
  • Inform the user of the current configuration in response to their query.
fetch_alert_data

Retrieves a comprehensive profile of a specific SIEM alert, aggregating data from multiple sources to provide full context for the enrichment process. This tool combines several other tools: list_case_alerts (for alert metadata), list_involved_entities (for entity details), and list_connector_events (for raw event data), aggregating all relevant data related to a specific alert ID into a single profile.

It returns:

  • Case Alert Metadata: Basic details like rule generator, product, vendor, and source system URLs.
  • Involved Entities: A detailed list of all entities associated with the alert (e.g., IPs, Hostnames, Users), including their type, whether they are marked as suspicious, attacker, or pivot, and any additional properties from the SOAR backend.
  • Involved Events: The raw event data that triggered the alert, including source system names and key-value pairs for all raw fields.
  • Executed Actions History: A history of manual actions previously executed on this alert, including their status, result messages, and JSON outputs.
  • Most Recent Investigation: Details of the latest AI-driven or manual investigation, including the verdict, confidence level, summary, and specific investigation steps taken.
  • Comments: A list of all analyst comments and notes associated with the alert.

Workflow Integration:

  • This is typically the FIRST tool an agent should call after receiving a SIEM Alert ID.
  • It provides all the necessary context to decide which enrichment actions are needed.
  • Use the entity identifiers and types from this tool to target specific entities in subsequent enrichment calls.

Use Cases:

  • Build a complete understanding of an alert's context before planning any response or enrichment.
  • Identify all relevant entities and events that require further investigation.
  • Review previous actions and investigations to avoid duplicate work and leverage existing findings.
fetch_enrichment_actions

Retrieves a curated list of SOAR integration actions available for enriching a specific SIEM alert. This tool is similar to list_integrations and list_integration_actions, but it filters specifically for actions that are suitable for enrichment and are enabled for the environment where the alert originated.

For each integration, it provides:

  • Integration ID and Display Name: To identify the tool provider (e.g., 'VirusTotal', 'SafeBreach').
  • Available Actions: A list of specific enrichment functions (e.g., 'Get IP Report', 'Enrich Host').
  • Action Parameters: Detailed information for each parameter, including its name and description, type (e.g., 'String', 'Boolean'), mandatory flag, and default_value and optional_values_json for dropdowns.
  • AI Description: A detailed, structured description of the action designed for the AI. It typically includes a general description (what the action does and what data it retrieves), a parameters description (a table explaining each parameter's purpose and constraints), and a flow description (a step-by-step breakdown of the action's execution logic).
  • Entity Types: A list of specific entity types that this action supports (e.g., 'ADDRESS', 'HOSTNAME', 'FILEHASH'). Crucial: You should only attempt to run this action on entities that match one of these types.

Workflow Integration:

  • Use this tool to discover what enrichment capabilities are available for the current alert.
  • Critical Step: Compare the entity_types of each available action against the actual entities found in the alert (via fetch_alert_data). Only plan to execute actions where there is a match.
  • The integration and display_name retrieved here are required for execute_actions.

Use Cases:

  • Discover available threat intelligence tools for enriching IPs or domains found in an alert.
  • Identify EDR actions that can provide host or process details for investigation.
  • Understand what parameters are required for specific enrichment actions.
execute_actions

Executes one or more enrichment actions on a specific SIEM alert. This tool provides a simplified and batch-oriented API compared to the standard execute_manual_action tool, optimized for automated enrichment workflows.

It accepts a list of actions to be performed. Each action execution requires:

  • Action Provider and Name: The integration and specific action identifier (retrieved from fetch_enrichment_actions).
  • Integration Instance: The specific instance GUID to run the action against.
  • Scope and Script Name: Operational parameters for the SOAR backend.
  • Target Entities: A list of entities (Identifier, Type, and isInternal flag) that the action should be performed on.
  • Parameters: A dictionary of key-value pairs for any specific parameters required by the action.

Critical Constraint - Entity Types:

  • You should ONLY execute an action on entities whose type matches one of the supported entity_types defined for that action in the fetch_enrichment_actions response.
  • For example, if an action supports ['ADDRESS'], do not attempt to run it on a HOSTNAME entity, even if they seem related.
  • Mismatched entity types will likely result in action failure or irrelevant results.
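
The entity-type constraint above amounts to a simple filter before building the Target Entities list. The entity dict shape (Identifier, Type, isInternal) follows the execute_actions description, though the exact key casing is an assumption in this sketch.

```python
def eligible_entities(entities, supported_types):
    """Filter alert entities down to those whose type matches one of the
    action's supported entity_types. The entity dict shape
    (Identifier/Type/isInternal keys) is an illustrative assumption."""
    return [e for e in entities if e.get("Type") in supported_types]

entities = [
    {"Identifier": "203.0.113.5", "Type": "ADDRESS", "isInternal": False},
    {"Identifier": "host-01", "Type": "HOSTNAME", "isInternal": True},
]
targets = eligible_entities(entities, ["ADDRESS"])
print(targets)
```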

Workflow Integration:

  • This is the FINAL step in an enrichment loop where the agent triggers the chosen actions.
  • It returns the results of all executed actions, including status (e.g., 'COMPLETED', 'FAULTED'), human-readable messages, and detailed result values/JSON objects.
  • If an action is asynchronous, the status will indicate it, and the results can be checked later.

Use Cases:

  • Batch execute enrichment actions on multiple entities identified in an alert (e.g., enrichment for 3 different suspicious IPs).
  • Trigger complex enrichment workflows by calling multiple actions in a single tool invocation.

Get MCP tool specifications

To get the MCP tool specifications for all tools in an MCP server, use the tools/list method. The following example demonstrates how to use curl to list all tools and their specifications currently available within the MCP server.

Curl Request
curl --location 'https://chronicle.us.rep.googleapis.com/mcp' \
--header 'content-type: application/json' \
--header 'accept: application/json, text/event-stream' \
--data '{
    "method": "tools/list",
    "jsonrpc": "2.0",
    "id": 1
}'
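
The same tools/list request can be composed in Python. This sketch mirrors the curl example, building the HTTP request without sending it; valid authentication credentials would also be required in practice.

```python
import json
import urllib.request

# Regional MCP endpoint from the example above; substitute your region.
ENDPOINT = "https://chronicle.us.rep.googleapis.com/mcp"

payload = {"method": "tools/list", "jsonrpc": "2.0", "id": 1}
request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "content-type": "application/json",
        "accept": "application/json, text/event-stream",
        # A valid Authorization header is also required in practice.
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; the request is built
# but not sent here, since credential setup is out of scope.
print(request.get_method(), request.full_url)
```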