This document offers informal guidance on how you can respond to findings of suspicious activity in your AI resources. The recommended steps might not be appropriate for all findings and might impact your operations. Before you take any action, investigate the finding, assess the information that you gather, and decide how to respond.
The techniques in this document aren't guaranteed to be effective against any previous, current, or future threats that you face. To understand why Security Command Center does not provide official remediation guidance for threats, see Remediating threats.
Before you begin
- Review the finding. Note the affected resources and the detected binaries, processes, or libraries.
- To learn more about the finding that you're investigating, search for the finding in the Threat findings index.
General recommendations
- Contact the owner of the affected resource.
- Work with your security team to identify unfamiliar resources, including Vertex AI Agent Engine instances, sessions, service accounts, and agent identities. Delete resources that were created with unauthorized accounts.
- To identify and fix overly permissive roles, use IAM Recommender. Delete or disable potentially compromised accounts.
- For the Enterprise service tier, investigate any identity and access findings.
- For further investigation, consider using incident response services such as Mandiant.
- For forensic analysis, collect and back up the logs of affected resources.
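As a sketch of the log-collection step, the following gcloud command exports recent Cloud Audit Logs for a project to a local file. `PROJECT_ID` is a placeholder; adjust the filter and freshness window to match the resources and time range in your finding.

```shell
# Back up recent audit logs for the affected project for forensic analysis.
# PROJECT_ID is a placeholder for the affected project.
gcloud logging read \
  'logName:"cloudaudit.googleapis.com"' \
  --project=PROJECT_ID \
  --freshness=7d \
  --format=json > audit-logs-backup.json
```

Store the exported file outside the potentially compromised project so that an attacker can't tamper with the evidence.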
Potentially compromised service account or agent identity
- To remove a compromised agent identity, delete its corresponding Vertex AI Agent Engine instance.
- Disable the potentially compromised service account. For best practices, see Disable unused service accounts before deleting them.
- Disable service account keys for the potentially compromised project.
- To see when your service accounts and keys were last used to call a Google API, use Activity Analyzer. For more information, see View recent usage for service accounts and keys.
- If you are confident that it is safe to delete the service account, delete it.
- To use organization policies to restrict service account usage, see Restricting service account usage.
- To use Identity and Access Management to restrict service account or service account key usage, see Deny access to resources.
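The disable-then-delete flow above can be sketched with gcloud. `SA_EMAIL` and `KEY_ID` are placeholders; substitute the values from your finding.

```shell
# Disable the potentially compromised service account.
# Disabling is reversible, so it's safer than deleting immediately.
gcloud iam service-accounts disable SA_EMAIL

# List the account's keys, then disable any that may be compromised.
gcloud iam service-accounts keys list --iam-account=SA_EMAIL
gcloud iam service-accounts keys disable KEY_ID --iam-account=SA_EMAIL

# Only after your investigation confirms it is safe, delete the account.
gcloud iam service-accounts delete SA_EMAIL
```

Disabling first preserves the account's role bindings and audit trail while you investigate; deletion is permanent and can break workloads that still reference the account.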
Exfiltration and extraction
- Revoke roles for the principal that is listed in the Principal email row in the finding details until the investigation is complete.
- To stop further exfiltration, add restrictive IAM policies to the affected resources.
- To determine whether the affected datasets have sensitive information, inspect them with Sensitive Data Protection. You can configure the inspection job to send the results to Security Command Center. Depending on the quantity of information, Sensitive Data Protection costs can be significant. Follow best practices for keeping Sensitive Data Protection costs under control.
- Use VPC Service Controls to create security perimeters around data services like BigQuery and Cloud SQL to prevent data transfers to projects outside the perimeter.
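As an example of revoking a role from the principal named in the finding, the following gcloud command removes a project-level binding. `PROJECT_ID`, `PRINCIPAL_EMAIL`, and the role shown are placeholders; revoke whichever roles the principal actually holds on the affected resources.

```shell
# Revoke a suspect principal's role on the affected project while the
# investigation is in progress. All names here are placeholders.
gcloud projects remove-iam-policy-binding PROJECT_ID \
  --member=user:PRINCIPAL_EMAIL \
  --role=roles/bigquery.dataViewer
```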
Suspicious token generation
- Validate the need for cross-project token generation. If it's unnecessary, remove the IAM role binding in the target project that grants the iam.serviceAccounts.getAccessToken, iam.serviceAccounts.getOpenIdToken, or iam.serviceAccounts.implicitDelegation permission to the principal from the source project.
- Investigate the logs that are specified in the finding to validate the token generation methods used by your agentic workloads.
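As a sketch of removing such a binding, the following command revokes the Service Account Token Creator role, which grants the token-generation permissions listed above, from a source-project principal in the target project. `TARGET_PROJECT_ID` and `SOURCE_SA_EMAIL` are placeholders; if the permissions were granted through a custom role or on an individual service account, remove that binding instead.

```shell
# Remove the cross-project binding that lets the source principal mint
# tokens in the target project. Names are placeholders.
gcloud projects remove-iam-policy-binding TARGET_PROJECT_ID \
  --member=serviceAccount:SOURCE_SA_EMAIL \
  --role=roles/iam.serviceAccountTokenCreator
```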
What's next
- Learn how to work with threat findings in Security Command Center.
- Refer to the Threat findings index.
- Learn how to review a finding through the Google Cloud console.
- Learn about the services that generate threat findings.
- Refer to a list of all AI findings.