Collect Google Cloud Secure Web Proxy logs
This document explains how to ingest Google Cloud Secure Web Proxy logs into Google Security Operations by using a Google Cloud Storage V2 feed.
Secure Web Proxy is a cloud-first service that helps you secure egress web traffic (HTTP and HTTPS). It provides a managed proxy solution that enables flexible and granular policies based on cloud-first identities and web applications. Secure Web Proxy identifies traffic that does not conform to policy and logs it to Cloud Logging, allowing you to monitor internet usage, discover threats to your network, and respond to security incidents.
Before you begin
Ensure that you have the following prerequisites:
- A Google SecOps instance
- A Google Cloud project with Cloud Storage API enabled
- Permissions to create and manage GCS buckets
- Permissions to manage IAM policies on GCS buckets
- Secure Web Proxy is active and configured in your Google Cloud environment
- Privileged access to Google Cloud and appropriate permissions to access Secure Web Proxy logs
- Permissions to create and manage Cloud Logging sinks
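To confirm that Secure Web Proxy is active, you can list its gateways in a region. This is a minimal sketch; `us-central1` is a placeholder for the region where your gateway is deployed:

```shell
# List Secure Web Proxy gateways in a region to confirm the service is active.
# LOCATION is a placeholder; replace it with your gateway's region.
LOCATION="us-central1"
if command -v gcloud >/dev/null 2>&1; then
  gcloud network-services gateways list --location="${LOCATION}"
fi
```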
Create Google Cloud Storage bucket
Using Google Cloud Console
- Go to the Google Cloud Console.
- Select your project or create a new one.
- In the navigation menu, go to Cloud Storage > Buckets.
- Click Create bucket.
- Provide the following configuration details:

| Setting | Value |
|---|---|
| Name your bucket | Enter a globally unique name (for example, `gcp-swp-logs`) |
| Location type | Choose based on your needs (Region, Dual-region, Multi-region) |
| Location | Select the location (for example, `us-central1`) |
| Storage class | Standard (recommended for frequently accessed logs) |
| Access control | Uniform (recommended) |
| Protection tools | Optional: Enable object versioning or retention policy |

- Click Create.
Using gcloud command-line tool
Alternatively, create a bucket using the `gcloud` command:

```
gcloud storage buckets create gs://gcp-swp-logs \
    --location=us-central1 \
    --default-storage-class=STANDARD
```

Replace the following:
- `gcp-swp-logs`: Your desired bucket name (globally unique).
- `us-central1`: Your preferred region (for example, `us-central1`, `europe-west1`).
Configure Cloud Logging to export Secure Web Proxy logs to GCS
Secure Web Proxy automatically logs proxy transaction logs to Cloud Logging. To export these logs to Cloud Storage, you must create a Cloud Logging sink.
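Before creating the sink, you can optionally confirm that Secure Web Proxy entries are reaching Cloud Logging. This is a sketch; the project ID is a placeholder, and note that in the stored `logName` the slash in the log ID is URL-encoded as `%2F`:

```shell
# Read the five most recent Secure Web Proxy gateway_requests entries.
# PROJECT_ID is a placeholder; replace it with your Google Cloud project ID.
PROJECT_ID="my-project"
LOG_FILTER="logName=\"projects/${PROJECT_ID}/logs/networkservices.googleapis.com%2Fgateway_requests\""
echo "${LOG_FILTER}"
if command -v gcloud >/dev/null 2>&1; then
  gcloud logging read "${LOG_FILTER}" --limit=5 --format=json
fi
```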
Using Google Cloud Console
- In the Google Cloud Console, go to Logging > Log Router.
- Click Create sink.
- Provide the following configuration details:
  - Sink name: Enter a descriptive name (for example, `swp-export-sink`).
  - Sink description: Optional description.
- Click Next.
- In the Select sink service section:
- Sink service: Select Cloud Storage bucket.
  - Select Cloud Storage bucket: Select `gcp-swp-logs` from the dropdown.
- Click Next.
- In the Choose logs to include in sink section, enter the following filter query:

```
logName="projects/<YOUR_PROJECT_ID>/logs/networkservices.googleapis.com/gateway_requests"
```

Replace `<YOUR_PROJECT_ID>` with your Google Cloud project ID.
- Click Next.
- Review the configuration and click Create sink.
After creating the sink, Cloud Logging will display the sink's writer identity (a service account email). Copy this service account email for the next step.
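The writer identity can also be retrieved from the command line. This is a sketch assuming the sink name used above:

```shell
# Print the writer identity (service account) of the Cloud Logging sink.
SINK_NAME="swp-export-sink"
if command -v gcloud >/dev/null 2>&1; then
  gcloud logging sinks describe "${SINK_NAME}" --format='value(writerIdentity)'
fi
```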
Using gcloud command-line tool
Alternatively, create a sink using the `gcloud` command:

```
gcloud logging sinks create swp-export-sink \
    storage.googleapis.com/gcp-swp-logs \
    --log-filter='logName="projects/<YOUR_PROJECT_ID>/logs/networkservices.googleapis.com/gateway_requests"'
```

Replace the following:
- `swp-export-sink`: Your desired sink name.
- `gcp-swp-logs`: Your GCS bucket name.
- `<YOUR_PROJECT_ID>`: Your Google Cloud project ID.
Grant permissions to Cloud Logging service account
The Cloud Logging sink writer identity service account needs permissions to write logs to your GCS bucket.
Using Google Cloud Console
- Go to Cloud Storage > Buckets.
- Click your bucket name (`gcp-swp-logs`).
- Go to the Permissions tab.
- Click Grant access.
- Provide the following configuration details:
  - Add principals: Paste the Cloud Logging sink writer identity service account email (for example, `service-123456789@gcp-sa-logging.iam.gserviceaccount.com`).
  - Assign roles: Select Storage Object Admin.
- Click Save.
Using gcloud command-line tool
Alternatively, grant permissions using the `gcloud` command:

```
gcloud storage buckets add-iam-policy-binding gs://gcp-swp-logs \
    --member="serviceAccount:<LOGGING_SERVICE_ACCOUNT_EMAIL>" \
    --role="roles/storage.objectAdmin"
```

Replace the following:
- `gcp-swp-logs`: Your bucket name.
- `<LOGGING_SERVICE_ACCOUNT_EMAIL>`: The Cloud Logging sink writer identity service account email.
Using gsutil command-line tool (legacy)
Assign the Object Admin role to your logging service account:
```
gsutil iam ch serviceAccount:<LOGGING_SERVICE_ACCOUNT_EMAIL>:objectAdmin \
    gs://gcp-swp-logs
```
Verify permissions
To verify the permissions were granted correctly:
```
gcloud storage buckets get-iam-policy gs://gcp-swp-logs \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/storage.objectAdmin"
```
You should see the Cloud Logging service account email in the output.
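Cloud Logging writes exported log files to the bucket in hourly batches, so after about an hour you can confirm that objects are arriving. This is a sketch assuming the bucket name used above:

```shell
# List the first few exported log objects in the bucket.
BUCKET="gs://gcp-swp-logs"
if command -v gcloud >/dev/null 2>&1; then
  gcloud storage ls --recursive "${BUCKET}/" | head -n 10
fi
```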
Retrieve the Google SecOps service account
Google SecOps uses a unique service account to read data from your GCS bucket. You must grant this service account access to your bucket.
Configure a feed in Google SecOps to ingest GCP Secure Web Proxy logs
- Go to SIEM Settings > Feeds.
- Click Add New Feed.
- Click Configure a single feed.
- In the Feed name field, enter a name for the feed (for example, `GCP Secure Web Proxy Logs`).
- Select Google Cloud Storage V2 as the Source type.
- Select GCP Secure Web Proxy as the Log type.
- Click Get Service Account. A unique service account email is displayed (for example, `chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com`). Copy this email address for use in a later step.
- Click Next.
- Specify values for the following input parameters:
  - Storage bucket URL: Enter the GCS bucket URI with the prefix path (for example, `gs://gcp-swp-logs/`). Replace `gcp-swp-logs` with your GCS bucket name.
  - Source deletion option: Select the deletion option according to your preference:
    - Never: Never deletes any files after transfers (recommended for testing).
    - Delete transferred files: Deletes files after successful transfer.
    - Delete transferred files and empty directories: Deletes files and empty directories after successful transfer.
  - Maximum File Age: Include files modified in the last number of days. The default is 180 days.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
Grant IAM permissions to the Google SecOps service account
The Google SecOps service account needs Storage Object Viewer role on your GCS bucket.
Using Google Cloud Console
- Go to Cloud Storage > Buckets.
- Click your bucket name.
- Go to the Permissions tab.
- Click Grant access.
- Provide the following configuration details:
- Add principals: Paste the Google SecOps service account email.
- Assign roles: Select Storage Object Viewer.
- Click Save.
Using gcloud command-line tool
Alternatively, grant permissions using the `gcloud` command:

```
gcloud storage buckets add-iam-policy-binding gs://gcp-swp-logs \
    --member="serviceAccount:<SECOPS_SERVICE_ACCOUNT_EMAIL>" \
    --role="roles/storage.objectViewer"
```

Replace the following:
- `gcp-swp-logs`: Your bucket name.
- `<SECOPS_SERVICE_ACCOUNT_EMAIL>`: The Google SecOps service account email.
Using gsutil command-line tool (legacy)
Run the following command to grant the SecOps service account Object Viewer permissions:
```
gsutil iam ch serviceAccount:<SECOPS_SERVICE_ACCOUNT_EMAIL>:objectViewer \
    gs://gcp-swp-logs
```
Verify permissions
To verify the permissions were granted correctly:
```
gcloud storage buckets get-iam-policy gs://gcp-swp-logs \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/storage.objectViewer"
```
You should see the Google SecOps service account email in the output.
UDM mapping table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| httpRequest.latency, jsonPayload.@type, logName | additional.fields | Merged with latency_label (key "HTTPRequest Latency", value from latency), type_label (key "Log Type", value from @type), logname (key "Log Name", value from logName) |
| receiveTimestamp | metadata.collected_timestamp | Parsed as RFC3339 timestamp |
| metadata.event_type | Set to NETWORK_HTTP if has_principal true, has_target true, protocol matches (?i)http; NETWORK_CONNECTION if has_principal true, has_target true, network != ""; USER_LOGIN if has_principal true, has_target true, has_principal_user true; STATUS_UPDATE if has_principal true; GENERIC_EVENT else | |
| insertId | metadata.product_log_id | Value copied directly |
| httpRequest.protocol | network.application_protocol | Extracted protocol using grok pattern %{DATA:protocol}/%{INT:http_version}, set if in ["HTTP","HTTPS"] |
| httpRequest.protocol | network.application_protocol_version | Extracted http_version using grok pattern %{DATA:protocol}/%{INT:http_version} |
| httpRequest.requestMethod | network.http.method | Value copied directly |
| httpRequest.userAgent | network.http.parsed_user_agent | Value copied directly, converted to parseduseragent |
| httpRequest.status | network.http.response_code | Converted to string, then to integer |
| httpRequest.userAgent | network.http.user_agent | Value copied directly |
| httpRequest.responseSize | network.received_bytes | Value copied directly, converted to uinteger |
| httpRequest.requestSize | network.sent_bytes | Value copied directly, converted to uinteger |
| httpRequest.serverIp | principal.asset.ip | Extracted IP using grok pattern %{IP:server_ip}, set if not empty |
| httpRequest.serverIp | principal.ip | Extracted IP using grok pattern %{IP:server_ip}, set if not empty |
| jsonPayload.enforcedGatewaySecurityPolicy.matchedRules[].action | security_result.action | Set to ALLOW if rule.action == ALLOW, BLOCK if rule.action == DENIED |
| jsonPayload.enforcedGatewaySecurityPolicy.matchedRules[].action | security_result.action_details | Value copied directly from rule.action |
| jsonPayload.enforcedGatewaySecurityPolicy.requestWasTlsIntercepted, resource.labels.gateway_name, resource.labels.resource_container, resource.labels.gateway_type | security_result.detection_fields | Merged with tls_intercepted_label (key "requestWasTlsIntercepted", value from requestWasTlsIntercepted), gateway_name_label (key "gateway-name", value from gateway_name), resource_container_label (key "resource_container", value from resource_container), gateway_type_label (key "gateway-type", value from gateway_type) |
| jsonPayload.enforcedGatewaySecurityPolicy.matchedRules[].name | security_result.rule_name | Value copied directly |
| severity | security_result.severity | Set to CRITICAL if severity == CRITICAL; ERROR if severity == ERROR; HIGH if severity in [ALERT, EMERGENCY]; INFORMATIONAL if severity in [INFO, NOTICE]; LOW if severity == DEBUG; MEDIUM if severity == WARNING; UNKNOWN_SEVERITY else |
| jsonPayload.enforcedGatewaySecurityPolicy.hostname | target.asset.hostname | Value copied directly |
| httpRequest.remoteIp | target.asset.ip | Extracted IP using grok pattern %{IP:remote_ip}, set if not empty |
| jsonPayload.enforcedGatewaySecurityPolicy.hostname | target.hostname | Value copied directly |
| httpRequest.remoteIp | target.ip | Extracted IP using grok pattern %{IP:remote_ip}, set if not empty |
| resource.labels.location | target.resource.attribute.cloud.availability_zone | Value copied directly |
| resource.labels.network_name, resource.type | target.resource.attribute.labels | Merged with rc_network_name_label (key "rc_network_name", value from network_name), resource_type (key "Resource Type", value from resource.type) |
| httpRequest.requestUrl | target.url | Value copied directly |
Need more help? Get answers from Community members and Google SecOps professionals.