Collect SAP BTP logs
This document explains how to ingest SAP Business Technology Platform (BTP) logs to Google Security Operations using Google Cloud Storage V2.
SAP Business Technology Platform (BTP) is a cloud platform that provides application development, integration, data management, and analytics capabilities. It generates security audit logs, access logs, and service event logs that are critical for monitoring platform activity and detecting security threats. A Cloud Run function polls the SAP BTP Audit Log Retrieval API on a schedule, writes logs to a GCS bucket in NDJSON format, and Google SecOps ingests them through a GCS V2 feed.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance
- A GCP project with Cloud Storage API enabled
- Permissions to create and manage GCS buckets
- Permissions to manage IAM policies on GCS buckets
- Permissions to create Cloud Run services, Pub/Sub topics, and Cloud Scheduler jobs
- Administrator access to SAP BTP with permissions to configure audit log retrieval
- The SAP BTP Audit Log Retrieval API enabled in your SAP BTP subaccount
Create Google Cloud Storage bucket
- Go to the Google Cloud Console.
- Select your project or create a new one.
- In the navigation menu, go to Cloud Storage > Buckets.
- Click Create bucket.
Provide the following configuration details:
| Setting | Value |
|---|---|
| Name your bucket | Enter a globally unique name (for example, sap-btp-logs) |
| Location type | Choose based on your needs (Region, Dual-region, Multi-region) |
| Location | Select the location (for example, us-central1) |
| Storage class | Standard (recommended for frequently accessed logs) |
| Access control | Uniform (recommended) |
| Protection tools | Optional: Enable object versioning or retention policy |

Click Create.
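Alternatively, you can create the bucket from the command line. The following is a minimal sketch using the gcloud storage CLI, assuming the example bucket name and region from the table above:

```bash
# Create the bucket with uniform bucket-level access
# (bucket name, project, and region are the examples used in this guide)
gcloud storage buckets create gs://sap-btp-logs \
  --project=your-project \
  --location=us-central1 \
  --uniform-bucket-level-access
```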
Collect SAP BTP API credentials
Enable Audit Log Retrieval API
- Sign in to the SAP BTP Cockpit.
- Navigate to your subaccount.
- Go to Service Marketplace.
- Search for Audit Log Retrieval API (technical name: auditlog-api).
- Click Create to create a service instance.
- Select the service plan (for example, default).
- Click Create.
Create service key
- In the SAP BTP Cockpit, navigate to your Audit Log Retrieval API service instance.
- Go to the Service Keys tab.
- Click Create Service Key.
- Enter a name (for example, google-secops-integration).
- Click Create.
- Copy and save the following details from the service key in a secure location:
- url: The Audit Log Retrieval API base URL.
- uaa.url: The OAuth token endpoint URL.
- uaa.clientid: The OAuth client ID.
- uaa.clientsecret: The OAuth client secret.
Verify permissions
To verify the account has the required permissions:
- Sign in to the SAP BTP Cockpit.
- Navigate to your subaccount.
- Go to Security > Users.
- Verify that the user has the Audit Log Viewer role collection assigned.
- If not, assign the role collection by clicking the user and adding the role.
Test API access
Test your credentials before proceeding with the integration:
```bash
# Replace with your actual credentials from the service key
UAA_URL="https://your-subdomain.authentication.region.hana.ondemand.com"
CLIENT_ID="your-client-id"
CLIENT_SECRET="your-client-secret"
API_URL="https://your-api-url.auditlog.cfapps.region.hana.ondemand.com"

# Get OAuth token
TOKEN=$(curl -s -X POST "${UAA_URL}/oauth/token" \
  -u "${CLIENT_ID}:${CLIENT_SECRET}" \
  -d "grant_type=client_credentials" \
  -H "Content-Type: application/x-www-form-urlencoded" | jq -r '.access_token')

# Test API access (note the escaped \$top so the shell does not expand it)
curl -v -H "Authorization: Bearer ${TOKEN}" \
  "${API_URL}/auditlog/v2/auditlogrecords?\$top=1"
```
Create service account for Cloud Run function
The Cloud Run function needs a service account with permissions to write to the GCS bucket and to be invoked by Pub/Sub.
Create service account
- In the GCP Console, go to IAM & Admin > Service Accounts.
- Click Create Service Account.
- Provide the following configuration details:
- Service account name: Enter sap-btp-collector-sa.
- Service account description: Enter Service account for Cloud Run function to collect SAP BTP logs.
- Click Create and Continue.
- In the Grant this service account access to project section, add the following roles:
- Click Select a role.
- Search for and select Storage Object Admin.
- Click + Add another role.
- Search for and select Cloud Run Invoker.
- Click + Add another role.
- Search for and select Cloud Functions Invoker.
- Click Continue.
- Click Done.
These roles are required for:
- Storage Object Admin: Write logs to GCS bucket and manage state files
- Cloud Run Invoker: Allow Pub/Sub to invoke the function
- Cloud Functions Invoker: Allow function invocation
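If you prefer the command line, you can create the service account and grant the same project-level roles with gcloud. A minimal sketch, assuming your-project is your project ID:

```bash
# Create the collector service account
gcloud iam service-accounts create sap-btp-collector-sa \
  --project=your-project \
  --display-name="SAP BTP log collector"

# Grant the project-level roles listed above
SA="sap-btp-collector-sa@your-project.iam.gserviceaccount.com"
for ROLE in roles/storage.objectAdmin roles/run.invoker roles/cloudfunctions.invoker; do
  gcloud projects add-iam-policy-binding your-project \
    --member="serviceAccount:${SA}" \
    --role="${ROLE}"
done
```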
Grant IAM permissions on GCS bucket
Grant the service account write permissions on the GCS bucket:
- Go to Cloud Storage > Buckets.
- Click your bucket name (for example, sap-btp-logs).
- Go to the Permissions tab.
- Click Grant access.
- Provide the following configuration details:
- Add principals: Enter the service account email (for example, sap-btp-collector-sa@your-project.iam.gserviceaccount.com).
- Assign roles: Select Storage Object Admin.
- Click Save.
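The equivalent bucket-level grant with the gcloud CLI, assuming the example bucket and service account names:

```bash
# Grant the collector service account write access on the bucket
gcloud storage buckets add-iam-policy-binding gs://sap-btp-logs \
  --member="serviceAccount:sap-btp-collector-sa@your-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```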
Create Pub/Sub topic
Create a Pub/Sub topic that Cloud Scheduler will publish to and the Cloud Run function will subscribe to:
- In the GCP Console, go to Pub/Sub > Topics.
- Click Create topic.
- Provide the following configuration details:
- Topic ID: Enter sap-btp-trigger.
- Leave other settings as default.
- Click Create.
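Or create the topic with a single gcloud command (topic name as used throughout this guide):

```bash
# Create the trigger topic that Cloud Scheduler publishes to
gcloud pubsub topics create sap-btp-trigger --project=your-project
```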
Create Cloud Run function to collect logs
The Cloud Run function will be triggered by Pub/Sub messages from Cloud Scheduler to fetch audit logs from SAP BTP Audit Log Retrieval API and write them to GCS.
- In the GCP Console, go to Cloud Run.
- Click Create service.
- Select Function (use an inline editor to create a function).
In the Configure section, provide the following configuration details:
| Setting | Value |
|---|---|
| Service name | sap-btp-collector |
| Region | Select region matching your GCS bucket (for example, us-central1) |
| Runtime | Select Python 3.12 or later |

In the Trigger (optional) section:
- Click + Add trigger.
- Select Cloud Pub/Sub.
- In Select a Cloud Pub/Sub topic, choose the topic sap-btp-trigger.
- Click Save.
In the Authentication section:
- Select Require authentication.
- Check Identity and Access Management (IAM).
Expand Containers, Networking, Security.
Go to the Security tab:
- Service account: Select the service account sap-btp-collector-sa.
Go to the Containers tab:
- Click Variables & Secrets.
Click + Add variable for each environment variable:
| Variable name | Example value | Description |
|---|---|---|
| GCS_BUCKET | sap-btp-logs | GCS bucket name |
| GCS_PREFIX | sap-btp | Prefix for log files |
| STATE_KEY | sap-btp/state.json | State file path |
| SAP_API_URL | https://your-api-url.auditlog.cfapps.region.hana.ondemand.com | Audit Log Retrieval API URL |
| SAP_UAA_URL | https://your-subdomain.authentication.region.hana.ondemand.com | OAuth token endpoint URL |
| SAP_CLIENT_ID | your-client-id | OAuth client ID |
| SAP_CLIENT_SECRET | your-client-secret | OAuth client secret |
| MAX_RECORDS | 10000 | Max records per run |
| PAGE_SIZE | 500 | Records per page |
| LOOKBACK_HOURS | 24 | Initial lookback period |
In the Variables & Secrets tab, navigate to Requests:
- Request timeout: Enter 600 seconds (10 minutes).
Go to the Settings tab in Containers:
- In the Resources section:
- Memory: Select 512 MiB or higher.
- CPU: Select 1.
- Click Done.
Go to the Execution environment section:
- Select Default (recommended).
In the Revision scaling section:
- Minimum number of instances: Enter 0.
- Maximum number of instances: Enter 100 (or adjust based on expected load).
Click Create.
Wait for the service to be created (1-2 minutes).
After the service is created, the inline code editor will open automatically.
Add function code
- Enter main in Function entry point.
In the inline code editor, create two files:
First file: main.py:
```python
import base64
import json
import os
import time
from datetime import datetime, timezone, timedelta
from urllib.parse import quote

import functions_framework
import urllib3
from google.cloud import storage

# Initialize HTTP client with timeouts
http = urllib3.PoolManager(
    timeout=urllib3.Timeout(connect=5.0, read=30.0),
    retries=False,
)

# Initialize Storage client
storage_client = storage.Client()

# Environment variables
GCS_BUCKET = os.environ.get('GCS_BUCKET')
GCS_PREFIX = os.environ.get('GCS_PREFIX', 'sap-btp')
STATE_KEY = os.environ.get('STATE_KEY', 'sap-btp/state.json')
API_URL = os.environ.get('SAP_API_URL')
UAA_URL = os.environ.get('SAP_UAA_URL')
CLIENT_ID = os.environ.get('SAP_CLIENT_ID')
CLIENT_SECRET = os.environ.get('SAP_CLIENT_SECRET')
MAX_RECORDS = int(os.environ.get('MAX_RECORDS', '10000'))
PAGE_SIZE = int(os.environ.get('PAGE_SIZE', '500'))
LOOKBACK_HOURS = int(os.environ.get('LOOKBACK_HOURS', '24'))


def parse_datetime(value: str) -> datetime:
    """Parse an ISO datetime string to a datetime object."""
    if value.endswith("Z"):
        value = value[:-1] + "+00:00"
    return datetime.fromisoformat(value)


def get_oauth_token():
    """Get an OAuth 2.0 access token using the client credentials flow."""
    token_url = f"{UAA_URL.rstrip('/')}/oauth/token"
    credentials = f"{CLIENT_ID}:{CLIENT_SECRET}"
    encoded_credentials = base64.b64encode(credentials.encode('utf-8')).decode('utf-8')
    headers = {
        'Authorization': f'Basic {encoded_credentials}',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Accept': 'application/json'
    }
    body = 'grant_type=client_credentials'
    backoff = 1.0
    max_retries = 3

    for attempt in range(max_retries):
        response = http.request('POST', token_url, body=body, headers=headers)
        if response.status == 429:
            retry_after = int(response.headers.get('Retry-After', str(int(backoff))))
            print(f"Rate limited (429) on token request. Retrying after {retry_after}s...")
            time.sleep(retry_after)
            backoff = min(backoff * 2, 30.0)
            continue
        if response.status != 200:
            raise RuntimeError(
                f"Failed to get access token: {response.status} - {response.data.decode('utf-8')}"
            )
        data = json.loads(response.data.decode('utf-8'))
        return data['access_token']

    raise RuntimeError(f"Failed to get token after {max_retries} retries due to rate limiting")


@functions_framework.cloud_event
def main(cloud_event):
    """Cloud Run function triggered by Pub/Sub to fetch SAP BTP audit logs and write to GCS.

    Args:
        cloud_event: CloudEvent object containing the Pub/Sub message.
    """
    if not all([GCS_BUCKET, API_URL, UAA_URL, CLIENT_ID, CLIENT_SECRET]):
        print('Error: Missing required environment variables')
        return

    try:
        bucket = storage_client.bucket(GCS_BUCKET)

        # Load state
        state = load_state(bucket, STATE_KEY)

        # Determine time window
        now = datetime.now(timezone.utc)
        last_time = None
        if isinstance(state, dict) and state.get("last_event_time"):
            try:
                last_time = parse_datetime(state["last_event_time"])
                # Overlap the window by 2 minutes to avoid missing late-arriving records
                last_time = last_time - timedelta(minutes=2)
            except Exception as e:
                print(f"Warning: Could not parse last_event_time: {e}")
        if last_time is None:
            last_time = now - timedelta(hours=LOOKBACK_HOURS)

        print(f"Fetching audit logs from {last_time.isoformat()} to {now.isoformat()}")

        # Get OAuth token
        token = get_oauth_token()

        # Fetch audit logs
        records, newest_event_time = fetch_audit_logs(
            token=token,
            start_time=last_time,
            end_time=now,
            page_size=PAGE_SIZE,
            max_records=MAX_RECORDS,
        )

        if not records:
            print("No new audit log records found.")
            save_state(bucket, STATE_KEY, now.isoformat())
            return

        # Write to GCS as NDJSON
        timestamp = now.strftime('%Y%m%d_%H%M%S')
        object_key = f"{GCS_PREFIX}/audit_{timestamp}.ndjson"
        blob = bucket.blob(object_key)
        ndjson = '\n'.join([json.dumps(record, ensure_ascii=False) for record in records]) + '\n'
        blob.upload_from_string(ndjson, content_type='application/x-ndjson')
        print(f"Wrote {len(records)} records to gs://{GCS_BUCKET}/{object_key}")

        # Update state with the newest event time
        if newest_event_time:
            save_state(bucket, STATE_KEY, newest_event_time)
        else:
            save_state(bucket, STATE_KEY, now.isoformat())

        print(f"Successfully processed {len(records)} records")

    except Exception as e:
        print(f'Error processing audit logs: {str(e)}')
        raise


def load_state(bucket, key):
    """Load state from GCS."""
    try:
        blob = bucket.blob(key)
        if blob.exists():
            state_data = blob.download_as_text()
            return json.loads(state_data)
    except Exception as e:
        print(f"Warning: Could not load state: {e}")
    return {}


def save_state(bucket, key, last_event_time_iso: str):
    """Save the last event timestamp to the GCS state file."""
    try:
        state = {'last_event_time': last_event_time_iso}
        blob = bucket.blob(key)
        blob.upload_from_string(
            json.dumps(state, indent=2),
            content_type='application/json'
        )
        print(f"Saved state: last_event_time={last_event_time_iso}")
    except Exception as e:
        print(f"Warning: Could not save state: {e}")


def fetch_audit_logs(token: str, start_time: datetime, end_time: datetime,
                     page_size: int, max_records: int):
    """Fetch audit logs from the SAP BTP Audit Log Retrieval API with pagination and rate limiting.

    Args:
        token: OAuth 2.0 access token.
        start_time: Start time for the log query.
        end_time: End time for the log query.
        page_size: Number of records per page.
        max_records: Maximum total records to fetch.

    Returns:
        Tuple of (records list, newest_event_time ISO string).
    """
    base_url = API_URL.rstrip('/')
    endpoint = f"{base_url}/auditlog/v2/auditlogrecords"
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json',
        'User-Agent': 'GoogleSecOps-SAPBTPCollector/1.0'
    }

    records = []
    newest_time = None
    page_num = 0
    backoff = 1.0

    start_iso = start_time.strftime('%Y-%m-%dT%H:%M:%S.000Z')
    end_iso = end_time.strftime('%Y-%m-%dT%H:%M:%S.000Z')

    # SAP BTP uses OData-style pagination with $skip and $top
    skip = 0

    while True:
        page_num += 1
        if len(records) >= max_records:
            print(f"Reached max_records limit ({max_records})")
            break

        current_top = min(page_size, max_records - len(records))
        time_filter = f"time ge '{start_iso}' and time le '{end_iso}'"
        url = f"{endpoint}?$filter={quote(time_filter)}&$top={current_top}&$skip={skip}&$orderby=time asc"

        try:
            response = http.request('GET', url, headers=headers)

            # Handle rate limiting with exponential backoff
            if response.status == 429:
                retry_after = int(response.headers.get('Retry-After', str(int(backoff))))
                print(f"Rate limited (429). Retrying after {retry_after}s...")
                time.sleep(retry_after)
                backoff = min(backoff * 2, 30.0)
                continue

            backoff = 1.0

            if response.status == 401:
                print("Token expired, refreshing...")
                token = get_oauth_token()
                headers['Authorization'] = f'Bearer {token}'
                continue

            if response.status != 200:
                print(f"HTTP Error: {response.status}")
                response_text = response.data.decode('utf-8')
                print(f"Response body: {response_text}")
                return [], None

            data = json.loads(response.data.decode('utf-8'))
            page_results = data.get('value', data.get('results', []))

            if not page_results:
                print("No more results (empty page)")
                break

            print(f"Page {page_num}: Retrieved {len(page_results)} audit records")
            records.extend(page_results)

            # Track the newest event time
            for record in page_results:
                try:
                    event_time = record.get('time') or record.get('timestamp')
                    if event_time:
                        if newest_time is None or parse_datetime(event_time) > parse_datetime(newest_time):
                            newest_time = event_time
                except Exception as e:
                    print(f"Warning: Could not parse event time: {e}")

            # Check for more results
            if len(page_results) < page_size:
                print(f"Reached last page (size={len(page_results)} < top={page_size})")
                break

            skip += len(page_results)

        except Exception as e:
            print(f"Error fetching audit logs: {e}")
            return [], None

    print(f"Retrieved {len(records)} total audit records from {page_num} pages")
    return records, newest_time
```

Second file: requirements.txt:
```none
functions-framework==3.*
google-cloud-storage==2.*
urllib3>=2.0.0
```
Click Deploy to save and deploy the function.
Wait for deployment to complete (2-3 minutes).
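As an alternative to the inline editor, you can deploy the same code from a local directory containing main.py and requirements.txt. The following is a minimal sketch using gcloud functions deploy (2nd gen), assuming the example names and placeholder credential values used in this guide; in production, consider storing SAP_CLIENT_SECRET in Secret Manager rather than a plain environment variable:

```bash
# Deploy as a 2nd gen function triggered by the Pub/Sub topic
gcloud functions deploy sap-btp-collector \
  --gen2 \
  --runtime=python312 \
  --region=us-central1 \
  --source=. \
  --entry-point=main \
  --trigger-topic=sap-btp-trigger \
  --service-account=sap-btp-collector-sa@your-project.iam.gserviceaccount.com \
  --memory=512Mi \
  --timeout=600s \
  --set-env-vars=GCS_BUCKET=sap-btp-logs,GCS_PREFIX=sap-btp,STATE_KEY=sap-btp/state.json,SAP_API_URL=https://your-api-url.auditlog.cfapps.region.hana.ondemand.com,SAP_UAA_URL=https://your-subdomain.authentication.region.hana.ondemand.com,SAP_CLIENT_ID=your-client-id,SAP_CLIENT_SECRET=your-client-secret
```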
Create Cloud Scheduler job
Cloud Scheduler will publish messages to the Pub/Sub topic at regular intervals, triggering the Cloud Run function.
- In the GCP Console, go to Cloud Scheduler.
- Click Create Job.
Provide the following configuration details:
| Setting | Value |
|---|---|
| Name | sap-btp-collector-hourly |
| Region | Select same region as Cloud Run function |
| Frequency | 0 * * * * (every hour, on the hour) |
| Timezone | Select timezone (UTC recommended) |
| Target type | Pub/Sub |
| Topic | Select the topic sap-btp-trigger |
| Message body | {} (empty JSON object) |

Click Create.
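The same job can be created with gcloud; a sketch assuming the example names and an hourly schedule:

```bash
# Create the hourly trigger job (region and names are the guide's examples)
gcloud scheduler jobs create pubsub sap-btp-collector-hourly \
  --location=us-central1 \
  --schedule="0 * * * *" \
  --time-zone="Etc/UTC" \
  --topic=sap-btp-trigger \
  --message-body="{}"
```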
Schedule frequency options
Choose frequency based on log volume and latency requirements:
| Frequency | Cron Expression | Use Case |
|---|---|---|
| Every 5 minutes | */5 * * * * | High-volume, low-latency |
| Every 15 minutes | */15 * * * * | Medium volume |
| Every hour | 0 * * * * | Standard (recommended) |
| Every 6 hours | 0 */6 * * * | Low volume, batch processing |
| Daily | 0 0 * * * | Historical data collection |
Test the integration
- In the Cloud Scheduler console, find your job (sap-btp-collector-hourly).
- Click Force run to trigger manually.
- Wait a few seconds and go to Cloud Run > Services > sap-btp-collector > Logs.
Verify the function executed successfully. Look for:
```none
Fetching audit logs from YYYY-MM-DDTHH:MM:SS+00:00 to YYYY-MM-DDTHH:MM:SS+00:00
Page 1: Retrieved X audit records
Wrote X records to gs://sap-btp-logs/sap-btp/audit_YYYYMMDD_HHMMSS.ndjson
Successfully processed X records
```

Check the GCS bucket (sap-btp-logs) to confirm audit logs were written.
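You can also trigger the job and check the output from the command line; a sketch assuming the example names used in this guide:

```bash
# Trigger the scheduler job manually
gcloud scheduler jobs run sap-btp-collector-hourly --location=us-central1

# After a short wait, list the newly written NDJSON objects
gcloud storage ls gs://sap-btp-logs/sap-btp/
```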
If you see errors in the logs:
- HTTP 401: Check the OAuth credentials in the environment variables; an expired token is refreshed automatically by the function.
- HTTP 403: Verify the user has the Audit Log Viewer role collection assigned in SAP BTP.
- HTTP 429: Rate limiting; the function automatically retries with backoff.
- Failed to get access token: Verify SAP_UAA_URL, SAP_CLIENT_ID, and SAP_CLIENT_SECRET are correct.
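To inspect the function logs from the command line, the following sketch queries Cloud Logging for the collector service (service name as configured earlier):

```bash
# Read recent logs for the collector service
gcloud logging read \
  'resource.type="cloud_run_revision" AND resource.labels.service_name="sap-btp-collector"' \
  --limit=50 \
  --format="value(textPayload)"
```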
Configure a feed in Google SecOps to ingest SAP BTP logs
- Go to SIEM Settings > Feeds.
- Click Add New Feed.
- Click Configure a single feed.
- In the Feed name field, enter a name for the feed (for example, SAP BTP Logs).
- Select Google Cloud Storage V2 as the Source type.
- Select SAP BTP as the Log type.
Click Get Service Account. A unique service account email will be displayed, for example:
chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com

Copy this email address. You will use it in the next step.
Click Next.
Specify values for the following input parameters:
Storage bucket URL: Enter the GCS bucket URI with the prefix path:
gs://sap-btp-logs/sap-btp/

- Replace:
  - sap-btp-logs: Your GCS bucket name.
  - sap-btp/: Prefix/folder path where logs are stored.
Source deletion option: Select the deletion option according to your preference:
- Never: Never deletes any files after transfers (recommended for testing).
- Delete transferred files: Deletes files after successful transfer.
- Delete transferred files and empty directories: Deletes files and empty directories after successful transfer.
Maximum File Age: Include files modified in the last number of days (default is 180 days).
Asset namespace: The asset namespace.
Ingestion labels: The label to be applied to the events from this feed.
Click Next.
Review your new feed configuration in the Finalize screen, and then click Submit.
Grant IAM permissions to the Google SecOps service account
The Google SecOps service account needs Storage Object Viewer role on your GCS bucket.
- Go to Cloud Storage > Buckets.
- Click your bucket name (sap-btp-logs).
- Go to the Permissions tab.
- Click Grant access.
- Provide the following configuration details:
- Add principals: Paste the Google SecOps service account email.
- Assign roles: Select Storage Object Viewer.
Click Save.
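The same grant with the gcloud CLI, using the example feed service account email shown earlier:

```bash
# Allow the Google SecOps feed service account to read objects
gcloud storage buckets add-iam-policy-binding gs://sap-btp-logs \
  --member="serviceAccount:chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```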
UDM mapping table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| time | metadata.event_timestamp | Converted using ISO8601 format |
| | metadata.event_type | Set to "GENERIC_EVENT" initially, then "USER_RESOURCE_ACCESS" if user_present and target_resource_present, "STATUS_UPDATE" if principal_present, "USER_UNCATEGORIZED" if user_present |
| json_message_uuid | metadata.product_log_id | Value copied directly |
| host | principal.asset.hostname | Value copied directly |
| json_message.ip, json_message.data.ip | principal.asset.ip | Value from json_message.ip if valid IP, or from json_message.data.ip if key=="ip" and valid IP |
| host | principal.hostname | Value copied directly |
| json_message.ip, json_message.data.ip | principal.ip | Value from json_message.ip if valid IP, or from json_message.data.ip if key=="ip" and valid IP |
| role | principal.user.attribute.roles | Set to "ADMINISTRATOR" if role contains "admin" |
| json_message.attributes.emails.value, json_message.attributes.userName | principal.user.email_addresses | Merged from emails.value if matches email regex, or from userName if matches email regex |
| user | principal.user.user_display_name | Value copied directly |
| json_message.data_subject.id.userid, json_message.attributes.userName | principal.user.userid | Value from json_message.data_subject.id.userid if not empty, else from json_message.attributes.userName if not email |
| category | security_result.category_details | Value copied directly |
| json_message.uuid, json_message.id, json_message.customDetails.success, json_message.customDetails.type, json_message.customDetails.status, json_message.customDetails.attributes, json_message.data.json_message, json_message.data.msgNo, json_message.data.msgId, json_message.data.version, json_message.object, json_message.attributes, app_or_service_id, als_service_id, format_version, org_id, space_id, tenant | security_result.detection_fields | Merged as key-value pairs from various JSON paths |
| json_message.data.level | security_result.severity | Set to "ERROR" if level in ["ERROR", "CRITICAL"], "INFORMATIONAL" if "INFO", "MEDIUM" if "WARN", else "UNKNOWN_SEVERITY" |
| json_message.data.level | security_result.severity_details | Value copied if level != "INFO" |
| json_message.object.id.clusterID | target.resource.product_object_id | Value copied directly |
| json_message.object.id.clusterID | target.resource.resource_type | Set to "CLUSTER" if clusterID present |
| | metadata.product_name | Set to "SAP_BTP" |
| | metadata.vendor_name | Set to "SAP_BTP" |
Need more help? Get answers from Community members and Google SecOps professionals.