This document explains how to ingest logs from your SAP systems managed by SAP as part of the SAP RISE offering and forward them to Google SecOps.
Select your ingestion path
In an SAP RISE environment, the ingestion path depends on the log layer. To achieve comprehensive log coverage across your infrastructure and application layers, follow both paths described in this document:
| Log layer | Log type | Ingestion path |
|---|---|---|
| Infrastructure logs | ICM, Gateway, Web Dispatcher, and HANA Audit logs. | SAP LogServ and Google SecOps feeds |
| Application logs | Security Audit Logs and Change Documents. | Application Telemetry Collector, Bindplane agent, and Bindplane server |
Before you begin
Before you start the ingestion process, ensure that you have completed the following steps:
- Planning: Review the ingestion paths and technical requirements in the Plan for log ingestion guide.
- Foundation: Provision your Google SecOps instance and complete the onboarding process in Google Cloud. Review with your SAP ECS representative that SAP LogServ is provisioned for your environment.
- Bindplane setup: Install and configure the centralized Bindplane server as described in the Prepare your environment for log ingestion guide.
- Network access: Ensure your network architecture allows the host running the Bindplane server to receive incoming traffic on port `4317` (gRPC) from the Application Telemetry Collector.
- SAP preparation: Ensure your SAP system is prepared (service user, authorizations, and SNC) as described in the Prepare your environment for log ingestion guide.
- Google Cloud resources: Ensure that you have configured the required Google Cloud resources, such as APIs, storage buckets, and IAM roles, as described in the Prepare your environment for log ingestion guide.
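Before moving on, you can sanity-check the network prerequisite with a short script. The following is a minimal sketch using only the Python standard library; the hostname is a placeholder for your own Bindplane server:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: verify the Bindplane gRPC port before deploying the collector.
# "bindplane.internal.example" is a placeholder for your Bindplane server host.
if port_reachable("bindplane.internal.example", 4317):
    print("Port 4317 is reachable")
else:
    print("Cannot reach port 4317; check firewall rules and routing")
```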
Ingest infrastructure logs
Follow this path to ingest logs that are automatically extracted and delivered to your storage bucket by using SAP LogServ.
Infrastructure log ingestion in an SAP RISE environment involves the following components:
- SAP LogServ: An SAP Enterprise Cloud Services (ECS) service that extracts infrastructure logs from your RISE landscape and writes them to a storage bucket.
- Storage bucket: A bucket (Cloud Storage, Amazon S3, or Azure Blob Storage) in the cloud provider where your SAP systems are hosted.
- Notification mechanism: An event-driven service (Pub/Sub, Amazon SQS, or Azure Storage Queue) that notifies Google SecOps as soon as a new log file is written to the bucket.
- Google SecOps feed: A configured ingestion point that retrieves the logs from the storage bucket based on the notifications. For more information, see Feed management overview.
The specific components used depend on the cloud provider where your SAP RISE landscape is hosted:
| Cloud provider | Log storage | Notification mechanism |
|---|---|---|
| Google Cloud | Cloud Storage | Pub/Sub |
| AWS | Amazon S3 | Amazon SQS |
| Azure | Azure Blob Storage | Azure Storage Queue |
Before configuring the ingestion feeds, coordinate with your SAP ECS representative to enable LogServ and provision the destination storage bucket and notification mechanism as described in the Prepare your environment for log ingestion guide. This information is required to set up the ingestion feeds.
Set up Google SecOps feeds
To ingest logs from LogServ, you must create a separate feed in Google SecOps for each log type. For detailed step-by-step instructions on creating a feed, see the Google SecOps documentation.
Configure a feed for Cloud Storage
Use the following parameters to configure a feed for logs delivered to a Cloud Storage bucket.
Standard file paths for LogServ
For SAP RISE environments on Google Cloud, LogServ delivers logs to the following standard paths within your storage bucket:
- SAP Web Dispatcher Logs: `gs://BUCKET_NAME/logserv/webdispatcher/`
- SAP HANA Audit Logs: `gs://BUCKET_NAME/logserv/hana/hanaaudit/`
- SAP Gateway Logs: `gs://BUCKET_NAME/logserv/abap/gateway/`
- SAP ICM Logs: `gs://BUCKET_NAME/logserv/abap/icm/`
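If you script your feed setup, the standard paths above can be captured in a small lookup helper. This is an illustrative sketch; the dictionary keys are labels chosen here for readability, not necessarily the official Google SecOps log type identifiers (apart from `SAP_ICM`, which appears in the feed parameters):

```python
# Standard LogServ delivery paths per log source (from the list above).
LOGSERV_PATHS = {
    "SAP_WEB_DISPATCHER": "logserv/webdispatcher/",
    "SAP_HANA_AUDIT": "logserv/hana/hanaaudit/",
    "SAP_GATEWAY": "logserv/abap/gateway/",
    "SAP_ICM": "logserv/abap/icm/",
}

def feed_uri(bucket: str, log_source: str) -> str:
    """Build the storage bucket URI for a Google SecOps feed."""
    return f"gs://{bucket}/{LOGSERV_PATHS[log_source]}"

print(feed_uri("sap-logserv-bucket", "SAP_ICM"))
# gs://sap-logserv-bucket/logserv/abap/icm/
```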
| Parameter | Description |
|---|---|
| Source type | Select Cloud Storage Event Driven. |
| Log type | Select the log type that corresponds to the logs you are ingesting, for example, SAP_ICM. For more information about the supported log types, see the Review supported log sources guide. |
| Storage bucket URI | Enter the storage bucket URI including the standard path for the log source, for example, `gs://sap-logserv-bucket/logserv/webdispatcher/`. |
| Pub/Sub subscription name | Enter the Pub/Sub subscription name provided by SAP. |
Configure a feed for Amazon S3
Use the following parameters to configure a feed for logs delivered to an Amazon S3 bucket.
| Parameter | Description |
|---|---|
| Source type | Select Amazon SQS V2. |
| Log type | Select the log type that corresponds to the logs you are ingesting, for example, SAP_ICM. For more information about the supported log types, see the Review supported log sources guide. |
| S3 URI | Enter the S3 path provided by SAP. |
| Queue name | Enter the SQS queue URL or name provided by SAP. |
Configure a feed for Azure Blob Storage
Use the following parameters to configure a feed for logs delivered to Azure Blob Storage.
| Parameter | Description |
|---|---|
| Source type | Select Microsoft Azure Blob Storage. |
| Log type | Select the log type that corresponds to the logs you are ingesting, for example, SAP_ICM. For more information about the supported log types, see the Review supported log sources guide. |
| Azure URI | Enter the Azure Blob Storage URI provided by SAP. |
| Connection string | Enter the token or connection string provided by SAP. |
Confirm your log ingestion
To verify that your infrastructure logs are being correctly ingested from LogServ, do the following:
- In Google SecOps, go to SIEM Settings > Feeds.
- Confirm that the feeds you created are in Active status.
- Check the Last Run Status to ensure that log files are being successfully retrieved from your storage bucket.
- Go to Search and enter a UDM query for your log type, for example, `metadata.log_type = "SAP_ICM"`. For more information, see Search and filter SAP logs.
- Confirm that logs are appearing and are correctly parsed into UDM fields.
- Open the Event Viewer to inspect the raw log and the normalized UDM fields. For example, verify that the Principal Host (`principal.host.hostname`) and relevant network or database details are correctly populated.
Ingest application logs
Follow this path to ingest logs such as Security Audit Logs and Change Documents directly from the SAP application layer using the RFC protocol.
Application log ingestion involves the following components:
- Application Telemetry Collector: A containerized Java application that connects to your SAP application servers using the RFC protocol to extract security logs.
- Bindplane agent: A lightweight agent installed on the collector host that receives OTLP data and forwards the data to the Google SecOps gateway.
- Bindplane server: Receives the logs from the Bindplane agents and forwards them to Google SecOps.
Prepare the collector host and dependencies
Provision the infrastructure to run the collector and upload the necessary SAP libraries.
- Infrastructure instance: Provision a host to run the collector container.
- Host options: You can use Compute Engine, on-premises servers, or container orchestration platforms such as GKE.
- Specifications: Use a standard Linux distribution, such as Debian 11 or 12, with Docker Engine 20.10 or later.
- Network: The host requires a network path to your SAP RISE instance, such as VPC peering or Cloud Interconnect. Open egress to SAP Gateway ports `32INSTANCE_NUMBER` or `33INSTANCE_NUMBER` (where `INSTANCE_NUMBER` is your instance number).
- Continuity: Ensure the host is "always-on" to prevent log gaps. Don't use serverless platforms like Cloud Run.
- Identity (Within Google Cloud): Attach the service account created in the Configure Google Cloud resources section to the host. In the console, set Access Scopes to Allow full access to all Cloud APIs.
- Identity (Outside Google Cloud): Use service account keys or Workload Identity Federation.
- SAP Java Connector (JCo) and SNC libraries: Upload the SAP connector and cryptographic files to the `jco/` folder in your Cloud Storage bucket. You create the `jco/` folder as described in the Configure Google Cloud resources guide.
  - SAP JCo: Download `sapjco3.jar` and `libsapjco3.so` from the SAP Support Portal and upload them to the `jco/` folder. Ensure you download the version matching your host architecture (ARM or x86).
  - SNC libraries: If you use SNC, download `libslcryptokernel.so` and `libsapcrypto.so` from the SAP portal, and then upload them to the `jco/` folder. For instructions on how to download the `.SAR` file and extract the files by using the `sapcar` utility, see the SAP documentation.
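The SAP Gateway egress ports follow directly from the instance number, which can be worth scripting into firewall automation. A quick illustration (the helper function is ours, not part of any SAP tooling):

```python
def gateway_ports(instance_number: str) -> tuple[int, int]:
    """Return the SAP Gateway ports 32NN and 33NN for an instance number."""
    nn = f"{int(instance_number):02d}"  # normalize to two digits, e.g. "7" -> "07"
    return int(f"32{nn}"), int(f"33{nn}")

print(gateway_ports("00"))  # (3200, 3300)
print(gateway_ports("7"))   # (3207, 3307)
```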
Configure the collector
Create a `collector_config.json` file to define your SAP connections and the logs you want to ingest. Once you have defined the configuration, upload the `collector_config.json` file to the `config/` folder in the Cloud Storage bucket you created in the Configure Google Cloud resources section.
Configuration example and field descriptions
Use the following JSON example to create your collector configuration file and refer to the field descriptions for detailed parameter information.
```json
{
  "systems": [
    {
      "system_id": "PRD-HANA",
      "connection": {
        "host": "sap-prd.internal.net",
        "client": "100",
        "system_number": "00",
        "language": "en"
      },
      "auth": {
        "basic": {
          "username_secret": "projects/my-project-123/secrets/sap-collector-user/versions/latest",
          "password_secret": "projects/my-project-123/secrets/sap-collector-pass/versions/latest"
        }
      },
      "log_sources": [
        {
          "log_type": "SAP_SECURITY_AUDIT",
          "interval": "60s"
        },
        {
          "log_type": "SAP_CHANGE_DOCUMENT",
          "interval": "300s",
          "change_document_object_classes": ["PFCG", "IDENTITY"]
        }
      ],
      "initial_lookback_window": "86400s",
      "sap_timezone": "UTC"
    },
    {
      "system_id": "DEV-HANA-SNC",
      "connection": {
        "host": "sap-dev.internal.net",
        "client": "200",
        "system_number": "01",
        "language": "en"
      },
      "auth": {
        "x509": {
          "snc_name": "p:CN=SAP-Collector,O=MyCompany,C=US",
          "snc_partner_name": "p:CN=SAP-Server-DEV,O=MyCompany,C=US",
          "snc_qop": "3",
          "x509_cert_secret": "projects/my-project-123/secrets/sap-x509-cert/versions/latest"
        }
      },
      "log_sources": [
        {
          "log_type": "SAP_SECURITY_AUDIT",
          "interval": "120s"
        }
      ],
      "initial_lookback_window": "3600s",
      "sap_timezone": "UTC"
    }
  ],
  "bindplane_host": "127.0.0.1",
  "bindplane_port": 4317,
  "heartbeat_enabled": true,
  "heartbeat_interval": "30s",
  "heartbeat_metric_name": "custom/sap/collector_heartbeat",
  "jco_pse_secret": "projects/my-project-123/secrets/sap-collector-pse/versions/latest",
  "jco_cred_secret": "projects/my-project-123/secrets/sap-collector-cred/versions/latest"
}
```
The following table describes the fields in the collector_config.json file.
| Section | Field | Description |
|---|---|---|
| System Connection (`systems`): list of systems to monitor. | `system_id` | A unique label for the system, for example, `PRD`. In the example, replace `PRD-HANA` with your specific label. |
| | `sap_timezone` | The timezone of the SAP application server, for example, `UTC`. This is critical for accurate log querying. |
| | `initial_lookback_window` | How far back the collector looks for logs on its first run, for example, `"86400s"` for 24 hours. |
| | `connection` | Contains the RFC connection parameters: `host`, `client`, `system_number`, and `language`. |
| Authentication (`auth`): choose one of the authentication methods. | `basic` | Username and password authentication. Provide the full Secret Manager paths for `username_secret` and `password_secret`. These secrets must contain the credentials of the SAP service user created in the preparation guide. For more information, see Basic authentication. |
| | `x509` (SNC) | Secure Network Communication authentication. Provide `snc_name`, `snc_partner_name`, `snc_qop`, and `x509_cert_secret`, as shown in the example. For setup instructions, see Secure Network Communication (SNC). |
| Log Sources (`log_sources`): defines logs and frequency. | `log_type` | The type of log to ingest, for example, `SAP_SECURITY_AUDIT` or `SAP_CHANGE_DOCUMENT`. |
| | `interval` | The polling frequency for this specific log source, for example, `"30s"`. |
| | `change_document_object_classes` | (Optional) Used only for `SAP_CHANGE_DOCUMENT`. A list of specific SAP object classes to track, for example, `["MATERIAL", "USER"]`. |
| Global Collector Settings: applied to the entire instance. | `bindplane_host` | The hostname or IP address of your Bindplane agent. We recommend deploying the Bindplane agent on the same host as the Application Telemetry Collector; in that configuration, set this field to `127.0.0.1`. |
| | `bindplane_port` | The port that the `bindplane_host` endpoint listens on. Default is `4317`. |
| | `heartbeat_enabled` | A boolean (`true` or `false`) that toggles a "stay-alive" metric. |
| | `heartbeat_interval` | How often the heartbeat metric is sent to Monitoring through Bindplane. Must be a number followed by `"s"`, for example, `"30s"`. Defaults to `60s`. |
| | `heartbeat_metric_name` | (Optional) The name that appears in your monitoring dashboard for this heartbeat, for example, `sap_collector_status`. Defaults to `sap_appl_telemetry_collector_heartbeat`. |
| | `jco_pse_secret` | The full Secret Manager path to the secret containing the PSE file (for example, `sapcrypto.pse`). For more information, see Store SNC artifacts in Secret Manager. |
| | `jco_cred_secret` | The full Secret Manager path to the secret containing the `cred_v2` file. For more information, see Store SNC artifacts in Secret Manager. |
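Before uploading `collector_config.json`, you can catch common mistakes, such as missing connection keys or malformed duration strings, with a quick local check. This is a hedged sketch of the constraints described above, not an official validator:

```python
import json
import re

# Duration fields must be a number of seconds followed by "s", e.g. "86400s".
DURATION_RE = re.compile(r"^\d+s$")

def validate_collector_config(raw: str) -> list[str]:
    """Return a list of problems found in a collector_config.json document."""
    cfg = json.loads(raw)
    problems = []
    for system in cfg.get("systems", []):
        sid = system.get("system_id", "<missing system_id>")
        for key in ("host", "client", "system_number", "language"):
            if key not in system.get("connection", {}):
                problems.append(f"{sid}: connection is missing '{key}'")
        if not system.get("auth"):
            problems.append(f"{sid}: no auth method configured")
        for source in system.get("log_sources", []):
            if not DURATION_RE.match(source.get("interval", "")):
                problems.append(f"{sid}: bad interval for {source.get('log_type')}")
        lookback = system.get("initial_lookback_window", "")
        if lookback and not DURATION_RE.match(lookback):
            problems.append(f"{sid}: bad initial_lookback_window")
    return problems
```

Run it against your configuration file before the upload; an empty list means none of these basic checks failed.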
Deploy the collector
Run the Application Telemetry Collector as a Docker container on your host machine.
Prepare host directory and credentials (external hosts)
If your host machine is outside Google Cloud, such as on-premises or in another cloud, and you don't use a VM service account, then prepare the configuration directory and credentials:
- Create the directory: Create the designated configuration directory:

  ```shell
  sudo mkdir -p /etc/sap-collector
  ```

- Prepare credentials: For hosts outside of Google Cloud, you can provide a service account key or configure Workload Identity Federation (WIF).
  - Service account key: Create a service account key and configure the key as application default credentials on your host. Rename the downloaded JSON key to `creds.json` and move the key to the directory:

    ```shell
    sudo mv /path/to/downloaded-key.json /etc/sap-collector/creds.json
    ```

  - Workload Identity Federation: Configure Workload Identity Federation to authenticate using an external identity provider.
Run the collector container
Choose the deployment command that matches your host environment:
Google Cloud
Use this command if your host runs on Google Cloud and uses an attached Service Account for authentication.
```shell
docker run -d \
  --name sap-telemetry-collector \
  --restart always \
  --network host \
  -e COLLECTOR_GCS_BUCKET=gs://YOUR_BUCKET_NAME \
  COLLECTOR_IMAGE_PATH
```
External host
Use this command if your host is outside Google Cloud. This command uses the Docker volume mount (`-v`) to link your host's `/etc/sap-collector` directory to the container's `/tmp/keys` directory, and the `GOOGLE_APPLICATION_CREDENTIALS` environment variable (`-e`) to point the collector to your `creds.json` key.
```shell
docker run -d \
  --name sap-telemetry-collector \
  --restart always \
  --network host \
  -v /etc/sap-collector:/tmp/keys:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/creds.json \
  -e COLLECTOR_GCS_BUCKET=gs://YOUR_BUCKET_NAME \
  COLLECTOR_IMAGE_PATH
```
Replace the following:
- `YOUR_BUCKET_NAME`: The name of the Cloud Storage bucket where you uploaded the SAP JCo libraries and the `collector_config.json` file. Ensure the value follows the format `gs://BUCKET_NAME` without trailing slashes or subdirectories.
- `COLLECTOR_IMAGE_PATH`: The URI of the collector image. The collector is available to download from the following regional Artifact Registry paths:
  - `us-docker.pkg.dev/sap-core-eng-products/sap-application-telemetry/google-cloud-sap-application-telemetry:TAG`
  - `europe-docker.pkg.dev/sap-core-eng-products/sap-application-telemetry/google-cloud-sap-application-telemetry:TAG`
  - `asia-docker.pkg.dev/sap-core-eng-products/sap-application-telemetry/google-cloud-sap-application-telemetry:TAG`

  Replace `TAG` with the version of the collector image that you want to download, for example, `latest`. For more information about how tags work, see Artifact Registry container concepts.
Install the Bindplane agent on the Application Telemetry Collector host
To forward application logs, install the Bindplane agent on the Application Telemetry Collector host by running the command generated in the Bindplane UI. For detailed instructions, see the Bindplane agent installation guide.
To verify the installation, go to the Agents tab in the Bindplane UI and confirm that the agent is visible with a Connected status.
Configure the Bindplane gateway
Configure a gateway in the Bindplane UI to receive logs from the Application Telemetry Collector and forward them to Google SecOps.
Create the log forwarding configuration
Define the basic settings for the pipeline on your central Bindplane server.
- In the Bindplane UI, go to the Configurations tab.
- Click Create Configuration.
- Enter a Name, for example, `sap-app-logs`.
- Select BDOT 1.x (Stable) as the Agent Type.
- Select Linux as the Platform.
- Click Next.
Add the Bindplane gateway source
Configure an OpenTelemetry (OTLP) listener on the Bindplane server to receive logs from the Application Telemetry Collector.
- In the Add Sources stage, select the Bindplane Gateway source type.
- Enter a Short Description.
- In the Listen Address field, enter `0.0.0.0`.
- Ensure the Port is set to `4317` (gRPC) and ensure that your host's firewall allows traffic on this port.
- Click Save.
Add the Batch processor
To prevent overwhelming the Google SecOps API, batch your requests.
- Click Add Processor and select Batch.
- Use the recommended defaults (usually `8192` units or a `200ms` timeout).
- Click Save.
Obtain Google SecOps credentials
To securely forward logs to Google SecOps, you must obtain your ingestion authentication file and customer ID from the Google SecOps console.
Google SecOps ingestion authentication file
To download the ingestion authentication file, do the following:
- Open the Google SecOps console.
- Go to SIEM Settings > Collection Agent.
- Download the Google SecOps ingestion authentication file. Your next step depends on your transfer method:
- gRPC: Use the downloaded ingestion authentication file.
- HTTPS: Create a service account in the Google Cloud project linked to Google SecOps and assign the Chronicle Editor role to the service account.
Google SecOps customer ID
To find the customer ID, do the following:
- Open the Google SecOps console.
- Go to SIEM Settings > Profile.
- Copy the customer ID from the Organization Details section.
Create the Google SecOps destination in Bindplane
Set up the final destination in Bindplane to securely forward your processed logs to Google SecOps.
- In the Bindplane UI, go to the Destinations tab and click Create Destination.
- Select Google SecOps as the destination type.
- Enter a Name, for example, `secops-destination`.
- In the Customer ID field, paste your Google SecOps customer ID.
- In the Credentials field, click Upload and select the ingestion authentication file you downloaded, or paste the file content directly.
- Click Save.
Add the Monitoring destination
To monitor the health of your collectors within Monitoring, ensure that heartbeat metrics are enabled and routed through the gateway.
- Click Add Destination and select Google Cloud Monitoring.
- In the Project ID field, enter your Google Cloud project ID.
- In the Authentication field, select Auto if your Bindplane server is running on a Google Cloud instance. Otherwise, provide your Service Account credentials.
- Click Save.
Deploy the configuration to agents
Once the configuration is ready, assign the configuration to your installed Bindplane agents.
- In the Bindplane UI, go to the Configurations tab.
- Select your new configuration, for example, `sap-app-logs`.
- Click Add Agents.
- Select the Bindplane agent on the Application Telemetry Collector host and click Apply.
Confirm your log ingestion
To ensure the setup is working correctly, follow the verification steps in Confirm your log ingestion.
When verifying application logs in Google SecOps, use a UDM query to filter by your specific log type, for example, `metadata.log_type = "SAP_SECURITY_AUDIT"`. For more information, see Search and filter SAP logs.
Troubleshooting
For information about diagnosing and resolving common issues related to setting up Google SecOps for SAP, see Troubleshoot SAP log ingestion.
Get support
For issues related to Google SecOps for SAP, contact Google SecOps support. Our team provides assistance or guides you to the right resource to help ensure a timely resolution.
For issues involving SAP systems or the LogServ service, contact SAP support. For issues related to other third-party products, such as Bindplane, contact the appropriate third-party vendor for assistance.
Get technical answers and peer support in the Google SecOps Community.
What's next
- Detect and investigate threats in SAP logs
- SAP to UDM field mapping reference
- Troubleshoot SAP log ingestion