Collect NGINX logs

This document explains how to ingest NGINX logs to Google Security Operations using the Bindplane agent.

NGINX is a web server and reverse proxy that generates syslog messages for HTTP access events, error events, authentication activity, and process information. The parser extracts fields from multiple log formats (syslog, JSON, access logs) using grok patterns and maps them to the Unified Data Model (UDM).
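As a rough illustration of the kind of extraction the parser performs, the following Python sketch pulls comparable fields out of a combined-format access log line with a regular expression. The pattern is illustrative only, not the parser's actual grok definition; field names mirror those in the UDM mapping table below.

```python
import re

# Illustrative pattern for the NGINX "combined" access log format.
# This approximates grok-style extraction; it is NOT the parser's
# real pattern, just a sketch of the same idea.
ACCESS_RE = re.compile(
    r'(?P<src_ip>\S+) \S+ (?P<acct>\S+) '
    r'\[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<target_path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<response_code>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referral_url>[^"]*)" "(?P<user_agent>[^"]*)"'
)

def parse_access_line(line: str) -> dict:
    """Return extracted fields, or an empty dict if the line does not match."""
    m = ACCESS_RE.match(line)
    return m.groupdict() if m else {}

sample = ('203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] '
          '"GET /index.html HTTP/1.1" 200 612 "-" "curl/8.0.1"')
fields = parse_access_line(sample)
```

Lines that do not match such a pattern fall through to more generic handling, which is why the mapping table below describes default event types for non-matching messages.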

Before you begin

Make sure you have the following prerequisites:

  • A Google SecOps instance
  • Windows Server 2016 or later, or a Linux host with systemd
  • Network connectivity between the Bindplane agent and the NGINX server
  • If running behind a proxy, ensure firewall ports are open per the Bindplane agent requirements
  • Administrative access to the NGINX host

Get Google SecOps ingestion authentication file

  1. Sign in to the Google SecOps console.
  2. Go to SIEM Settings > Collection Agents.
  3. Download the Ingestion Authentication File.
  4. Save the file securely on the system where the Bindplane agent will be installed.

Get Google SecOps customer ID

  1. Sign in to the Google SecOps console.
  2. Go to SIEM Settings > Profile.
  3. Copy and save the Customer ID from the Organization Details section.

Install the Bindplane agent

Install the Bindplane agent on your Windows or Linux operating system according to the following instructions.

Windows installation

  1. Open Command Prompt or PowerShell as an administrator.
  2. Run the following command:

    msiexec /i "https://github.com/observIQ/bindplane-agent/releases/latest/download/observiq-otel-collector.msi" /quiet
    
  3. Wait for the installation to complete.

  4. Verify the installation by running:

    sc query observiq-otel-collector
    

    The service should show as RUNNING.

Linux installation

  1. Open a terminal with root or sudo privileges.
  2. Run the following command:

    sudo sh -c "$(curl -fsSL https://github.com/observiq/bindplane-agent/releases/latest/download/install_unix.sh)" install_unix.sh
    
  3. Wait for the installation to complete.

  4. Verify the installation by running:

    sudo systemctl status observiq-otel-collector
    

    The service should show as active (running).

Additional installation resources

For additional installation options and troubleshooting, see the Bindplane agent installation guide.

Configure the Bindplane agent to ingest syslog and send to Google SecOps

Locate the configuration file

  • Linux:

    sudo nano /opt/observiq-otel-collector/config.yaml
    
  • Windows:

    notepad "C:\Program Files\observIQ OpenTelemetry Collector\config.yaml"
    

Edit the configuration file

  • Replace the entire contents of config.yaml with the following configuration:

    receivers:
        udplog:
            listen_address: "0.0.0.0:514"
    
    exporters:
        chronicle/nginx:
            compression: gzip
            creds_file_path: '/etc/bindplane-agent/ingestion-auth.json'
            customer_id: '<customer_id>'
            endpoint: malachiteingestion-pa.googleapis.com
            log_type: NGINX
            raw_log_field: body
    
    service:
        pipelines:
            logs/nginx_to_chronicle:
                receivers:
                    - udplog
                exporters:
                    - chronicle/nginx
    

Configuration parameters

Replace the following placeholders:

  • Receiver configuration:

    • listen_address: IP address and port to listen on:
      • 0.0.0.0 to listen on all interfaces (recommended)
      • Port 514 is the standard syslog port (requires root on Linux; use 1514 for non-root)
  • Exporter configuration:

    • creds_file_path: Full path to ingestion authentication file:
      • Linux: /etc/bindplane-agent/ingestion-auth.json
      • Windows: C:\Program Files\observIQ OpenTelemetry Collector\ingestion-auth.json
    • customer_id: Customer ID copied from the Google SecOps console
    • endpoint: Regional endpoint URL:
      • US: malachiteingestion-pa.googleapis.com
      • Europe: europe-malachiteingestion-pa.googleapis.com
      • Asia: asia-southeast1-malachiteingestion-pa.googleapis.com
      • See Regional Endpoints for complete list
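As a filled-in example, an exporter section for a Windows host sending to the Europe region might look like the following. The credentials path assumes the default install directory, and the <customer_id> placeholder still needs your own value:

```yaml
exporters:
    chronicle/nginx:
        compression: gzip
        creds_file_path: 'C:\Program Files\observIQ OpenTelemetry Collector\ingestion-auth.json'
        customer_id: '<customer_id>'
        endpoint: europe-malachiteingestion-pa.googleapis.com
        log_type: NGINX
        raw_log_field: body
```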

Save the configuration file

  • After editing, save the file:
    • Linux: Press Ctrl+O, then Enter, then Ctrl+X
    • Windows: Click File > Save

Restart the Bindplane agent to apply the changes

  • To restart the Bindplane agent in Linux, run the following command:

    sudo systemctl restart observiq-otel-collector
    
    1. Verify the service is running:

      sudo systemctl status observiq-otel-collector
      
    2. Check logs for errors:

      sudo journalctl -u observiq-otel-collector -f
      
  • To restart the Bindplane agent in Windows, choose one of the following options:

    • Command Prompt as administrator (in Windows PowerShell, run the two commands separately):

      net stop observiq-otel-collector && net start observiq-otel-collector
      
    • Services console:

      1. Press Win+R, type services.msc, and press Enter.
      2. Locate observIQ OpenTelemetry Collector.
      3. Right-click and select Restart.
      4. Verify the service is running:

        sc query observiq-otel-collector
        
      5. Check logs for errors:

        type "C:\Program Files\observIQ OpenTelemetry Collector\log\collector.log"
        
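Before pointing NGINX at the agent, you can check that the udplog receiver accepts datagrams by sending a test message yourself. The following is a minimal sketch using hypothetical values; it assumes the agent runs on the local host with the default port 514, so adjust the host and port to match your listen_address. Because UDP is connectionless, a successful send does not prove delivery; check the agent's logs to confirm the message arrived.

```python
import socket

# Hypothetical test values; replace with your agent's address and port.
AGENT_HOST = "127.0.0.1"
AGENT_PORT = 514

# A syslog-style test line. sendto returns the number of bytes queued,
# even if nothing is listening, so also check the collector logs.
message = b"<190>nginx_access: test message from ingestion check"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(message, (AGENT_HOST, AGENT_PORT))
sock.close()
```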

Configure NGINX to forward logs to Bindplane

  1. Open the NGINX configuration file (for example, /etc/nginx/nginx.conf):

    sudo vi /etc/nginx/nginx.conf
    
  2. Edit the configuration, replacing <BINDPLANE_SERVER> and <BINDPLANE_PORT> with your values:

    http {
        access_log syslog:server=<BINDPLANE_SERVER>:<BINDPLANE_PORT>,facility=local7,tag=nginx_access;
        error_log syslog:server=<BINDPLANE_SERVER>:<BINDPLANE_PORT>,facility=local7,tag=nginx_error;
    }
    
  3. Reload NGINX to apply the changes:

    sudo systemctl reload nginx
    
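As a concrete example with hypothetical values, if the agent listens on 10.0.0.5 port 1514, the directives would read as follows. NGINX sends syslog over UDP, which matches the udplog receiver configured earlier; the trailing level on error_log is the directive's usual optional level argument:

```nginx
http {
    # Hypothetical agent address and non-privileged port.
    access_log syslog:server=10.0.0.5:1514,facility=local7,tag=nginx_access;
    # "warn" here is the standard error_log level argument (optional).
    error_log syslog:server=10.0.0.5:1514,facility=local7,tag=nginx_error warn;
}
```

After editing, you can validate the configuration with sudo nginx -t before reloading.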

UDM mapping table

Log Field | UDM Mapping | Logic
_Internal_WorkspaceResourceId | target.resource.product_object_id | Directly mapped
Computer | principal.asset.hostname | Directly mapped
Facility | additional.fields[facility] | Directly mapped
HostName | principal.asset.hostname | Directly mapped if src_ip is not present
ProcessName | principal.application | Directly mapped
SeverityLevel | security_result.severity | Mapped to INFORMATIONAL if the value is info
SourceSystem | principal.asset.platform | Mapped to LINUX if the value matches Linux
SyslogMessage | Multiple fields | Parsed using grok to extract time, method, target_path, protocol, response_code, referral_url, user_agent, target_ip, target_host, and cache
TenantId | additional.fields[TenantId] | Directly mapped
acct | principal.user.user_id | Directly mapped if not empty or ?
addr | principal.asset.ip | Directly mapped
audit_epoch | metadata.event_timestamp | Converted to a timestamp using the UNIX format; nanoseconds are extracted from the original log message
cache | additional.fields[cache] | Directly mapped
collection_time.nanos | metadata.event_timestamp.nanos | Used for the nanoseconds of the event timestamp if available
collection_time.seconds | metadata.event_timestamp.seconds | Used for the seconds of the event timestamp if available
data | Multiple fields | The main source of data, parsed differently based on the log format (syslog, JSON, or other)
exe | target.process.command_line | Directly mapped after removing backslashes and quotes
hostname | principal.asset.hostname or principal.asset.ip | If the value is an IP address, mapped to principal.asset.ip; otherwise mapped to principal.asset.hostname
msg | metadata.description | Directly mapped as the description
node | target.asset.hostname | Directly mapped
pid | target.process.pid | Directly mapped
protocol | network.application_protocol | Mapped to HTTP if the value matches HTTP
referral_url | network.http.referral_url | Directly mapped if not empty or -
res | security_result.action_details | Directly mapped
response_code | network.http.response_code | Directly mapped and converted to an integer
ses | network.session_id | Directly mapped
src_ip | principal.asset.ip | Directly mapped
target_host | target.asset.hostname | Directly mapped
target_ip | target.asset.ip | Directly mapped after converting the string representation to a JSON array and extracting the individual IPs
target_path | target.url | Directly mapped
time | metadata.event_timestamp | Parsed to extract the timestamp using the format dd/MMM/yyyy:HH:mm:ss Z
user_agent | network.http.user_agent | Directly mapped if not empty or -
N/A | metadata.event_type | Set to GENERIC_EVENT initially, then potentially overwritten based on other fields such as terminal and protocol. Set to NETWORK_HTTP if protocol is HTTP and target_ip is present, STATUS_UPDATE if protocol is HTTP but target_ip is not present, and USER_UNCATEGORIZED if the main grok pattern does not match
N/A | metadata.log_type | Set to NGINX
N/A | metadata.product_name | Set to NGINX
N/A | metadata.vendor_name | Set to NGINX
N/A | network.ip_protocol | Set to TCP if terminal is sshd or ssh, or if the main grok pattern does not match
N/A | principal.asset_id | Set to GCP.GCE:0001 if terminal is sshd or ssh; set to GCP.GCE:0002 if the main grok pattern does not match
N/A | extensions.auth.type | Set to MACHINE if terminal is sshd or ssh
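The time conversion described in the table (format dd/MMM/yyyy:HH:mm:ss Z) corresponds to the standard NGINX $time_local representation. As a sketch, the equivalent conversion in Python looks like this:

```python
from datetime import datetime

# Sample NGINX $time_local value, as found inside the [...] portion
# of an access log line.
raw = "10/Oct/2024:13:55:36 +0000"

# %d/%b/%Y:%H:%M:%S %z mirrors the dd/MMM/yyyy:HH:mm:ss Z format
# named in the mapping table.
ts = datetime.strptime(raw, "%d/%b/%Y:%H:%M:%S %z")
epoch_seconds = int(ts.timestamp())
```

Because the format carries an explicit UTC offset, the resulting timestamp is timezone-aware and converts cleanly to epoch seconds for metadata.event_timestamp.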

Need more help? Get answers from Community members and Google SecOps professionals.