Collect Teradata Database logs

This document explains how to ingest Teradata Database logs to Google Security Operations using Bindplane.

Teradata Database is an enterprise data warehouse platform that generates query, access, security audit, and system event logs. It provides high-performance analytics and data management for large-scale on-premises deployments.

Before you begin

Make sure you have the following prerequisites:

  • A Google SecOps instance
  • Windows Server 2016 or later, or Linux host with systemd
  • Network connectivity between the Bindplane agent and the Teradata Database server
  • If the agent runs behind a proxy, ensure that the firewall ports required by the Bindplane agent are open
  • Privileged access to the Teradata Database system with administrator permissions

Get Google SecOps ingestion authentication file

  1. Sign in to the Google SecOps console.
  2. Go to SIEM Settings > Collection Agents.
  3. Download the Ingestion Authentication File. Save the file securely on the system where Bindplane will be installed.

Get Google SecOps customer ID

  1. Sign in to the Google SecOps console.
  2. Go to SIEM Settings > Profile.
  3. Copy and save the Customer ID from the Organization Details section.

Install the Bindplane agent

Install the Bindplane agent on your Windows or Linux operating system according to the following instructions.

Windows installation

  1. Open Command Prompt or PowerShell as an administrator.
  2. Run the following command:

    msiexec /i "https://github.com/observIQ/bindplane-agent/releases/latest/download/observiq-otel-collector.msi" /quiet
    
  3. Wait for the installation to complete.

  4. Verify the installation by running:

    sc query observiq-otel-collector
    

The service should show as RUNNING.

Linux installation

  1. Open a terminal with root or sudo privileges.
  2. Run the following command:

    sudo sh -c "$(curl -fsSlL https://github.com/observiq/bindplane-agent/releases/latest/download/install_unix.sh)" install_unix.sh
    
  3. Wait for the installation to complete.

  4. Verify the installation by running:

    sudo systemctl status observiq-otel-collector
    

The service should show as active (running).

Additional installation resources

For additional installation options and troubleshooting, see the Bindplane agent installation guide.

Configure Bindplane agent to ingest syslog and send to Google SecOps

Locate the configuration file

  • Linux:

    sudo nano /etc/bindplane-agent/config.yaml
    
  • Windows:

    notepad "C:\Program Files\observIQ OpenTelemetry Collector\config.yaml"
    

Edit the configuration file

  • Replace the entire contents of config.yaml with the following configuration:

    receivers:
        tcplog:
            listen_address: "0.0.0.0:514"
    
    exporters:
        chronicle/teradata_db:
            compression: gzip
            creds_file_path: '/etc/bindplane-agent/ingestion-auth.json'
            customer_id: '<customer_id>'
            endpoint: malachiteingestion-pa.googleapis.com
            log_type: TERADATA_DB
            raw_log_field: body
    
    service:
        pipelines:
            logs/teradata_db_to_chronicle:
                receivers:
                    - tcplog
                exporters:
                    - chronicle/teradata_db
    

Configuration parameters

Replace the following placeholders:

  • Receiver configuration:

    • listen_address: IP address and port to listen on:
      • 0.0.0.0 to listen on all interfaces (recommended)
      • Port 514 is the standard syslog port (requires root on Linux; use 1514 for non-root)
  • Exporter configuration:

    • creds_file_path: Full path to ingestion authentication file:
      • Linux: /etc/bindplane-agent/ingestion-auth.json
      • Windows: C:\Program Files\observIQ OpenTelemetry Collector\ingestion-auth.json
    • customer_id: Customer ID copied from the Google SecOps console
    • endpoint: Regional endpoint URL:
      • US: malachiteingestion-pa.googleapis.com
      • Europe: europe-malachiteingestion-pa.googleapis.com
      • Asia: asia-southeast1-malachiteingestion-pa.googleapis.com
      • See Regional Endpoints for the complete list
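
For reference, a fully substituted exporter section might look like the following. The customer ID shown is an example UUID for illustration only; use the value copied from your Google SecOps console:

    exporters:
        chronicle/teradata_db:
            compression: gzip
            creds_file_path: '/etc/bindplane-agent/ingestion-auth.json'
            customer_id: '01234567-89ab-cdef-0123-456789abcdef'  # example only
            endpoint: malachiteingestion-pa.googleapis.com       # US endpoint
            log_type: TERADATA_DB
            raw_log_field: body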

Save the configuration file

  • After editing, save the file:
    • Linux: Press Ctrl+O, then Enter, then Ctrl+X
    • Windows: Click File > Save

Restart the Bindplane agent to apply the changes

To restart the Bindplane agent in Linux:

  1. Run the following command:

    sudo systemctl restart observiq-otel-collector
    
  2. Verify the service is running:

    sudo systemctl status observiq-otel-collector
    
  3. Check logs for errors:

    sudo journalctl -u observiq-otel-collector -f
    

To restart the Bindplane agent in Windows:

  1. Choose one of the following options:

    • Command Prompt or PowerShell as administrator (in Windows PowerShell 5.x, run the two commands separately, since && is not supported):
    net stop observiq-otel-collector && net start observiq-otel-collector
    
    • Services console:
      1. Press Win+R, type services.msc, and press Enter.
      2. Locate observIQ OpenTelemetry Collector.
      3. Right-click and select Restart.
  2. Verify the service is running:

    sc query observiq-otel-collector
    
  3. Check logs for errors:

    type "C:\Program Files\observIQ OpenTelemetry Collector\log\collector.log"
    

Configure Teradata Database syslog forwarding

  1. Sign in to the Teradata Database server as an administrator.
  2. Configure syslog forwarding using the Teradata Database Gateway logging facility. Edit the rsyslog configuration file:

    sudo nano /etc/rsyslog.d/teradata.conf
    
  3. Add the following lines to forward Teradata logs to the Bindplane agent:

    # Forward Teradata Database logs to Bindplane agent
    if $programname == 'teradata' then @@<bindplane-ip>:514
    
    • Replace <bindplane-ip> with the IP address of the Bindplane agent host.
  4. Restart the rsyslog service:

    sudo systemctl restart rsyslog
    
  5. Verify that logs are being sent by checking the Bindplane agent logs.
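
If you don't want to wait for real Teradata activity, you can push a syslog-style test line at the tcplog listener yourself. The following is a minimal sketch; the target host 192.0.2.10 and the message contents are illustrative assumptions, not values from your environment:

```python
import socket
from datetime import datetime

def send_test_event(host: str, port: int = 514) -> str:
    """Send one syslog-style line over TCP, mimicking rsyslog's @@ forwarding."""
    line = (f"<14>{datetime.now():%b %d %H:%M:%S} tdhost "
            "teradata: test logon event\n")
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("utf-8"))
    return line

# Example (replace with your Bindplane agent's address and port):
# send_test_event("192.0.2.10", 514)
```

After sending, the line should appear in the Bindplane agent logs and, shortly after, in Google SecOps under the TERADATA_DB log type.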

UDM mapping table

| Log Field | UDM Mapping | Logic |
| --- | --- | --- |
|  | additional.fields | Merged from various labels like collect_timestamp_label, logon_date_time_label, logon_source_label, protocol_label, remote_hostname_label, logon_details_label, port_label, connection_id_label, cid_label, jdbc_driver_info_label, request_mode_label, statement_group_label, statement_type_label |
| app_id | target.asset.product_object_id | Value copied directly |
| client_addr | principal.asset.hostname, principal.asset.ip, principal.hostname, principal.ip | Value copied directly to all |
| client_id | target.asset.asset_id | Concatenated as CLIENT_ID:%{client_id} |
| collect_timestamp | metadata.collected_timestamp | Converted using date match with formats RFC3339, UNIX_MS, UNIX, ISO8601, yyyy-MM-dd HH:mm:ss |
| default_database | target.resource.name, target.resource.resource_type | Value copied directly to name; resource_type set to DATABASE |
| error_code | network.http.response_code, additional.fields | Converted to integer for response_code; copied to error_code_label merged into additional.fields |
| error_text | security_result.summary | Value copied directly |
| host_name | principal.asset.hostname, principal.hostname | Value copied directly to both |
| logon_date_time | metadata.event_timestamp | Converted using date match with formats ISO8601, yyyy-MM-dd HH:mm:ss |
| logon_source | target.resource.parent | Value copied directly |
| logon_source | additional.fields | Extracted logonSource using grok, then set as %{logonSource}LSS in logon_source_label merged into additional.fields |
| logon_source | additional.fields | Extracted protocol, remote_hostname, port, logon_details, connection_id, cid, jdbc_driver_info using grok, then set in the respective labels merged into additional.fields |
| proc_id | target.resource.id | Value copied directly |
| profile_name | target.resource.attribute.labels | Set in profile_name_label merged into attribute.labels |
| query_text | target.process.command_line | Value copied directly |
| request_mode | target.resource.resource_subtype, additional.fields | Value copied directly to resource_subtype; set in request_mode_label merged into additional.fields |
| session_id | network.session_id | Value copied directly |
| statement_group | principal.user.groupid, additional.fields | Value copied directly to groupid; set in statement_group_label merged into additional.fields |
| statement_type | security_result.about.labels, additional.fields | Set in statementtype_value_label merged into about.labels; set in statement_type_label merged into additional.fields |
| ts | metadata.event_timestamp | Converted using date match with formats MMM d HH:mm:ss, MMM dd HH:mm:ss, SYSLOGTIMESTAMP, RFC3339, UNIX, ISO8601, UNIX_MS |
| tt_granularity | target.resource.product_object_id | Value copied directly |
| user_id | principal.user.userid | Value copied directly |
| user_name | principal.user.user_display_name | Value copied directly |
|  | metadata.event_type | Set to USER_RESOURCE_ACCESS if has_principal, has_principal_user, and has_target_resource are true; else STATUS_UPDATE if has_principal is true; else GENERIC_EVENT |
|  | metadata.product_name | Set to "TERADATA_DB" |
|  | metadata.vendor_name | Set to "TERADATA" |
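
As an illustration of the mappings above, the sketch below shows how a few fields from a hypothetical parsed Teradata record land in UDM. The sample record values are invented; the parser performs these assignments automatically, so this is only a visualization of the table, not code you need to run:

```python
# Hypothetical parsed Teradata record (sample values for illustration only).
record = {
    "user_id": "1001",
    "user_name": "dbadmin",
    "client_addr": "10.1.2.3",
    "default_database": "SALES",
    "query_text": "SELECT * FROM orders;",
}

# Assignments follow the UDM mapping table above.
udm = {
    "metadata": {
        "product_name": "TERADATA_DB",
        "vendor_name": "TERADATA",
        # principal, principal.user, and target.resource are all present:
        "event_type": "USER_RESOURCE_ACCESS",
    },
    "principal": {
        "user": {
            "userid": record["user_id"],
            "user_display_name": record["user_name"],
        },
        "ip": record["client_addr"],
        "hostname": record["client_addr"],
    },
    "target": {
        "resource": {
            "name": record["default_database"],
            "resource_type": "DATABASE",
        },
        "process": {"command_line": record["query_text"]},
    },
}
```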

Need more help? Get answers from Community members and Google SecOps professionals.