Collect Splunk Attack Analyzer logs

Supported in: Google SecOps SIEM

This document explains how to ingest Splunk Attack Analyzer logs into Google Security Operations (Google SecOps) using a Google Cloud Storage V2 feed.

Splunk Attack Analyzer (formerly TwinWave) is an automated threat analysis platform that detects phishing and malware through behavioral analysis. It provides completed job results and normalized forensics data through a REST API.

Before you begin

Make sure you have the following prerequisites:

  • A Google SecOps instance
  • A GCP project with the Cloud Storage API enabled
  • Permissions to create and manage GCS buckets
  • Permissions to manage IAM policies on GCS buckets
  • Permissions to create Cloud Run services, Pub/Sub topics, and Cloud Scheduler jobs
  • Privileged access to Splunk Attack Analyzer with API key generation permissions

Create Google Cloud Storage bucket

  1. Go to the Google Cloud Console.
  2. Select your project or create a new one.
  3. In the navigation menu, go to Cloud Storage > Buckets.
  4. Click Create bucket.
  5. Provide the following configuration details:

    • Name your bucket: Enter a globally unique name (for example, splunk-attack-analyzer-logs).
    • Location type: Choose based on your needs (Region, Dual-region, or Multi-region).
    • Location: Select the location (for example, us-central1).
    • Storage class: Standard (recommended for frequently accessed logs).
    • Access control: Uniform (recommended).
    • Protection tools: Optional. Enable object versioning or a retention policy.
  6. Click Create.
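
If you prefer the command line, the following is a minimal gcloud sketch of the same bucket setup; the bucket name, project ID, and location are the example values from the steps above and should be replaced with your own:

    # Sketch only: substitute your own bucket name, project, and location
    gcloud storage buckets create gs://splunk-attack-analyzer-logs \
      --project=your-project-id \
      --location=us-central1 \
      --default-storage-class=STANDARD \
      --uniform-bucket-level-access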

Collect Splunk Attack Analyzer API credentials

Generate API key

  1. Log in to Splunk Attack Analyzer.
  2. Select your username in the top-right corner, then select API Keys.
  3. Click + New Key.
  4. Enter a descriptive name for the key (for example, Google Security Operations Integration).
  5. Click Create.
  6. Copy the API secret displayed in the dialog and save it in a secure location.

Verify permissions

To verify the API key has the required access:

  1. Log in to Splunk Attack Analyzer.
  2. Select your username in the top-right corner, then select API Keys.
  3. Verify the API key is listed and active.

Test API access

Test your credentials before proceeding with the integration:

    # Replace with your actual API key
    API_KEY="your-api-key"
    
    # Test API access - list completed jobs
    curl -v -H "Authorization: Bearer ${API_KEY}" \
      "https://app.twinwave.io/api/v1/jobs?done=true&limit=1"
    

Create service account for Cloud Run function

The Cloud Run function needs a service account with permissions to write to the GCS bucket and to be invoked by Pub/Sub.

Create service account

  1. In the GCP Console, go to IAM & Admin > Service Accounts.
  2. Click Create Service Account.
  3. Provide the following configuration details:
    • Service account name: Enter saa-collector-sa.
    • Service account description: Enter Service account for Cloud Run function to collect Splunk Attack Analyzer logs.
  4. Click Create and Continue.
  5. In the Grant this service account access to project section, add the following roles:
    1. Click Select a role.
    2. Search for and select Storage Object Admin.
    3. Click + Add another role.
    4. Search for and select Cloud Run Invoker.
    5. Click + Add another role.
    6. Search for and select Cloud Functions Invoker.
  6. Click Continue.
  7. Click Done.

These roles are required for:

  • Storage Object Admin: Write logs to GCS bucket and manage state files
  • Cloud Run Invoker: Allow Pub/Sub to invoke the function
  • Cloud Functions Invoker: Allow function invocation
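
If you prefer the CLI, a roughly equivalent setup is sketched below; your-project-id is a placeholder for your actual project:

    # Create the service account
    gcloud iam service-accounts create saa-collector-sa \
      --project=your-project-id \
      --display-name="SAA collector service account"

    # Grant the three project-level roles listed above
    for role in roles/storage.objectAdmin roles/run.invoker roles/cloudfunctions.invoker; do
      gcloud projects add-iam-policy-binding your-project-id \
        --member="serviceAccount:saa-collector-sa@your-project-id.iam.gserviceaccount.com" \
        --role="$role"
    done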

Grant IAM permissions on GCS bucket

Grant the service account write permissions on the GCS bucket:

  1. Go to Cloud Storage > Buckets.
  2. Click your bucket name (for example, splunk-attack-analyzer-logs).
  3. Go to the Permissions tab.
  4. Click Grant access.
  5. Provide the following configuration details:
    • Add principals: Enter the service account email (for example, saa-collector-sa@your-project.iam.gserviceaccount.com).
    • Assign roles: Select Storage Object Admin.
  6. Click Save.
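
The same grant can be made from the CLI; the bucket and service account names are the examples used above:

    gcloud storage buckets add-iam-policy-binding gs://splunk-attack-analyzer-logs \
      --member="serviceAccount:saa-collector-sa@your-project.iam.gserviceaccount.com" \
      --role="roles/storage.objectAdmin"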

Create Pub/Sub topic

Create a Pub/Sub topic that Cloud Scheduler will publish to and the Cloud Run function will subscribe to.

  1. In the GCP Console, go to Pub/Sub > Topics.
  2. Click Create topic.
  3. Provide the following configuration details:
    • Topic ID: Enter saa-trigger.
    • Leave other settings as default.
  4. Click Create.
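
Equivalently, from the CLI:

    gcloud pubsub topics create saa-trigger --project=your-project-id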

Create Cloud Run function to collect logs

The Cloud Run function is triggered by Pub/Sub messages from Cloud Scheduler; it fetches logs from the Splunk Attack Analyzer API and writes them to GCS.

  1. In the GCP Console, go to Cloud Run.
  2. Click Create service.
  3. Select Function (use an inline editor to create a function).
  4. In the Configure section, provide the following configuration details:

    • Service name: saa-collector.
    • Region: Select a region matching your GCS bucket (for example, us-central1).
    • Runtime: Select Python 3.12 or later.
  5. In the Trigger (optional) section:

    1. Click + Add trigger.
    2. Select Cloud Pub/Sub.
    3. In Select a Cloud Pub/Sub topic, choose the topic saa-trigger.
    4. Click Save.
  6. In the Authentication section:

    1. Select Require authentication.
    2. Check Identity and Access Management (IAM).
  7. Scroll down and expand Containers, Networking, Security.

  8. Go to the Security tab:

    • Service account: Select the service account saa-collector-sa.
  9. Go to the Containers tab:

    1. Click Variables & Secrets.
    2. Click + Add variable for each environment variable:
    • GCS_BUCKET: splunk-attack-analyzer-logs (the GCS bucket name).
    • GCS_PREFIX: saa (prefix for log files).
    • STATE_KEY: saa/state.json (state file path).
    • API_KEY: your-api-key (the Splunk Attack Analyzer API key).
    • API_BASE: https://app.twinwave.io (the API base URL).
    • MAX_RECORDS: 5000 (maximum records per run).
    • PAGE_SIZE: 100 (records per page).
    • LOOKBACK_HOURS: 24 (initial lookback period, in hours).
  10. Scroll down to the Requests section:

    • Request timeout: Enter 600 seconds (10 minutes).
  11. Go to the Settings tab in Containers:

    • In the Resources section:
      • Memory: Select 512 MiB or higher.
      • CPU: Select 1.
  12. In the Revision scaling section:

    • Minimum number of instances: Enter 0.
    • Maximum number of instances: Enter 100 (or adjust based on expected load).
  13. Click Create.

  14. Wait for the service to be created (1-2 minutes).

  15. After the service is created, the inline code editor will open automatically.

Add function code

  1. Enter main in Function entry point.
  2. In the inline code editor, create two files:

    • First file: main.py:

      import functions_framework
      from google.cloud import storage
      import json
      import os
      import urllib3
      from datetime import datetime, timezone, timedelta
      import time
      
      # Initialize HTTP client with timeouts
      http = urllib3.PoolManager(
        timeout=urllib3.Timeout(connect=5.0, read=30.0),
        retries=False,
      )
      
      # Initialize Storage client
      storage_client = storage.Client()
      
      # Environment variables
      GCS_BUCKET = os.environ.get('GCS_BUCKET')
      GCS_PREFIX = os.environ.get('GCS_PREFIX', 'saa')
      STATE_KEY = os.environ.get('STATE_KEY', 'saa/state.json')
      API_KEY = os.environ.get('API_KEY', '')
      API_BASE = os.environ.get('API_BASE', 'https://app.twinwave.io').rstrip('/')
      MAX_RECORDS = int(os.environ.get('MAX_RECORDS', '5000'))
      PAGE_SIZE = int(os.environ.get('PAGE_SIZE', '100'))
      LOOKBACK_HOURS = int(os.environ.get('LOOKBACK_HOURS', '24'))
      
      def parse_datetime(value: str) -> datetime:
        """Parse ISO datetime string to datetime object."""
        if value.endswith("Z"):
          value = value[:-1] + "+00:00"
        return datetime.fromisoformat(value)
      
      @functions_framework.cloud_event
      def main(cloud_event):
        """
        Cloud Run function triggered by Pub/Sub to fetch Splunk Attack Analyzer logs and write to GCS.
      
        Args:
          cloud_event: CloudEvent object containing Pub/Sub message
        """
      
        if not all([GCS_BUCKET, API_KEY]):
          print('Error: Missing required environment variables')
          return
      
        try:
          bucket = storage_client.bucket(GCS_BUCKET)
      
          # Load state
          state = load_state(bucket, STATE_KEY)
      
          # Determine time window
          now = datetime.now(timezone.utc)
          last_time = None
      
          if isinstance(state, dict) and state.get("last_event_time"):
            try:
              last_time = parse_datetime(state["last_event_time"])
              last_time = last_time - timedelta(minutes=2)
            except Exception as e:
              print(f"Warning: Could not parse last_event_time: {e}")
      
          if last_time is None:
            last_time = now - timedelta(hours=LOOKBACK_HOURS)
      
          print(f"Fetching jobs from {last_time.isoformat()} to {now.isoformat()}")
      
          # Fetch completed jobs
          jobs, newest_event_time = fetch_jobs(
            start_time=last_time,
            end_time=now,
            page_size=PAGE_SIZE,
            max_records=MAX_RECORDS,
          )
      
          if not jobs:
            print("No new completed jobs found.")
            save_state(bucket, STATE_KEY, now.isoformat())
            return
      
          # Fetch forensics for each job
          all_records = []
          for job in jobs:
            job_id = job.get('id', '')
            if not job_id:
              continue
      
            forensics = fetch_forensics(job_id)
            if forensics:
              # Combine job metadata with forensics
              record = {
                'job': job,
                'forensics': forensics
              }
              all_records.append(record)
      
          if not all_records:
            print("No forensics data retrieved.")
            save_state(bucket, STATE_KEY, now.isoformat())
            return
      
          # Write to GCS as NDJSON
          timestamp = now.strftime('%Y%m%d_%H%M%S')
          object_key = f"{GCS_PREFIX}/logs_{timestamp}.ndjson"
          blob = bucket.blob(object_key)
      
          ndjson = '\n'.join([json.dumps(record, ensure_ascii=False) for record in all_records]) + '\n'
          blob.upload_from_string(ndjson, content_type='application/x-ndjson')
      
          print(f"Wrote {len(all_records)} records to gs://{GCS_BUCKET}/{object_key}")
      
          if newest_event_time:
            save_state(bucket, STATE_KEY, newest_event_time)
          else:
            save_state(bucket, STATE_KEY, now.isoformat())
      
          print(f"Successfully processed {len(all_records)} records")
      
        except Exception as e:
          print(f'Error processing logs: {str(e)}')
          raise
      
      def load_state(bucket, key):
        """Load state from GCS."""
        try:
          blob = bucket.blob(key)
          if blob.exists():
            state_data = blob.download_as_text()
            return json.loads(state_data)
        except Exception as e:
          print(f"Warning: Could not load state: {e}")
      
        return {}
      
      def save_state(bucket, key, last_event_time_iso: str):
        """Save the last event timestamp to GCS state file."""
        try:
          state = {'last_event_time': last_event_time_iso}
          blob = bucket.blob(key)
          blob.upload_from_string(
            json.dumps(state, indent=2),
            content_type='application/json'
          )
          print(f"Saved state: last_event_time={last_event_time_iso}")
        except Exception as e:
          print(f"Warning: Could not save state: {e}")
      
      def fetch_jobs(start_time: datetime, end_time: datetime, page_size: int, max_records: int):
        """
        Fetch completed jobs from Splunk Attack Analyzer API with pagination and rate limiting.
      
        Args:
          start_time: Start time for job query
          end_time: End time for job query
          page_size: Number of records per page
          max_records: Maximum total records to fetch
      
        Returns:
          Tuple of (jobs list, newest_event_time ISO string)
        """
        endpoint = f"{API_BASE}/api/v1/jobs"
      
        headers = {
          'Authorization': f'Bearer {API_KEY}',
          'Accept': 'application/json',
          'User-Agent': 'GoogleSecOps-SAACollector/1.0'
        }
      
        records = []
        newest_time = None
        page_num = 0
        backoff = 1.0
        offset = 0
      
        while True:
          if len(records) >= max_records:
            print(f"Reached max_records limit ({max_records})")
            break
      
          current_limit = min(page_size, max_records - len(records))
          url = f"{endpoint}?done=true&limit={current_limit}&offset={offset}"
      
          try:
            response = http.request('GET', url, headers=headers)
      
            if response.status == 429:
              retry_after = int(response.headers.get('Retry-After', str(int(backoff))))
              print(f"Rate limited (429). Retrying after {retry_after}s...")
              time.sleep(retry_after)
              backoff = min(backoff * 2, 30.0)
              continue
      
            backoff = 1.0
            page_num += 1  # count only pages that returned successfully
      
            if response.status != 200:
              print(f"HTTP Error: {response.status}")
              response_text = response.data.decode('utf-8')
              print(f"Response body: {response_text}")
              return [], None
      
            data = json.loads(response.data.decode('utf-8'))
      
            page_results = data.get('jobs', [])
      
            if not page_results:
              print(f"No more results (empty page)")
              break
      
            # Filter by time window
            filtered = []
            for job in page_results:
              created = job.get('created_at', '')
              if created:
                try:
                  job_time = parse_datetime(created)
                  if start_time <= job_time <= end_time:
                    filtered.append(job)
                    if newest_time is None or job_time > parse_datetime(newest_time):
                      newest_time = created
                except Exception as e:
                  print(f"Warning: Could not parse job time: {e}")
                  filtered.append(job)
      
            print(f"Page {page_num}: Retrieved {len(page_results)} jobs, {len(filtered)} in time window")
            records.extend(filtered)
      
            if len(page_results) < page_size:
              print(f"Reached last page (size={len(page_results)} < limit={page_size})")
              break
      
            offset += len(page_results)
      
          except Exception as e:
            print(f"Error fetching jobs: {e}")
            return [], None
      
        print(f"Retrieved {len(records)} total jobs from {page_num} pages")
        return records, newest_time
      
      def fetch_forensics(job_id: str):
        """
        Fetch normalized forensics for a specific job.
      
        Args:
          job_id: The job ID
      
        Returns:
          Forensics data dict or None
        """
        endpoint = f"{API_BASE}/api/v1/jobs/{job_id}/normalizedforensics"
      
        headers = {
          'Authorization': f'Bearer {API_KEY}',
          'Accept': 'application/json',
          'User-Agent': 'GoogleSecOps-SAACollector/1.0'
        }
      
        backoff = 1.0
        max_retries = 3
      
        for attempt in range(max_retries):
          try:
            response = http.request('GET', endpoint, headers=headers)
      
            if response.status == 429:
              retry_after = int(response.headers.get('Retry-After', str(int(backoff))))
              print(f"Rate limited (429) on forensics for job {job_id}. Retrying after {retry_after}s...")
              time.sleep(retry_after)
              backoff = min(backoff * 2, 30.0)
              continue
      
            if response.status != 200:
              print(f"Warning: Could not fetch forensics for job {job_id}: HTTP {response.status}")
              return None
      
            return json.loads(response.data.decode('utf-8'))
      
          except Exception as e:
            print(f"Warning: Error fetching forensics for job {job_id}: {e}")
            if attempt < max_retries - 1:
              time.sleep(backoff)
              backoff = min(backoff * 2, 30.0)
              continue
            return None
      
        return None
      
    • Second file: requirements.txt:

      functions-framework==3.*
      google-cloud-storage==2.*
      urllib3>=2.0.0
      
  3. Click Deploy to save and deploy the function.

  4. Wait for deployment to complete (2-3 minutes).
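
As an alternative to the inline editor, you can deploy from a local directory containing the same main.py and requirements.txt. A hedged gcloud sketch, using the example names and values from the steps above:

    # Run from the directory containing main.py and requirements.txt
    gcloud functions deploy saa-collector \
      --gen2 \
      --region=us-central1 \
      --runtime=python312 \
      --source=. \
      --entry-point=main \
      --trigger-topic=saa-trigger \
      --service-account=saa-collector-sa@your-project.iam.gserviceaccount.com \
      --memory=512Mi \
      --timeout=600s \
      --set-env-vars=GCS_BUCKET=splunk-attack-analyzer-logs,GCS_PREFIX=saa,STATE_KEY=saa/state.json,API_KEY=your-api-key,API_BASE=https://app.twinwave.io,MAX_RECORDS=5000,PAGE_SIZE=100,LOOKBACK_HOURS=24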

Create Cloud Scheduler job

Cloud Scheduler will publish messages to the Pub/Sub topic at regular intervals, triggering the Cloud Run function.

  1. In the GCP Console, go to Cloud Scheduler.
  2. Click Create Job.
  3. Provide the following configuration details:

    • Name: saa-collector-hourly.
    • Region: Select the same region as the Cloud Run function.
    • Frequency: 0 * * * * (every hour, on the hour).
    • Timezone: Select a timezone (UTC recommended).
    • Target type: Pub/Sub.
    • Topic: Select the topic saa-trigger.
    • Message body: {} (an empty JSON object).
  4. Click Create.
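
The equivalent job can be created from the CLI; the region and topic are the example values used above:

    gcloud scheduler jobs create pubsub saa-collector-hourly \
      --location=us-central1 \
      --schedule="0 * * * *" \
      --time-zone="Etc/UTC" \
      --topic=saa-trigger \
      --message-body="{}"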

Schedule frequency options

Choose frequency based on log volume and latency requirements:

  • Every 5 minutes (*/5 * * * *): High-volume, low-latency.
  • Every 15 minutes (*/15 * * * *): Medium volume.
  • Every hour (0 * * * *): Standard (recommended).
  • Every 6 hours (0 */6 * * *): Low volume, batch processing.
  • Daily (0 0 * * *): Historical data collection.
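
An existing job's frequency can be changed without recreating it. For example, to switch the job above to every 15 minutes:

    gcloud scheduler jobs update pubsub saa-collector-hourly \
      --location=us-central1 \
      --schedule="*/15 * * * *"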

Test the integration

  1. In the Cloud Scheduler console, find your job (saa-collector-hourly).
  2. Click Force run to trigger manually.
  3. Wait a few seconds and go to Cloud Run > Services > saa-collector > Logs.
  4. Verify the function executed successfully. Look for:

    Fetching jobs from YYYY-MM-DDTHH:MM:SS+00:00 to YYYY-MM-DDTHH:MM:SS+00:00
    Page 1: Retrieved X jobs, Y in time window
    Wrote Z records to gs://splunk-attack-analyzer-logs/saa/logs_YYYYMMDD_HHMMSS.ndjson
    Successfully processed Z records
    
  5. Check the GCS bucket (splunk-attack-analyzer-logs) to confirm logs were written.
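
The same checks can be run from the CLI; a sketch using the example names above:

    # Trigger the job manually
    gcloud scheduler jobs run saa-collector-hourly --location=us-central1

    # Read recent function logs (may require a recent gcloud version)
    gcloud run services logs read saa-collector --region=us-central1 --limit=50

    # Confirm log objects and the state file were written
    gcloud storage ls gs://splunk-attack-analyzer-logs/saa/
    gcloud storage cat gs://splunk-attack-analyzer-logs/saa/state.json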

If you see errors in the logs:

  • HTTP 401: Check API key in environment variables
  • HTTP 403: Verify API key has required permissions
  • HTTP 429: Rate limiting; the function automatically retries with backoff
  • Missing environment variables: Check all required variables are set

Configure a feed in Google SecOps to ingest Splunk Attack Analyzer logs

  1. Go to SIEM Settings > Feeds.
  2. Click Add New Feed.
  3. Click Configure a single feed.
  4. In the Feed name field, enter a name for the feed (for example, Splunk Attack Analyzer Logs).
  5. Select Google Cloud Storage V2 as the Source type.
  6. Select Splunk Attack Analyzer as the Log type.
  7. Click Get Service Account. A unique service account email will be displayed, for example:

    chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com
    
  8. Copy this email address. You will use it in the next step.

  9. Click Next.

  10. Specify values for the following input parameters:

    • Storage bucket URL: Enter the GCS bucket URI with the prefix path:

      gs://splunk-attack-analyzer-logs/saa/
      
      • Replace:
        • splunk-attack-analyzer-logs: Your GCS bucket name.
        • saa: Optional prefix/folder path where logs are stored (leave empty for root).
    • Source deletion option: Select the deletion option according to your preference:

      • Never: Never deletes any files after transfers (recommended for testing).
      • Delete transferred files: Deletes files after successful transfer.
      • Delete transferred files and empty directories: Deletes files and empty directories after successful transfer.

    • Maximum File Age: Include files modified within the last number of days. The default is 180 days.

    • Asset namespace: The asset namespace.

    • Ingestion labels: The label to be applied to the events from this feed.

  11. Click Next.

  12. Review your new feed configuration in the Finalize screen, and then click Submit.

Grant IAM permissions to the Google SecOps service account

The Google SecOps service account needs the Storage Object Viewer role on your GCS bucket.

  1. Go to Cloud Storage > Buckets.
  2. Click your bucket name (splunk-attack-analyzer-logs).
  3. Go to the Permissions tab.
  4. Click Grant access.
  5. Provide the following configuration details:
    • Add principals: Paste the Google SecOps service account email.
    • Assign roles: Select Storage Object Viewer.
  6. Click Save.
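
Or from the CLI, substituting the service account email you copied during the feed setup:

    gcloud storage buckets add-iam-policy-binding gs://splunk-attack-analyzer-logs \
      --member="serviceAccount:chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com" \
      --role="roles/storage.objectViewer"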

UDM mapping table

  • when → metadata.event_timestamp: When the event occurred.
  • deviceName → principal.hostname: Hostname of the principal.
  • messageid → metadata.id: Unique identifier for the event.
  • action → security_result.action: Action taken by the security product.
  • protocol → network.ip_protocol: IP protocol.
  • srcAddr → principal.ip: IP address of the principal.
  • srcPort → principal.port: Port number of the principal.
  • dstAddr → target.ip: IP address of the target.
  • dstPort → target.port: Port number of the target.
  • metadata.event_type: Type of event.
  • metadata.product_name: Product name.
  • metadata.vendor_name: Vendor/company name.

Need more help? Get answers from Community members and Google SecOps professionals.