Collect ServiceNow audit logs
This document explains how to ingest ServiceNow audit logs to Google Security Operations using multiple methods.
Option A: AWS S3 with Lambda
This method uses AWS Lambda to periodically query the ServiceNow REST API for audit logs and store them in an S3 bucket. Google Security Operations then collects the logs from the S3 bucket.
Before you begin
- A Google SecOps instance
- Privileged access to ServiceNow tenant or API
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Collect ServiceNow prerequisites (IDs, API keys, org IDs, tokens)
- Sign in to the ServiceNow Admin Console.
- Go to System Security > Users and Groups > Users.
- Create a new user or select an existing user with appropriate permissions to access audit logs.
- Copy and save in a secure location the following details:
- Username
- Password
- Instance URL (e.g., https://instance.service-now.com)
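Before moving on, you can optionally confirm that the account can read the sys_audit table over the REST API. The following is a minimal sketch using Python and urllib3 (the same library the collection code later in this guide uses); the instance URL and credentials are placeholders that you replace with the values you saved above.

```python
import base64
import json
import urllib3

# Placeholders - replace with the instance URL and credentials saved above
BASE_URL = 'https://instance.service-now.com'
USERNAME = '<your-username>'
PASSWORD = '<your-password>'

# Basic authentication header, the same scheme used by the collection scripts in this guide
auth = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode('ascii')).decode('ascii')
http = urllib3.PoolManager()

# Request a single record from the sys_audit table to confirm access
response = http.request(
    'GET',
    f"{BASE_URL}/api/now/table/sys_audit?sysparm_limit=1",
    headers={'Authorization': f'Basic {auth}', 'Accept': 'application/json'}
)

if response.status == 200:
    records = json.loads(response.data.decode('utf-8')).get('result', [])
    print(f"Access confirmed, sample records returned: {len(records)}")
else:
    print(f"Access check failed with HTTP {response.status}")
```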
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, servicenow-audit-logs).
- Create a User following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: Add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for later use.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
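To confirm the access key pair works before wiring up the rest of the pipeline, you can optionally run a quick upload test with boto3. This is a sketch, not part of the required setup; the bucket name and prefix are the examples used in this guide, and the region is an assumption you replace with the one you chose.

```python
import boto3

# Example values from this guide - replace with your own bucket, region, and keys
BUCKET = 'servicenow-audit-logs'
REGION = 'us-east-1'  # assumption: replace with the region you chose for the bucket

s3 = boto3.client(
    's3',
    region_name=REGION,
    aws_access_key_id='<ACCESS_KEY_ID>',
    aws_secret_access_key='<SECRET_ACCESS_KEY>',
)

# Write and read back a small test object under the audit-logs/ prefix
s3.put_object(Bucket=BUCKET, Key='audit-logs/connectivity-test.txt', Body=b'ok')
obj = s3.get_object(Bucket=BUCKET, Key='audit-logs/connectivity-test.txt')
print(obj['Body'].read())  # expected output: b'ok'
```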
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies > Create policy > JSON tab.
- Copy and paste the policy below:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::servicenow-audit-logs/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::servicenow-audit-logs/audit-logs/state.json"
    }
  ]
}
```

- Replace servicenow-audit-logs if you entered a different bucket name.
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role servicenow-audit-lambda-role and click Create role.
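If you prefer to script this step instead of using the console, the same policy and role can be created with boto3. The sketch below follows the examples in this guide; the policy name is hypothetical, and the trust policy is the standard one that lets Lambda assume the role.

```python
import json
import boto3

iam = boto3.client('iam')

# Permissions policy from the step above (bucket name is the guide's example)
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPutObjects", "Effect": "Allow", "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::servicenow-audit-logs/*"},
        {"Sid": "AllowGetStateObject", "Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::servicenow-audit-logs/audit-logs/state.json"},
    ],
}

# Standard Lambda trust policy so the function can assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"},
         "Action": "sts:AssumeRole"}
    ],
}

policy = iam.create_policy(
    PolicyName='servicenow-audit-s3-write',  # hypothetical policy name
    PolicyDocument=json.dumps(policy_doc),
)
iam.create_role(
    RoleName='servicenow-audit-lambda-role',
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName='servicenow-audit-lambda-role',
    PolicyArn=policy['Policy']['Arn'],
)
```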
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
Provide the following configuration details:

| Setting | Value |
| --- | --- |
| Name | servicenow-audit-collector |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | servicenow-audit-lambda-role |

After the function is created, open the Code tab, delete the stub, and enter the following code (servicenow-audit-collector.py):

```python
import urllib3
import json
import os
import datetime
import boto3
import base64

def lambda_handler(event, context):
    # ServiceNow API details
    base_url = os.environ['API_BASE_URL']  # e.g., https://instance.service-now.com
    username = os.environ['API_USERNAME']
    password = os.environ['API_PASSWORD']

    # S3 details
    s3_bucket = os.environ['S3_BUCKET']
    s3_prefix = os.environ['S3_PREFIX']

    # State management
    state_key = os.environ.get('STATE_KEY', f"{s3_prefix}/state.json")

    # Pagination settings
    page_size = int(os.environ.get('PAGE_SIZE', '1000'))
    max_pages = int(os.environ.get('MAX_PAGES', '1000'))

    # Initialize S3 client
    s3 = boto3.client('s3')

    # Get last run timestamp from state file
    last_run_timestamp = get_last_run_timestamp(s3, s3_bucket, state_key)

    # Current timestamp for this run
    current_timestamp = datetime.datetime.now().isoformat()

    # Query ServiceNow API for audit logs with pagination
    audit_logs = get_audit_logs(base_url, username, password, last_run_timestamp, page_size, max_pages)

    if audit_logs:
        # Write logs to S3 in NDJSON format (newline-delimited JSON)
        timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S')
        s3_key = f"{s3_prefix}/servicenow-audit-{timestamp}.ndjson"

        # Format as NDJSON: one JSON object per line
        body = "\n".join(json.dumps(log) for log in audit_logs) + "\n"

        s3.put_object(
            Bucket=s3_bucket,
            Key=s3_key,
            Body=body,
            ContentType='application/x-ndjson'
        )

        # Update state file
        update_state_file(s3, s3_bucket, state_key, current_timestamp)

        return {
            'statusCode': 200,
            'body': json.dumps(f'Successfully exported {len(audit_logs)} audit logs to S3')
        }
    else:
        return {
            'statusCode': 200,
            'body': json.dumps('No new audit logs to export')
        }

def get_last_run_timestamp(s3, bucket, key):
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        state = json.loads(response['Body'].read().decode('utf-8'))
        return state.get('last_run_timestamp', '1970-01-01T00:00:00')
    except Exception:
        return '1970-01-01T00:00:00'

def update_state_file(s3, bucket, key, timestamp):
    state = {'last_run_timestamp': timestamp}
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(state),
        ContentType='application/json'
    )

def get_audit_logs(base_url, username, password, last_run_timestamp, page_size=1000, max_pages=1000):
    """
    Query the ServiceNow sys_audit table with proper pagination.
    Uses the sys_created_on field for timestamp filtering.
    """
    # Encode credentials
    auth_string = f"{username}:{password}"
    auth_bytes = auth_string.encode('ascii')
    auth_encoded = base64.b64encode(auth_bytes).decode('ascii')

    # Setup HTTP client
    http = urllib3.PoolManager()
    headers = {
        'Authorization': f'Basic {auth_encoded}',
        'Accept': 'application/json'
    }

    results = []
    offset = 0

    for page in range(max_pages):
        # Build query with pagination
        # Use sys_created_on (not created_on) for timestamp filtering
        query_params = (
            f"sysparm_query=sys_created_onAFTER{last_run_timestamp}"
            f"&sysparm_display_value=true"
            f"&sysparm_limit={page_size}"
            f"&sysparm_offset={offset}"
        )
        url = f"{base_url}/api/now/table/sys_audit?{query_params}"

        try:
            response = http.request('GET', url, headers=headers)
            if response.status == 200:
                data = json.loads(response.data.decode('utf-8'))
                chunk = data.get('result', [])
                results.extend(chunk)

                # Stop if we got fewer records than page_size (last page)
                if len(chunk) < page_size:
                    break

                # Move to next page
                offset += page_size
            else:
                print(f"Error querying ServiceNow API: {response.status} - {response.data.decode('utf-8')}")
                break
        except Exception as e:
            print(f"Exception querying ServiceNow API: {str(e)}")
            break

    return results
```

Go to Configuration > Environment variables > Edit > Add new environment variable.

Enter the following environment variables, replacing the example values with your own.

| Key | Example value |
| --- | --- |
| S3_BUCKET | servicenow-audit-logs |
| S3_PREFIX | audit-logs/ |
| STATE_KEY | audit-logs/state.json |
| API_BASE_URL | https://instance.service-now.com |
| API_USERNAME | <your-username> |
| API_PASSWORD | <your-password> |
| PAGE_SIZE | 1000 |
| MAX_PAGES | 1000 |

After the function is created, stay on its page (or open Lambda > Functions > servicenow-audit-collector).
Select the Configuration tab.
In the General configuration panel, click Edit.
Change Timeout to 5 minutes (300 seconds) and click Save.
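Before scheduling the function in the next step, you can optionally trigger it once and check the result. The following is a minimal sketch using boto3; it assumes the function name used in this guide and that your local AWS credentials are allowed to invoke Lambda.

```python
import json
import boto3

lambda_client = boto3.client('lambda')

# Synchronously invoke the collector once and print its response payload
response = lambda_client.invoke(
    FunctionName='servicenow-audit-collector',
    InvocationType='RequestResponse',
)
print(json.loads(response['Payload'].read()))
```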
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: Your Lambda function servicenow-audit-collector.
  - Name: servicenow-audit-collector-1h.
- Click Create schedule.
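If you'd rather create the schedule programmatically, the EventBridge Scheduler API can do the same thing. The sketch below assumes the names used in this guide; `<ACCOUNT_ID>` and the scheduler execution role ARN are placeholders for values from your own account, and that role must be allowed to invoke the target function.

```python
import boto3

scheduler = boto3.client('scheduler')

# Hourly schedule that invokes the collector Lambda function
scheduler.create_schedule(
    Name='servicenow-audit-collector-1h',
    ScheduleExpression='rate(1 hour)',
    FlexibleTimeWindow={'Mode': 'OFF'},
    Target={
        # Placeholders: your Lambda function ARN and a scheduler execution role
        'Arn': 'arn:aws:lambda:us-east-1:<ACCOUNT_ID>:function:servicenow-audit-collector',
        'RoleArn': 'arn:aws:iam::<ACCOUNT_ID>:role/<SCHEDULER_EXECUTION_ROLE>',
    },
)
```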
Configure a feed in Google SecOps to ingest ServiceNow Audit logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, ServiceNow Audit logs).
- Select Amazon S3 V2 as the Source type.
- Select ServiceNow Audit as the Log type.
- Click Next.
- Specify values for the following input parameters:
- S3 URI: s3://servicenow-audit-logs/audit-logs/
- Source deletion options: Select the deletion option according to your preference.
- Maximum File Age: Include files modified in the last number of days. Default is 180 days.
- Access Key ID: User access key with access to the S3 bucket.
- Secret Access Key: User secret key with access to the S3 bucket.
- Asset namespace: The asset namespace.
- Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
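After the next scheduled Lambda run, you can confirm that NDJSON files are accumulating under the prefix the feed reads from. A small sketch with boto3, assuming the example bucket and prefix from this guide:

```python
import boto3

s3 = boto3.client('s3')

# List the objects under the prefix the Google SecOps feed reads from
response = s3.list_objects_v2(Bucket='servicenow-audit-logs', Prefix='audit-logs/')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'], obj['LastModified'])
```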
Option B: Bindplane agent with syslog
This method uses a Bindplane agent to collect ServiceNow Audit logs and forward them to Google Security Operations. Since ServiceNow doesn't natively support syslog for audit logs, we'll use a script to query the ServiceNow REST API and forward the logs to the Bindplane agent via syslog.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance
- A Windows 2016 or later, or a Linux host with systemd
- If running behind a proxy, ensure firewall ports are open per the Bindplane agent requirements
- Privileged access to the ServiceNow management console or appliance
Get Google SecOps ingestion authentication file
- Sign in to the Google SecOps console.
- Go to SIEM Settings > Collection Agents.
- Download the Ingestion Authentication File. Save the file securely on the system where Bindplane will be installed.
Get Google SecOps customer ID
- Sign in to the Google SecOps console.
- Go to SIEM Settings > Profile.
- Copy and save the Customer ID from the Organization Details section.
Install the Bindplane agent
Install the Bindplane agent on your Windows or Linux operating system according to the following instructions.
Linux installation
- Open a terminal with root or sudo privileges.
Run the following command:
```bash
sudo sh -c "$(curl -fsSlL https://github.com/observiq/bindplane-agent/releases/latest/download/install_unix.sh)" install_unix.sh
```
Additional installation resources
- For additional installation options, consult this installation guide.
Configure the Bindplane agent to ingest Syslog and send to Google SecOps
Access the configuration file:
- Locate the config.yaml file. Typically, it's in the /etc/bindplane-agent/ directory on Linux or in the installation directory on Windows.
- Open the file using a text editor (for example, nano, vi, or Notepad).

Edit the config.yaml file as follows:

```yaml
receivers:
  udplog:
    # Replace the port and IP address as required
    listen_address: "0.0.0.0:514"

exporters:
  chronicle/chronicle_w_labels:
    compression: gzip
    # Adjust the path to the credentials file you downloaded in Step 1
    creds_file_path: '/path/to/ingestion-authentication-file.json'
    # Replace with your actual customer ID from Step 2
    customer_id: <YOUR_CUSTOMER_ID>
    # Replace with the appropriate regional endpoint
    endpoint: <CUSTOMER_REGION_ENDPOINT>
    # Add optional ingestion labels for better organization
    log_type: 'SERVICENOW_AUDIT'
    raw_log_field: body
    ingestion_labels:

service:
  pipelines:
    logs/source0__chronicle_w_labels-0:
      receivers:
        - udplog
      exporters:
        - chronicle/chronicle_w_labels
```
- Replace the port and IP address as required in your infrastructure.
- Replace <YOUR_CUSTOMER_ID> with the actual Customer ID.
- Replace <CUSTOMER_REGION_ENDPOINT> with the appropriate regional endpoint from the Regional Endpoints documentation.
- Update /path/to/ingestion-authentication-file.json to the path where the authentication file was saved in the Get Google SecOps ingestion authentication file section.
Restart the Bindplane agent to apply the changes
To restart the Bindplane agent in Linux, run the following command:
```bash
sudo systemctl restart bindplane-agent
```

To restart the Bindplane agent in Windows, you can either use the Services console or enter the following command:

```powershell
net stop BindPlaneAgent && net start BindPlaneAgent
```
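Before scheduling the collection script, you can optionally send a single test message to the agent's UDP listener and check that it appears in Google SecOps under the SERVICENOW_AUDIT log type. A minimal sketch, assuming the agent listens on 127.0.0.1:514 as configured above:

```python
import json
import socket

# Send one JSON test record to the Bindplane udplog receiver
test_record = json.dumps({'message': 'bindplane udplog connectivity test'})

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(test_record.encode(), ('127.0.0.1', 514))
sock.close()
```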
Create a script to forward ServiceNow Audit logs to syslog
Since ServiceNow doesn't natively support syslog for audit logs, we'll create a script that queries the ServiceNow REST API and forwards the logs to syslog. This script can be scheduled to run periodically.
Python Script Example (Linux)
Create a file named servicenow_audit_to_syslog.py with the following content:

```python
import urllib3
import json
import datetime
import base64
import socket
import time
import os

# ServiceNow API details
BASE_URL = 'https://instance.service-now.com'  # Replace with your ServiceNow instance URL
USERNAME = 'admin'      # Replace with your ServiceNow username
PASSWORD = 'password'   # Replace with your ServiceNow password

# Syslog details
SYSLOG_SERVER = '127.0.0.1'  # Replace with your Bindplane agent IP
SYSLOG_PORT = 514            # Replace with your Bindplane agent port

# State file to keep track of last run
STATE_FILE = '/tmp/servicenow_audit_last_run.txt'

# Pagination settings
PAGE_SIZE = 1000
MAX_PAGES = 1000

def get_last_run_timestamp():
    try:
        with open(STATE_FILE, 'r') as f:
            return f.read().strip()
    except Exception:
        return '1970-01-01T00:00:00'

def update_state_file(timestamp):
    with open(STATE_FILE, 'w') as f:
        f.write(timestamp)

def send_to_syslog(message):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(message.encode(), (SYSLOG_SERVER, SYSLOG_PORT))
    sock.close()

def get_audit_logs(last_run_timestamp):
    """
    Query the ServiceNow sys_audit table with proper pagination.
    Uses the sys_created_on field for timestamp filtering.
    """
    # Encode credentials
    auth_string = f"{USERNAME}:{PASSWORD}"
    auth_bytes = auth_string.encode('ascii')
    auth_encoded = base64.b64encode(auth_bytes).decode('ascii')

    # Setup HTTP client
    http = urllib3.PoolManager()
    headers = {
        'Authorization': f'Basic {auth_encoded}',
        'Accept': 'application/json'
    }

    results = []
    offset = 0

    for page in range(MAX_PAGES):
        # Build query with pagination
        # Use sys_created_on (not created_on) for timestamp filtering
        query_params = (
            f"sysparm_query=sys_created_onAFTER{last_run_timestamp}"
            f"&sysparm_display_value=true"
            f"&sysparm_limit={PAGE_SIZE}"
            f"&sysparm_offset={offset}"
        )
        url = f"{BASE_URL}/api/now/table/sys_audit?{query_params}"

        try:
            response = http.request('GET', url, headers=headers)
            if response.status == 200:
                data = json.loads(response.data.decode('utf-8'))
                chunk = data.get('result', [])
                results.extend(chunk)

                # Stop if we got fewer records than PAGE_SIZE (last page)
                if len(chunk) < PAGE_SIZE:
                    break

                # Move to next page
                offset += PAGE_SIZE
            else:
                print(f"Error querying ServiceNow API: {response.status} - {response.data.decode('utf-8')}")
                break
        except Exception as e:
            print(f"Exception querying ServiceNow API: {str(e)}")
            break

    return results

def main():
    # Get last run timestamp
    last_run_timestamp = get_last_run_timestamp()

    # Current timestamp for this run
    current_timestamp = datetime.datetime.now().isoformat()

    # Query ServiceNow API for audit logs
    audit_logs = get_audit_logs(last_run_timestamp)

    if audit_logs:
        # Send each log to syslog
        for log in audit_logs:
            # Format the log as JSON
            log_json = json.dumps(log)

            # Send to syslog
            send_to_syslog(log_json)

            # Sleep briefly to avoid flooding
            time.sleep(0.01)

        # Update state file
        update_state_file(current_timestamp)
        print(f"Successfully forwarded {len(audit_logs)} audit logs to syslog")
    else:
        print("No new audit logs to forward")

if __name__ == "__main__":
    main()
```
Set Up Scheduled Execution (Linux)
Make the script executable:
```bash
chmod +x servicenow_audit_to_syslog.py
```

Create a cron job to run the script every hour:

```bash
crontab -e
```

Add the following line:

```bash
0 * * * * /usr/bin/python3 /path/to/servicenow_audit_to_syslog.py >> /tmp/servicenow_audit_to_syslog.log 2>&1
```
PowerShell Script Example (Windows)
Create a file named ServiceNow-Audit-To-Syslog.ps1 with the following content:

```powershell
# ServiceNow API details
$BaseUrl = 'https://instance.service-now.com'  # Replace with your ServiceNow instance URL
$Username = 'admin'     # Replace with your ServiceNow username
$Password = 'password'  # Replace with your ServiceNow password

# Syslog details
$SyslogServer = '127.0.0.1'  # Replace with your Bindplane agent IP
$SyslogPort = 514            # Replace with your Bindplane agent port

# State file to keep track of last run
$StateFile = "$env:TEMP\ServiceNowAuditLastRun.txt"

# Pagination settings
$PageSize = 1000
$MaxPages = 1000

function Get-LastRunTimestamp {
    try {
        if (Test-Path $StateFile) {
            return Get-Content $StateFile
        } else {
            return '1970-01-01T00:00:00'
        }
    } catch {
        return '1970-01-01T00:00:00'
    }
}

function Update-StateFile {
    param (
        [string]$Timestamp
    )
    Set-Content -Path $StateFile -Value $Timestamp
}

function Send-ToSyslog {
    param (
        [string]$Message
    )
    $UdpClient = New-Object System.Net.Sockets.UdpClient
    $UdpClient.Connect($SyslogServer, $SyslogPort)
    $Encoding = [System.Text.Encoding]::ASCII
    $Bytes = $Encoding.GetBytes($Message)
    $UdpClient.Send($Bytes, $Bytes.Length)
    $UdpClient.Close()
}

function Get-AuditLogs {
    param (
        [string]$LastRunTimestamp
    )
    # Create auth header
    $Auth = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes("${Username}:${Password}"))
    $Headers = @{
        Authorization = "Basic ${Auth}"
        Accept        = 'application/json'
    }

    $Results = @()
    $Offset = 0

    for ($page = 0; $page -lt $MaxPages; $page++) {
        # Build query with pagination
        # Use sys_created_on (not created_on) for timestamp filtering
        $QueryParams = "sysparm_query=sys_created_onAFTER${LastRunTimestamp}&sysparm_display_value=true&sysparm_limit=${PageSize}&sysparm_offset=${Offset}"
        $Url = "${BaseUrl}/api/now/table/sys_audit?${QueryParams}"

        try {
            $Response = Invoke-RestMethod -Uri $Url -Headers $Headers -Method Get
            $Chunk = $Response.result
            $Results += $Chunk

            # Stop if we got fewer records than PageSize (last page)
            if ($Chunk.Count -lt $PageSize) {
                break
            }

            # Move to next page
            $Offset += $PageSize
        } catch {
            Write-Error "Error querying ServiceNow API: $_"
            break
        }
    }

    return $Results
}

# Main execution
$LastRunTimestamp = Get-LastRunTimestamp
$CurrentTimestamp = (Get-Date).ToString('yyyy-MM-ddTHH:mm:ss')

$AuditLogs = Get-AuditLogs -LastRunTimestamp $LastRunTimestamp

if ($AuditLogs -and $AuditLogs.Count -gt 0) {
    # Send each log to syslog
    foreach ($Log in $AuditLogs) {
        # Format the log as JSON
        $LogJson = $Log | ConvertTo-Json -Compress

        # Send to syslog
        Send-ToSyslog -Message $LogJson

        # Sleep briefly to avoid flooding
        Start-Sleep -Milliseconds 10
    }

    # Update state file
    Update-StateFile -Timestamp $CurrentTimestamp
    Write-Output "Successfully forwarded $($AuditLogs.Count) audit logs to syslog"
} else {
    Write-Output "No new audit logs to forward"
}
```
Set Up Scheduled Execution (Windows)
- Open Task Scheduler.
- Click Create Task.
- Provide the following configuration:
- Name: ServiceNowAuditToSyslog
- Security options: Run whether user is logged on or not
- Go to the Triggers tab.
- Click New and set it to run hourly.
- Go to the Actions tab.
- Click New and set:
- Action: Start a program
- Program/script: powershell.exe
- Arguments: -ExecutionPolicy Bypass -File "C:\path\to\ServiceNow-Audit-To-Syslog.ps1"
- Click OK to save the task.
Need more help? Get answers from Community members and Google SecOps professionals.