Collect Citrix Analytics logs
This document explains how to ingest Citrix Analytics logs to Google Security Operations using Google Cloud Storage. Citrix Analytics for Performance (Cloud Software Group) provides aggregated performance data from Citrix Virtual Apps and Desktops environments, enabling you to fetch session, machine, and user data through the OData API. Citrix Analytics for Security provides risk insights and data source events that can be exported through Kafka-based SIEM integration.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance
- A GCP project with Cloud Storage API enabled
- Permissions to create and manage GCS buckets
- Permissions to create Cloud Run services, Pub/Sub topics, and Cloud Scheduler jobs
- Privileged access to a Citrix Analytics for Performance tenant
- Citrix Cloud API credentials (Client ID, Client Secret, Customer ID)
Collect Citrix Analytics API credentials
Get Citrix Cloud API credentials
- Sign in to the Citrix Cloud Console.
- Click the menu icon in the upper left corner of the screen.
- Select Identity and Access Management from the menu.
- Select the API Access tab.
- Click Create Client.
- Copy and save the following details in a secure location:
- Client ID
- Client Secret
- Customer ID (located in the Citrix Cloud URL or the IAM page)
Determine API base URL
The OData API base URL depends on your Citrix Cloud region:
| Region | API Base URL |
|---|---|
| United States | https://api.cloud.com/casodata |
| European Union | https://api.eu.cloud.com/casodata |
| Asia Pacific South | https://api.ap-s.cloud.com/casodata |
Verify permissions
To verify the account has the required permissions:
- Sign in to Citrix Cloud.
- Go to Identity and Access Management > Administrators.
- Verify that the account used to create API credentials has Full access or Custom access with Citrix Analytics for Performance permissions enabled.
- If you cannot see the required permissions, contact your Citrix Cloud administrator to grant access.
Test API access
Test your credentials before proceeding with the integration:
CITRIX_CUSTOMER_ID="your-customer-id" CITRIX_CLIENT_ID="your-client-id" CITRIX_CLIENT_SECRET="your-client-secret" # Get bearer token TOKEN=$(curl -s -X POST \ "https://api.cloud.com/cctrustoauth2/${CITRIX_CUSTOMER_ID}/tokens/clients" \ -H "Content-Type: application/x-www-form-urlencoded" \ -d "grant_type=client_credentials&client_id=${CITRIX_CLIENT_ID}&client_secret=${CITRIX_CLIENT_SECRET}" \ | python3 -c "import sys,json; print(json.load(sys.stdin)['access_token'])") # Test OData API access curl -v -H "Authorization: CwsAuth bearer=${TOKEN}" \ -H "Citrix-CustomerId: ${CITRIX_CUSTOMER_ID}" \ -H "Accept: application/json" \ "https://api.cloud.com/casodata/sessions?\$top=1"
Create Google Cloud Storage bucket
- Go to the Google Cloud Console.
- Select your project or create a new one.
- In the navigation menu, go to Cloud Storage > Buckets.
- Click Create bucket.
Provide the following configuration details:
| Setting | Value |
|---|---|
| Name your bucket | Enter a globally unique name (for example, `citrix-analytics-logs`) |
| Location type | Choose based on your needs (Region, Dual-region, Multi-region) |
| Location | Select the location (for example, `us-central1`) |
| Storage class | Standard (recommended for frequently accessed logs) |
| Access control | Uniform (recommended) |
| Protection tools | Optional: Enable object versioning or retention policy |

Click Create.
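If you prefer the command line, the same bucket can be created with the gcloud CLI. This is a minimal sketch assuming the example bucket name and region shown above; `PROJECT_ID` is a placeholder for your project:

```bash
# Create the bucket with uniform bucket-level access
# (bucket name, region, and PROJECT_ID are the examples/placeholders from above)
gcloud storage buckets create gs://citrix-analytics-logs \
    --project=PROJECT_ID \
    --location=us-central1 \
    --default-storage-class=STANDARD \
    --uniform-bucket-level-access
```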
Create service account for Cloud Run function
The Cloud Run function needs a service account with permissions to write to the GCS bucket and to be invoked by Pub/Sub.
Create service account
- In the GCP Console, go to IAM & Admin > Service Accounts.
- Click Create Service Account.
- Provide the following configuration details:
  - Service account name: Enter `citrix-analytics-collector-sa`
  - Service account description: Enter `Service account for Cloud Run function to collect Citrix Analytics logs`
- Click Create and Continue.
- In the Grant this service account access to project section, add the following roles:
- Click Select a role.
- Search for and select Storage Object Admin.
- Click + Add another role.
- Search for and select Cloud Run Invoker.
- Click + Add another role.
- Search for and select Cloud Functions Invoker.
- Click Continue.
- Click Done.
These roles are required for:
- Storage Object Admin: Write logs to GCS bucket and manage state files
- Cloud Run Invoker: Allow Pub/Sub to invoke the function
- Cloud Functions Invoker: Allow function invocation
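As an alternative to the console steps, the following gcloud sketch creates the service account and grants the same project-level roles; it assumes the service account name used above and a `PROJECT_ID` placeholder:

```bash
# Create the collector service account
gcloud iam service-accounts create citrix-analytics-collector-sa \
    --project=PROJECT_ID \
    --display-name="Citrix Analytics collector"

# Grant the three project-level roles listed above
for ROLE in roles/storage.objectAdmin roles/run.invoker roles/cloudfunctions.invoker; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="serviceAccount:citrix-analytics-collector-sa@PROJECT_ID.iam.gserviceaccount.com" \
      --role="${ROLE}"
done
```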
Grant IAM permissions on GCS bucket
Grant the service account write permissions on the GCS bucket:
- Go to Cloud Storage > Buckets.
- Click your bucket name.
- Go to the Permissions tab.
- Click Grant access.
- Provide the following configuration details:
  - Add principals: Enter the service account email (for example, `citrix-analytics-collector-sa@PROJECT_ID.iam.gserviceaccount.com`)
  - Assign roles: Select Storage Object Admin
- Click Save.
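The equivalent bucket-scoped grant from the command line, assuming the example bucket name and the `PROJECT_ID` placeholder:

```bash
# Grant the collector service account object write access on the bucket
gcloud storage buckets add-iam-policy-binding gs://citrix-analytics-logs \
    --member="serviceAccount:citrix-analytics-collector-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"
```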
Create Pub/Sub topic
Create a Pub/Sub topic that Cloud Scheduler will publish to and the Cloud Run function will subscribe to.
- In the GCP Console, go to Pub/Sub > Topics.
- Click Create topic.
- Provide the following configuration details:
  - Topic ID: Enter `citrix-analytics-trigger`
  - Leave other settings as default
- Click Create.
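Or, from the command line (a one-line sketch assuming the `PROJECT_ID` placeholder):

```bash
gcloud pubsub topics create citrix-analytics-trigger --project=PROJECT_ID
```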
Create Cloud Run function to collect logs
The Cloud Run function will be triggered by Pub/Sub messages from Cloud Scheduler to fetch logs from the Citrix Analytics OData API and write them to GCS.
- In the GCP Console, go to Cloud Run.
- Click Create service.
- Select Function (use an inline editor to create a function).
In the Configure section, provide the following configuration details:
| Setting | Value |
|---|---|
| Service name | `citrix-analytics-collector` |
| Region | Select a region matching your GCS bucket (for example, `us-central1`) |
| Runtime | Select Python 3.12 or later |

In the Trigger (optional) section:
- Click + Add trigger.
- Select Cloud Pub/Sub.
- In Select a Cloud Pub/Sub topic, choose the topic `citrix-analytics-trigger`.
- Click Save.
In the Authentication section:
- Select Require authentication.
- Check Identity and Access Management (IAM).
Scroll down and expand Containers, Networking, Security.
Go to the Security tab:
- Service account: Select the service account `citrix-analytics-collector-sa`.
Go to the Containers tab:
- Click Variables & Secrets.
- Click + Add variable for each environment variable:
| Variable Name | Example Value | Description |
|---|---|---|
| GCS_BUCKET | citrix-analytics-logs | GCS bucket name |
| GCS_PREFIX | citrix_analytics | Prefix for log files |
| STATE_KEY | citrix_analytics/state.json | State file path |
| CITRIX_CLIENT_ID | your-client-id | Citrix Cloud Client ID |
| CITRIX_CLIENT_SECRET | your-client-secret | Citrix Cloud Client Secret |
| CITRIX_CUSTOMER_ID | your-customer-id | Citrix Cloud Customer ID |
| API_BASE | https://api.cloud.com/casodata | OData API base URL |
| ENTITIES | sessions,machines,users | Entity types to collect |
| TOP_N | 1000 | Records per page |
| LOOKBACK_MINUTES | 75 | Initial lookback period |

Scroll down in the Variables & Secrets section to Requests:
- Request timeout: Enter `600` seconds (10 minutes)
Go to the Settings tab:
- In the Resources section:
  - Memory: Select 512 MiB or higher
  - CPU: Select 1
In the Revision scaling section:
- Minimum number of instances: Enter `0`
- Maximum number of instances: Enter `100` (or adjust based on expected load)
Click Create.
Wait for the service to be created (1-2 minutes).
After the service is created, the inline code editor will open automatically.
Add function code
- Enter main in the Entry point field.
In the inline code editor, create two files:
main.py:
```python
import functions_framework
from google.cloud import storage
import json
import os
import urllib3
from datetime import datetime, timedelta, timezone
import urllib.parse
import time

# Initialize HTTP client with timeouts
http = urllib3.PoolManager(
    timeout=urllib3.Timeout(connect=5.0, read=30.0),
    retries=False,
)

# Initialize Storage client
storage_client = storage.Client()

CITRIX_TOKEN_URL_TMPL = "https://api.cloud.com/cctrustoauth2/{customerid}/tokens/clients"
DEFAULT_API_BASE = "https://api.cloud.com/casodata"


@functions_framework.cloud_event
def main(cloud_event):
    """
    Cloud Run function triggered by Pub/Sub to fetch logs from
    Citrix Analytics OData API and write to GCS.

    Args:
        cloud_event: CloudEvent object containing Pub/Sub message
    """
    # Get environment variables
    bucket_name = os.environ.get('GCS_BUCKET')
    prefix = os.environ.get('GCS_PREFIX', 'citrix_analytics').strip('/')
    state_key = os.environ.get('STATE_KEY') or f"{prefix}/state.json"
    customer_id = os.environ.get('CITRIX_CUSTOMER_ID')
    client_id = os.environ.get('CITRIX_CLIENT_ID')
    client_secret = os.environ.get('CITRIX_CLIENT_SECRET')
    api_base = os.environ.get('API_BASE', DEFAULT_API_BASE)
    entities = [e.strip() for e in os.environ.get('ENTITIES', 'sessions,machines,users').split(',') if e.strip()]
    top_n = int(os.environ.get('TOP_N', '1000'))
    lookback_minutes = int(os.environ.get('LOOKBACK_MINUTES', '75'))

    if not all([bucket_name, customer_id, client_id, client_secret]):
        print('Error: Missing required environment variables')
        return

    try:
        # Get GCS bucket
        bucket = storage_client.bucket(bucket_name)

        # Determine target hour to collect.
        # Timestamps are handled as UTC-naive so that isoformat() + 'Z'
        # produces a clean value that round-trips through the state file.
        now = datetime.now(timezone.utc)
        fallback_target = (now - timedelta(minutes=lookback_minutes)).replace(
            minute=0, second=0, microsecond=0, tzinfo=None)

        # Load state (last processed timestamp)
        state = load_state(bucket, state_key)
        last_processed_str = state.get('last_hour_utc')
        if last_processed_str:
            last_processed = datetime.fromisoformat(last_processed_str.replace('Z', '+00:00')).replace(tzinfo=None)
            target_hour = last_processed + timedelta(hours=1)
        else:
            target_hour = fallback_target

        print(f'Processing logs for hour: {target_hour.isoformat()}Z')

        # Get authentication token
        token = get_citrix_token(customer_id, client_id, client_secret)
        headers = {
            'Authorization': f'CwsAuth bearer={token}',
            'Citrix-CustomerId': customer_id,
            'Accept': 'application/json',
            'Content-Type': 'application/json',
        }

        total_records = 0

        # Process each entity type
        for entity in entities:
            records = []
            for row in fetch_odata_entity(entity, target_hour, top_n, headers, api_base):
                enriched_record = {
                    'citrix_entity': entity,
                    'citrix_hour_utc': target_hour.isoformat() + 'Z',
                    'collection_timestamp': datetime.now(timezone.utc).isoformat() + 'Z',
                    'raw': row
                }
                records.append(enriched_record)

                # Write in batches to avoid memory issues
                if len(records) >= 1000:
                    blob_name = f"{prefix}/{entity}/year={target_hour.year:04d}/month={target_hour.month:02d}/day={target_hour.day:02d}/hour={target_hour.hour:02d}/part-{datetime.now(timezone.utc).strftime('%Y%m%d%H%M%S%f')}.ndjson"
                    write_ndjson_to_gcs(bucket, blob_name, records)
                    total_records += len(records)
                    records = []

            # Write remaining records
            if records:
                blob_name = f"{prefix}/{entity}/year={target_hour.year:04d}/month={target_hour.month:02d}/day={target_hour.day:02d}/hour={target_hour.hour:02d}/part-{datetime.now(timezone.utc).strftime('%Y%m%d%H%M%S%f')}.ndjson"
                write_ndjson_to_gcs(bucket, blob_name, records)
                total_records += len(records)

        # Update state file
        save_state(bucket, state_key, {'last_hour_utc': target_hour.isoformat() + 'Z'})

        print(f'Successfully processed {total_records} records for hour {target_hour.isoformat()}Z')

    except Exception as e:
        print(f'Error processing logs: {str(e)}')
        raise


def get_citrix_token(customer_id, client_id, client_secret):
    """Get Citrix Cloud authentication token."""
    url = CITRIX_TOKEN_URL_TMPL.format(customerid=customer_id)
    payload = {
        'grant_type': 'client_credentials',
        'client_id': client_id,
        'client_secret': client_secret,
    }
    data = urllib.parse.urlencode(payload).encode('utf-8')
    response = http.request(
        'POST',
        url,
        body=data,
        headers={
            'Accept': 'application/json',
            'Content-Type': 'application/x-www-form-urlencoded',
        }
    )
    if response.status != 200:
        print(f'Token request failed with status {response.status}')
        print(f'Response: {response.data.decode("utf-8")}')
        raise Exception(f'Failed to get Citrix token: HTTP {response.status}')
    token_response = json.loads(response.data.decode('utf-8'))
    return token_response['access_token']


def fetch_odata_entity(entity, when_utc, top, headers, api_base):
    """Fetch data from Citrix Analytics OData API with pagination and rate limiting."""
    year = when_utc.year
    month = when_utc.month
    day = when_utc.day
    hour = when_utc.hour
    base_url = f"{api_base.rstrip('/')}/{entity}?year={year:04d}&month={month:02d}&day={day:02d}&hour={hour:02d}"
    skip = 0
    backoff = 1.0
    while True:
        url = f"{base_url}&$top={top}&$skip={skip}"
        response = http.request('GET', url, headers=headers)

        # Handle rate limiting with exponential backoff
        if response.status == 429:
            retry_after = int(response.headers.get('Retry-After', str(int(backoff))))
            print(f'Rate limited (429). Retrying after {retry_after}s...')
            time.sleep(retry_after)
            backoff = min(backoff * 2, 30.0)
            continue

        backoff = 1.0

        if response.status != 200:
            print(f'HTTP Error: {response.status}')
            response_text = response.data.decode('utf-8')
            print(f'Response body: {response_text}')
            return

        data = json.loads(response.data.decode('utf-8'))
        items = data.get('value', [])
        if not items:
            break
        for item in items:
            yield item
        if len(items) < top:
            break
        skip += top


def load_state(bucket, key):
    """Load state from GCS."""
    try:
        blob = bucket.blob(key)
        if blob.exists():
            state_data = blob.download_as_text()
            return json.loads(state_data)
    except Exception as e:
        print(f'Warning: Could not load state: {str(e)}')
    return {}


def save_state(bucket, key, state):
    """Save state to GCS."""
    try:
        blob = bucket.blob(key)
        blob.upload_from_string(
            json.dumps(state, separators=(',', ':')),
            content_type='application/json'
        )
    except Exception as e:
        print(f'Warning: Could not save state: {str(e)}')


def write_ndjson_to_gcs(bucket, key, records):
    """Write records as NDJSON to GCS."""
    body_lines = []
    for record in records:
        json_line = json.dumps(record, separators=(',', ':'), ensure_ascii=False)
        body_lines.append(json_line)
    body = ('\n'.join(body_lines) + '\n').encode('utf-8')
    blob = bucket.blob(key)
    blob.upload_from_string(body, content_type='application/x-ndjson')
```

requirements.txt:

```
functions-framework==3.*
google-cloud-storage==2.*
urllib3>=2.0.0
```
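Optionally, before deploying you can smoke-test the function locally with the Functions Framework. This is a sketch, not part of the console flow: it assumes the required environment variables are exported, Application Default Credentials are configured, and the port and CloudEvent headers shown are illustrative.

```bash
# Install dependencies and start the function locally
pip install -r requirements.txt
export GCS_BUCKET=citrix-analytics-logs CITRIX_CUSTOMER_ID=your-customer-id \
       CITRIX_CLIENT_ID=your-client-id CITRIX_CLIENT_SECRET=your-client-secret
functions-framework --target=main --signature-type=cloudevent --port=8080

# In another shell, send a minimal Pub/Sub-style CloudEvent.
# The function ignores the message payload, so an empty data field is enough.
curl -s localhost:8080 \
  -H "Content-Type: application/json" \
  -H "ce-id: local-test-1" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: google.cloud.pubsub.topic.v1.messagePublished" \
  -H "ce-source: //pubsub.googleapis.com/projects/PROJECT_ID/topics/citrix-analytics-trigger" \
  -d '{"message": {"data": ""}}'
```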
Click Deploy to save and deploy the function.
Wait for deployment to complete (2-3 minutes).
Create Cloud Scheduler job
Cloud Scheduler will publish messages to the Pub/Sub topic at regular intervals, triggering the Cloud Run function.
- In the GCP Console, go to Cloud Scheduler.
- Click Create Job.
Provide the following configuration details:
| Setting | Value |
|---|---|
| Name | `citrix-analytics-collector-hourly` |
| Region | Select the same region as the Cloud Run function |
| Frequency | `0 * * * *` (every hour, on the hour) |
| Timezone | Select a timezone (UTC recommended) |
| Target type | Pub/Sub |
| Topic | Select the topic `citrix-analytics-trigger` |
| Message body | `{}` (an empty JSON object) |

Click Create.
Schedule frequency options
Choose frequency based on log volume and latency requirements:
| Frequency | Cron Expression | Use Case |
|---|---|---|
| Every hour | `0 * * * *` | Standard (recommended) |
| Every 2 hours | `0 */2 * * *` | Lower volume |
| Every 6 hours | `0 */6 * * *` | Low volume, batch processing |
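The job can also be created with the gcloud CLI; a sketch assuming the names and hourly schedule above (`PROJECT_ID` and the location are placeholders):

```bash
gcloud scheduler jobs create pubsub citrix-analytics-collector-hourly \
    --project=PROJECT_ID \
    --location=us-central1 \
    --schedule="0 * * * *" \
    --time-zone="Etc/UTC" \
    --topic=citrix-analytics-trigger \
    --message-body="{}"
```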
Test the integration
- In the Cloud Scheduler console, find your job.
- Click Force run to trigger the job manually.
- Wait a few seconds.
- Go to Cloud Run > Services.
- Click the function name `citrix-analytics-collector`.
- Click the Logs tab.
Verify the function executed successfully. Look for:

```
Processing logs for hour: YYYY-MM-DDTHH:00:00Z
Successfully processed X records for hour YYYY-MM-DDTHH:00:00Z
```

- Go to Cloud Storage > Buckets.
- Click your bucket name.
- Navigate to the prefix folder `citrix_analytics/`.
- Verify that new `.ndjson` files were created with the current timestamp.
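You can also drive and verify a test run from the command line; a sketch assuming the example topic, bucket, and prefix used earlier:

```bash
# Alternative to "Force run": publish an empty message to the trigger topic
gcloud pubsub topics publish citrix-analytics-trigger --message="{}"

# List the output objects written by the function
gcloud storage ls -r "gs://citrix-analytics-logs/citrix_analytics/**"

# Inspect the saved collection state
gcloud storage cat gs://citrix-analytics-logs/citrix_analytics/state.json
```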
If you see errors in the logs:
- HTTP 401: Check API credentials in environment variables
- HTTP 403: Verify account has required permissions in Citrix Cloud
- HTTP 429: Rate limiting - function will automatically retry with backoff
- Missing environment variables: Check all required variables are set
Configure a feed in Google SecOps to ingest Citrix Analytics logs
- Go to SIEM Settings > Feeds.
- Click Add New Feed.
- Click Configure a single feed.
- In the Feed name field, enter a name for the feed (for example, `Citrix Analytics logs`).
- Select Google Cloud Storage V2 as the Source type.
- Select Citrix Analytics as the Log type.
- Click Get Service Account. A unique service account email is displayed (for example, `chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com`). Copy this email address; you will grant it access to your bucket in a later step.
- Click Next.
Specify values for the following input parameters:
- Storage bucket URL: Enter the GCS bucket URI with the prefix path: `gs://citrix-analytics-logs/citrix_analytics/`
  - Replace `citrix-analytics-logs` with your GCS bucket name.
  - Replace `citrix_analytics` with the optional prefix/folder path where logs are stored (leave empty for root).
- Source deletion option: Select the deletion option according to your preference:
  - Never: Never deletes any files after transfers (recommended for testing).
  - Delete transferred files: Deletes files after successful transfer.
  - Delete transferred files and empty directories: Deletes files and empty directories after successful transfer.
- Maximum File Age: Include files modified within the last number of days (default is 180 days).
- Asset namespace: The asset namespace.
- Ingestion labels: The label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
Grant IAM permissions to the Google SecOps service account
The Google SecOps service account needs Storage Object Viewer role on your GCS bucket.
- Go to Cloud Storage > Buckets.
- Click your bucket name.
- Go to the Permissions tab.
- Click Grant access.
- Provide the following configuration details:
- Add principals: Paste the Google SecOps service account email
- Assign roles: Select Storage Object Viewer
- Click Save.
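The same grant from the command line, assuming the example bucket name and the service account email shown during feed setup:

```bash
# Grant the Google SecOps feed service account read access to the bucket
gcloud storage buckets add-iam-policy-binding gs://citrix-analytics-logs \
    --member="serviceAccount:chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```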
UDM mapping table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| occurrence_event_type | extensions.auth.type | Mapped: Session.Logon → AUTHTYPE_UNSPECIFIED, Session.End → AUTHTYPE_UNSPECIFIED |
| server_name | intermediary.asset.hostname | Directly mapped |
| server_name | intermediary.hostname | Directly mapped |
| event_type | metadata.description | Directly mapped |
| timestamp | metadata.event_timestamp | Parsed as ISO 8601 |
| udm_event_type | metadata.event_type | Mapped: "USER_LOGIN", "USER_LOGOUT" → GENERIC_EVENT |
| tenant_id | metadata.product_deployment_id | Directly mapped |
| occurrence_event_type | metadata.product_event_type | Directly mapped |
| event_id | metadata.product_log_id | Directly mapped |
| product | metadata.product_name | Directly mapped |
| product_version | metadata.product_version | Directly mapped |
| ui_link | metadata.url_back_to_product | Directly mapped |
| session_key | network.session_id | Directly mapped |
| domain | principal.administrative_domain | Directly mapped |
| device_id | principal.asset.hostname | Directly mapped |
| client_ip | principal.asset.ip | Merged |
| vulnerability | principal.asset.vulnerabilities | Merged |
| device_id | principal.hostname | Directly mapped |
| client_ip | principal.ip | Merged |
| os_name | principal.platform | Mapped values (6 total; for example, (?i)windows → WINDOWS, ...) |
| os_extra_info | principal.platform_patch_level | Directly mapped |
| os_version | principal.platform_version | Directly mapped |
| entity_id | principal.user.email_addresses | Mapped: ^.+@.+$ → entity_id |
| entity_type | principal.user.email_addresses | Mapped: user → entity_id |
| session_user_name | principal.user.user_display_name | Directly mapped |
| entity_id | principal.user.userid | Directly mapped |
| session_user_name | principal.user.userid | Directly mapped |
| alert_message | security_result.action_details | Directly mapped |
| analytic | security_result.analytics_metadata | Merged |
| category | security_result.category | Merged |
| indicator_category | security_result.category | Mapped: Data exfiltration → category |
| indicator_name | security_result.description | Directly mapped |
| label | security_result.detection_fields | Merged |
| label | security_result.outcomes | Merged |
| severity | security_result.severity | Directly mapped |
| app_name | target.application | Directly mapped |
| app_name | target.process.file.names | Merged |
| printer_name | target.resource.name | Directly mapped |
| entity_id | target.user.email_addresses | Mapped: ^.+@.+$ → entity_id |
| entity_type | target.user.email_addresses | Mapped: user → entity_id |
| session_user_name | target.user.user_display_name | Directly mapped |
| entity_id | target.user.userid | Directly mapped |
| session_user_name | target.user.userid | Directly mapped |
| N/A | extensions.auth.type | Constant: AUTHTYPE_UNSPECIFIED |
| N/A | metadata.event_type | Constant: GENERIC_EVENT |
| N/A | metadata.vendor_name | Constant: CITRIX_ANALYTICS |
| N/A | principal.platform | Constant: WINDOWS |
| N/A | security_result.confidence_score | Constant: risk_probability |
| N/A | security_result.risk_score | Constant: cur_riskscore |
Need more help? Get answers from Community members and Google SecOps professionals.