Collect Slack Audit logs
This document explains how to ingest Slack Audit logs to Google Security Operations using either Google Cloud Run Functions or Amazon S3 with AWS Lambda.
Slack Audit logs provide a detailed record of security-related events across your Slack Enterprise Grid organization. The Audit Logs API allows you to monitor user actions such as logins, file downloads, app installations, and administrative changes for compliance and security monitoring purposes.
Before you begin
Ensure that you have the following prerequisites:
- A Google SecOps instance
- A Slack Enterprise Grid plan with Organization Owner or Admin access
- Privileged access to either:
  - Google Cloud (for Option 1: Cloud Run Functions and Cloud Scheduler), or
  - AWS (for Option 2: S3, IAM, Lambda, EventBridge)
Collect Slack credentials
Create the Slack app
The Slack Audit Logs API requires a User OAuth Token with the auditlogs:read scope. This token must be obtained by installing an app at the Enterprise Grid organization level, not at the workspace level.
- Sign in to the Slack Admin Console with an Enterprise Grid Organization Owner or Admin account.
- Go to the Slack API Apps page.
- Click Create New App.
- Select From scratch.
- Provide the following configuration details:
  - App Name: Enter a descriptive name (for example, `Google SecOps Audit Integration`).
  - Pick a workspace to develop your app in: Select your Development Slack Workspace (any workspace in the organization).
- Click Create App.
Configure OAuth scopes
- In your app's settings page, select OAuth & Permissions from the left navigation.
- Scroll down to the Scopes section.
- Under User Token Scopes (NOT Bot Token Scopes), click Add an OAuth Scope.
- Add the following scope:
  - `auditlogs:read` — This scope enables access to the Audit Logs API for the Enterprise Grid organization.
- Under Bot Token Scopes, click Add an OAuth Scope.
- Add the following scope:
  - `users:read` — This scope is required to enable organization-level app installation.
Configure redirect URL
- In your app's settings page, select OAuth & Permissions from the left navigation.
- Go to the Redirect URLs section.
- Click Add New Redirect URL.
- Enter the following URL: `https://slack.com/oauth/v2/authorize`
- Click Add.
- Click Save URLs.
Activate public distribution
To install an app at the Enterprise organization level (not just a workspace), you must activate public distribution.
Why is public distribution required?
- Slack requires apps with organization-wide permissions to have public distribution enabled.
- This unlocks the ability to install at the organization level instead of individual workspaces.
- Without it, you can only install the app to a single workspace within your Enterprise Grid.
- Enabling public distribution removes the internal restriction on installation scope; it does not submit your app to the Slack App Directory.
To activate public distribution, follow these steps:
- In your app's settings page, select Manage Distribution from the left navigation.
- Under Share Your App with Other Workspaces, verify that all four sections have green checkmarks:
- Remove Hard Coded Information
- Activate Public Distribution
- Set a Redirect URL
- Add an OAuth Scope
- If any section shows a red X, expand it and complete the required action.
- Check the checkbox next to I've reviewed and removed any hard-coded information.
- Click Activate Public Distribution.
Install app to Enterprise organization
- In your app's settings page, select OAuth & Permissions from the left navigation.
- Under Share Your App with Your Workspace, copy the Sharable URL.
- Enter the URL into your browser.
- Critical: In the installation screen, check the dropdown menu in the upper right corner.
- Ensure you are installing to the Enterprise organization, NOT an individual workspace.
- If you see a workspace name in the dropdown, click it and select your Enterprise organization name instead.
- Review the requested permissions and click Allow.
Retrieve credentials
After authorization completes, you will be redirected to the OAuth & Permissions page.
- Under OAuth Tokens for Your Workspace, locate the User OAuth Token. The token starts with `xoxp-`.
- Click Copy and save this token securely.
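Token prefixes are easy to mix up: user OAuth tokens begin with `xoxp-`, while bot tokens begin with `xoxb-`, and only the user token carries the `auditlogs:read` scope. A minimal shape check (a hypothetical helper, not part of any Slack SDK) can catch a pasted bot token before you wire it into automation:

```python
def looks_like_user_token(token: str) -> bool:
    """Cheap shape check: Slack user OAuth tokens start with 'xoxp-'.

    This does not validate the token; only a real call to the
    Audit Logs API can confirm it works and has the right scope.
    """
    return token.startswith("xoxp-") and len(token) > len("xoxp-")

print(looks_like_user_token("xoxp-1234-abcd"))  # user token shape
print(looks_like_user_token("xoxb-1234-abcd"))  # bot token: rejected
```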
Note your Organization ID:
- Go to the Slack Admin Console.
- Navigate to Settings & Permissions > Organization settings.
- Copy the Organization ID.
Verify permissions
To verify your app has the required permissions:
- Go to the Slack API Apps page.
- Select your app.
- Select OAuth & Permissions from the left navigation.
- Under OAuth Tokens for Your Workspace, verify that a User OAuth Token is displayed (starts with `xoxp-`).
- Under Scopes, verify that `auditlogs:read` appears under User Token Scopes.
Test API access
Test your token before proceeding with the integration:
```shell
SLACK_TOKEN="xoxp-your-token-here"
curl -H "Authorization: Bearer $SLACK_TOKEN" \
  "https://api.slack.com/audit/v1/logs?limit=1"
```

A successful response contains an `entries` array with audit log events. If you receive an `invalid_auth` or `missing_scope` error, verify your token and scopes.
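For reference, the sketch below shows the general shape of a successful response and how to pull out the `entries` array and the pagination cursor. The sample payload is illustrative and abbreviated, not a literal API response; real entries carry more fields, and the exact set varies by `action` type.

```python
import json

# Illustrative (abbreviated) Audit Logs API response.
sample_response = json.loads("""
{
  "entries": [
    {
      "id": "0123a45b-6c7d-8900-e12f-3456789abcde",
      "date_create": 1521214343,
      "action": "user_login",
      "actor": {"type": "user", "user": {"id": "W123AB456", "name": "Charlie Parker"}},
      "context": {"ip_address": "198.51.100.1", "ua": "Mozilla/5.0"}
    }
  ],
  "response_metadata": {"next_cursor": "dXNlcjpVMEc5V0ZYTlo="}
}
""")

# The same two fields the ingestion code cares about: events and the cursor.
entries = sample_response.get("entries") or []
next_cursor = (sample_response.get("response_metadata") or {}).get("next_cursor")

print(f"{len(entries)} entries, first action: {entries[0]['action']}")
print(f"next_cursor present: {bool(next_cursor)}")
```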
Option 1: Configure Slack Audit logs export using Google Cloud Run Functions
This option uses Google Cloud Run Functions and Cloud Scheduler to collect Slack Audit logs and ingest them directly into Google SecOps using the Chronicle ingestion scripts.
Before you begin (Option 1)
Ensure that you have the following additional prerequisites:
- A GCP project with the following APIs enabled:
- Cloud Functions API
- Cloud Scheduler API
- Secret Manager API
- Permissions to create Cloud Functions, Cloud Scheduler jobs, and secrets in Secret Manager
- A Google SecOps service account JSON file
Get the Google SecOps service account
- Sign in to the Google SecOps console.
- Go to SIEM Settings > Collection Agents.
- Download the Ingestion Authentication File.
- Save the JSON file securely. You will upload its contents as a secret in Google Secret Manager.
Prepare the function code
Download the deployment files from the Chronicle ingestion-scripts GitHub repository:
- From the `slack` folder, download:
  - `.env.yml`
  - `main.py`
  - `requirements.txt`
- From the root of the repository, download the entire `common` directory with all its files.

Your deployment directory should have the following structure:

```
deployment_directory/
├── common/
│   ├── __init__.py
│   ├── auth.py
│   ├── env_constants.py
│   ├── ingest.py
│   ├── status.py
│   └── utils.py
├── .env.yml
├── main.py
└── requirements.txt
```
Configure secrets in Google Secret Manager
Store sensitive credentials in Secret Manager for secure access by the Cloud Run function.
Create secret for Google SecOps service account
- In the GCP Console, go to Security > Secret Manager.
- Click Create Secret.
- Provide the following configuration details:
  - Name: Enter `chronicle-service-account`.
  - Secret value: Enter the entire contents of the Google SecOps service account JSON file.
- Click Create Secret.
- Copy the secret's resource name. The format is: `projects/PROJECT_ID/secrets/chronicle-service-account/versions/latest`
Create secret for Slack token
- In the Secret Manager page, click Create Secret.
- Provide the following configuration details:
  - Name: Enter `slack-admin-token`.
  - Secret value: Enter the Slack User OAuth Token (starts with `xoxp-`).
- Click Create Secret.
- Copy the secret's resource name. The format is: `projects/PROJECT_ID/secrets/slack-admin-token/versions/latest`
Configure environment variables
Open the `.env.yml` file in your deployment directory and configure the environment variables:

```yaml
CHRONICLE_CUSTOMER_ID: "<your-chronicle-customer-id>"
CHRONICLE_REGION: "us"
CHRONICLE_SERVICE_ACCOUNT: "projects/<PROJECT_ID>/secrets/chronicle-service-account/versions/latest"
CHRONICLE_NAMESPACE: ""
POLL_INTERVAL: "5"
SLACK_ADMIN_TOKEN: "projects/<PROJECT_ID>/secrets/slack-admin-token/versions/latest"
```

Replace the following:

- `<your-chronicle-customer-id>`: Your Google SecOps customer ID.
- `<PROJECT_ID>`: Your Google Cloud project ID.
- `CHRONICLE_REGION`: Set to your Google SecOps region. Valid values: `us`, `asia-northeast1`, `asia-south1`, `asia-southeast1`, `australia-southeast1`, `europe`, `europe-west2`, `europe-west3`, `europe-west6`, `europe-west9`, `europe-west12`, `me-central1`, `me-central2`, `me-west1`, `northamerica-northeast2`, `southamerica-east1`.
- `POLL_INTERVAL`: Frequency interval (in minutes) at which the function executes. This duration must match the Cloud Scheduler job interval.

Save the `.env.yml` file.
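`POLL_INTERVAL` and the Cloud Scheduler frequency must agree, or the function will either miss events or re-fetch overlapping windows. The sketch below derives the matching cron expression from the interval; `cron_for_poll_interval` is a hypothetical helper for illustration, not part of the ingestion scripts.

```python
def cron_for_poll_interval(minutes: int) -> str:
    """Return the Cloud Scheduler cron expression matching POLL_INTERVAL.

    Only intervals that divide an hour evenly map cleanly onto a */N
    minute field; anything else needs an hourly-or-coarser schedule.
    """
    if minutes < 1:
        raise ValueError("POLL_INTERVAL must be at least 1 minute")
    if minutes == 60:
        return "0 * * * *"  # top of every hour
    if 60 % minutes != 0:
        raise ValueError(f"{minutes} does not divide 60; pick 5, 15, 30, or 60")
    return f"*/{minutes} * * * *"

print(cron_for_poll_interval(5))  # matches POLL_INTERVAL: "5"
```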
Deploy the Cloud Run function
Open a terminal or Cloud Shell in the Google Cloud console, navigate to your deployment directory, and run the following command:

```shell
gcloud functions deploy slack-audit-to-chronicle \
  --entry-point main \
  --trigger-http \
  --runtime python312 \
  --env-vars-file .env.yml \
  --timeout 540s \
  --memory 512MB \
  --service-account <SERVICE_ACCOUNT_EMAIL>
```

Replace `<SERVICE_ACCOUNT_EMAIL>` with the email address of the service account you want your Cloud Run function to use.

Wait for the deployment to complete. Once deployed, note the function URL from the output.
Set up Cloud Scheduler
- In the GCP Console, go to Cloud Scheduler > Create job.
- Provide the following configuration details:

  | Setting | Value |
  |---|---|
  | Name | `slack-audit-scheduler` |
  | Region | Select the same region where you deployed the Cloud Run function |
  | Frequency | `*/5 * * * *` (every 5 minutes, matching the `POLL_INTERVAL` value) |
  | Timezone | Select a timezone (UTC recommended) |

- Click Continue.
- In the Configure the execution section:
  - Target type: Select HTTP.
  - URL: Enter the Cloud Run function URL from the deployment output.
  - HTTP method: Select POST.
- In the Auth header section:
  - Select Add OIDC token.
  - Service account: Select the same service account used for the Cloud Run function.
- Click Create.
Schedule frequency options
Choose frequency based on log volume and latency requirements:
| Frequency | Cron Expression | Use Case |
|---|---|---|
| Every 5 minutes | `*/5 * * * *` | Standard (recommended) |
| Every 15 minutes | `*/15 * * * *` | Medium volume |
| Every hour | `0 * * * *` | Low volume |
Test the integration (Option 1)
- In the Cloud Scheduler console, find your job (`slack-audit-scheduler`).
- Click Force run to trigger the job manually.
- Wait a few seconds.
- Go to Cloud Functions.
- Click the function name (`slack-audit-to-chronicle`).
- Click the Logs tab.

Verify the function executed successfully. Look for:

```
Retrieving the Slack Audit logs since: YYYY-MM-DDTHH:MM:SSZ
Processing logs...
Retrieved X audit logs from the API call
Logs processed successfully.
```
Option 2: Configure Slack Audit Logs export using AWS S3
This option uses AWS Lambda to collect Slack Audit logs and store them in Amazon S3, then configures a Google SecOps feed to ingest the logs.
Before you begin (Option 2)
Ensure that you have the following additional prerequisite:
- An AWS account with permissions to create S3 buckets, IAM users/roles/policies, Lambda functions, and EventBridge rules.
Create Amazon S3 bucket
- Create an Amazon S3 bucket following the Creating a bucket user guide.
- Save the bucket Name and Region for future reference (for example, `slack-audit-logs`).
Configure IAM policy and role for S3 uploads
- In the AWS Console, go to IAM > Policies > Create policy > JSON tab.
- Enter the following policy. Replace `slack-audit-logs` if you used a different bucket name:

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowPutObjects",
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::slack-audit-logs/*"
      },
      {
        "Sid": "AllowGetStateObject",
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::slack-audit-logs/slack/audit/state.json"
      }
    ]
  }
  ```

- Click Next.
- Enter the policy name `SlackAuditS3Policy`.
- Click Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy `SlackAuditS3Policy`.
- Name the role `SlackAuditToS3Role` and click Create role.
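If you provision AWS resources with scripts rather than the console, the policy above can be generated from the bucket name so it stays consistent with the Lambda configuration. This is a minimal sketch using only the standard library; `slack_audit_s3_policy` is a hypothetical helper, not an AWS API.

```python
import json


def slack_audit_s3_policy(bucket: str,
                          state_key: str = "slack/audit/state.json") -> str:
    """Build the least-privilege policy JSON for the Lambda role:
    PutObject anywhere under the bucket, GetObject only on the state file."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowPutObjects",
                "Effect": "Allow",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
            {
                "Sid": "AllowGetStateObject",
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/{state_key}",
            },
        ],
    }
    return json.dumps(policy, indent=2)


print(slack_audit_s3_policy("slack-audit-logs"))
```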
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

  | Setting | Value |
  |---|---|
  | Name | `slack_audit_to_s3` |
  | Runtime | Python 3.13 |
  | Architecture | x86_64 |
  | Execution role | `SlackAuditToS3Role` |

- Click Create function.
After the function is created, open the Code tab, delete the stub, and enter the following code (`slack_audit_to_s3.py`):

```python
#!/usr/bin/env python3
# Lambda: Pull Slack Audit Logs (Enterprise Grid) to S3 (JSONL format)

import os, json, time, urllib.parse
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

import boto3

BASE_URL = "https://api.slack.com/audit/v1/logs"
TOKEN = os.environ["SLACK_AUDIT_TOKEN"]
BUCKET = os.environ["S3_BUCKET"]
PREFIX = os.environ.get("S3_PREFIX", "slack/audit/")
STATE_KEY = os.environ.get("STATE_KEY", "slack/audit/state.json")
LIMIT = int(os.environ.get("LIMIT", "200"))
MAX_PAGES = int(os.environ.get("MAX_PAGES", "20"))
LOOKBACK_SEC = int(os.environ.get("LOOKBACK_SECONDS", "3600"))
HTTP_TIMEOUT = int(os.environ.get("HTTP_TIMEOUT", "60"))
HTTP_RETRIES = int(os.environ.get("HTTP_RETRIES", "3"))
RETRY_AFTER_DEFAULT = int(os.environ.get("RETRY_AFTER_DEFAULT", "2"))

# Optional server-side filters (comma-separated 'action' values)
ACTIONS = os.environ.get("ACTIONS", "").strip()

s3 = boto3.client("s3")


def _get_state() -> dict:
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=STATE_KEY)
        st = json.loads(obj["Body"].read() or b"{}")
        return {"cursor": st.get("cursor")}
    except Exception:
        return {"cursor": None}


def _put_state(state: dict) -> None:
    body = json.dumps(state, separators=(",", ":")).encode("utf-8")
    s3.put_object(
        Bucket=BUCKET, Key=STATE_KEY, Body=body, ContentType="application/json"
    )


def _http_get(params: dict) -> dict:
    qs = urllib.parse.urlencode(params, doseq=True)
    url = f"{BASE_URL}?{qs}" if qs else BASE_URL
    req = Request(url, method="GET")
    req.add_header("Authorization", f"Bearer {TOKEN}")
    req.add_header("Accept", "application/json")
    attempt = 0
    while True:
        try:
            with urlopen(req, timeout=HTTP_TIMEOUT) as r:
                return json.loads(r.read().decode("utf-8"))
        except HTTPError as e:
            # Respect Retry-After on 429/5xx
            if e.code in (429, 500, 502, 503, 504) and attempt < HTTP_RETRIES:
                retry_after = 0
                try:
                    retry_after = int(
                        e.headers.get("Retry-After", RETRY_AFTER_DEFAULT)
                    )
                except Exception:
                    retry_after = RETRY_AFTER_DEFAULT
                time.sleep(max(1, retry_after))
                attempt += 1
                continue
            raise
        except URLError:
            if attempt < HTTP_RETRIES:
                time.sleep(RETRY_AFTER_DEFAULT)
                attempt += 1
                continue
            raise


def _write_page(data: dict, page_idx: int) -> str:
    """Extract entries from Slack API response and write as JSONL."""
    entries = data.get("entries") or []
    if not entries:
        return None
    lines = [json.dumps(entry, separators=(",", ":")) for entry in entries]
    body = "\n".join(lines).encode("utf-8")
    ts = time.strftime("%Y/%m/%d/%H%M%S", time.gmtime())
    key = f"{PREFIX}{ts}-slack-audit-p{page_idx:05d}.json"
    s3.put_object(
        Bucket=BUCKET, Key=key, Body=body, ContentType="application/json"
    )
    return key


def lambda_handler(event=None, context=None):
    state = _get_state()
    cursor = state.get("cursor")

    params = {"limit": LIMIT}
    if ACTIONS:
        params["action"] = [a.strip() for a in ACTIONS.split(",") if a.strip()]
    if cursor:
        params["cursor"] = cursor
    else:
        # First run (or reset): fetch a recent window by time
        params["oldest"] = int(time.time()) - LOOKBACK_SEC

    pages = 0
    total = 0
    last_cursor = None
    while pages < MAX_PAGES:
        data = _http_get(params)
        _write_page(data, pages)
        entries = data.get("entries") or []
        total += len(entries)
        # Cursor for next page
        meta = data.get("response_metadata") or {}
        next_cursor = meta.get("next_cursor") or data.get("next_cursor")
        if next_cursor:
            params = {"limit": LIMIT, "cursor": next_cursor}
            if ACTIONS:
                params["action"] = [
                    a.strip() for a in ACTIONS.split(",") if a.strip()
                ]
            last_cursor = next_cursor
            pages += 1
            continue
        break

    if last_cursor:
        _put_state({"cursor": last_cursor})

    return {
        "ok": True,
        "pages": pages + (1 if total or last_cursor else 0),
        "entries": total,
        "cursor": last_cursor,
    }


if __name__ == "__main__":
    print(lambda_handler())
```
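The `_write_page` helper above stores each page as JSONL: one compact JSON object per line, a format the Google SecOps S3 feed can split into individual events. The standalone sketch below reproduces just that serialization step; the sample entries are illustrative, not real audit events.

```python
import json


def to_jsonl(entries: list) -> bytes:
    """Serialize entries the way _write_page does:
    compact JSON (no spaces), one object per line, UTF-8 encoded."""
    lines = [json.dumps(entry, separators=(",", ":")) for entry in entries]
    return "\n".join(lines).encode("utf-8")


entries = [
    {"id": "e1", "action": "user_login"},
    {"id": "e2", "action": "file_downloaded"},
]
body = to_jsonl(entries)
print(body.decode("utf-8"))
```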
Configure Lambda environment variables
- Go to Configuration > Environment variables > Edit > Add environment variable.
Enter the following environment variables, replacing the example values with your own:

| Key | Example Value |
|---|---|
| `SLACK_AUDIT_TOKEN` | `xoxp-...` (org-level user token with `auditlogs:read`) |
| `S3_BUCKET` | `slack-audit-logs` |
| `S3_PREFIX` | `slack/audit/` |
| `STATE_KEY` | `slack/audit/state.json` |
| `LIMIT` | `200` |
| `MAX_PAGES` | `20` |
| `LOOKBACK_SECONDS` | `3600` |
| `HTTP_TIMEOUT` | `60` |
| `HTTP_RETRIES` | `3` |
| `RETRY_AFTER_DEFAULT` | `2` |
| `ACTIONS` | (optional, comma-separated `action` filter) |

- Click Save.
- Select the Configuration tab. In the General configuration panel, click Edit.
- Change Timeout to `5 minutes` (300 seconds) and click Save.
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Name: Enter `slack-audit-1h`.
  - Recurring schedule: Select Rate-based schedule.
  - Rate expression: Enter `1 hours`.
  - Flexible time window: Select Off.
- Click Next.
- Select Target:
  - Target API: Select AWS Lambda Invoke.
  - Lambda function: Select `slack_audit_to_s3`.
- Click Next.
- Click Next (skip optional settings).
- Review and click Create schedule.
Optional: Create read-only IAM user for Google SecOps
- Go to AWS Console > IAM > Users > Create user.
- Provide the following configuration details:
  - User name: Enter `secops-reader`.
  - Access type: Select Programmatic access.
- Click Next.
- Select Attach policies directly.
- Click Create policy. In the JSON tab, enter:

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::slack-audit-logs/*"
      },
      {
        "Effect": "Allow",
        "Action": ["s3:ListBucket"],
        "Resource": "arn:aws:s3:::slack-audit-logs"
      }
    ]
  }
  ```

- Click Next.
- Enter the policy name `secops-reader-policy`.
- Click Create policy.
- Return to the user creation page, refresh the policy list, and select `secops-reader-policy`.
- Click Next.
- Click Create user.
- Select the created user `secops-reader`.
- Go to Security credentials > Access keys > Create access key.
- Select Third-party service.
- Click Next.
- Click Create access key.
- Click Download .csv file to save the credentials.
Configure a feed in Google SecOps to ingest Slack Audit logs
- Go to SIEM Settings > Feeds.
- Click Add New Feed.
- Click Configure a single feed.
- In the Feed name field, enter a name for the feed (for example, `Slack Audit Logs`).
- Select Amazon S3 V2 as the Source type.
- Select Slack Audit as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: Enter the S3 bucket URI with the prefix path: `s3://slack-audit-logs/slack/audit/` (replace `slack-audit-logs` with your actual S3 bucket name).
  - Source deletion option: Select the deletion option according to your preference.
  - Maximum File Age: Include files modified in the last number of days. Default is 180 days.
  - Access Key ID: User access key with access to the S3 bucket (from the `secops-reader` IAM user).
  - Secret Access Key: User secret key with access to the S3 bucket (from the `secops-reader` IAM user).
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
UDM mapping table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| `action` | `metadata.product_event_type` | Directly mapped from the `action` field |
| `actor.type` | `principal.labels` | Directly mapped with key `actor.type` |
| `actor.user.email` | `principal.user.email_addresses` | Directly mapped |
| `actor.user.id` | `principal.user.product_object_id` | Directly mapped |
| `actor.user.id` | `principal.user.userid` | Directly mapped |
| `actor.user.name` | `principal.user.user_display_name` | Directly mapped |
| `actor.user.team` | `principal.user.group_identifiers` | Directly mapped |
| `context.ip_address` | `principal.ip` | Directly mapped |
| `context.location.domain` | `about.resource.attribute.labels` | Directly mapped with key `context.location.domain` |
| `context.location.id` | `about.resource.id` | Directly mapped |
| `context.location.name` | `about.resource.name` | Directly mapped |
| `context.location.name` | `about.resource.attribute.labels` | Directly mapped with key `context.location.name` |
| `context.location.type` | `about.resource.resource_subtype` | Directly mapped |
| `context.session_id` | `network.session_id` | Directly mapped |
| `context.ua` | `network.http.user_agent` | Directly mapped |
| `context.ua` | `network.http.parsed_user_agent` | Parsed user agent derived from the `context.ua` field |
| `country` | `principal.location.country_or_region` | Directly mapped |
| `date_create` | `metadata.event_timestamp.seconds` | Epoch timestamp converted to timestamp object |
| `details.inviter.email` | `target.user.email_addresses` | Directly mapped |
| `details.inviter.id` | `target.user.product_object_id` | Directly mapped |
| `details.inviter.name` | `target.user.user_display_name` | Directly mapped |
| `details.inviter.team` | `target.user.group_identifiers` | Directly mapped |
| `details.reason` | `security_result.description` | Directly mapped; if array, concatenated with commas |
| `details.type` | `about.resource.attribute.labels` | Directly mapped with key `details.type` |
| `details.type` | `security_result.summary` | Directly mapped |
| `entity.app.id` | `target.resource.id` | Directly mapped |
| `entity.app.name` | `target.resource.name` | Directly mapped |
| `entity.channel.id` | `target.resource.id` | Directly mapped |
| `entity.channel.name` | `target.resource.name` | Directly mapped |
| `entity.channel.privacy` | `target.resource.attribute.labels` | Directly mapped with key `entity.channel.privacy` |
| `entity.file.filetype` | `target.resource.attribute.labels` | Directly mapped with key `entity.file.filetype` |
| `entity.file.id` | `target.resource.id` | Directly mapped |
| `entity.file.name` | `target.resource.name` | Directly mapped |
| `entity.file.title` | `target.resource.attribute.labels` | Directly mapped with key `entity.file.title` |
| `entity.huddle.date_end` | `about.resource.attribute.labels` | Directly mapped with key `entity.huddle.date_end` |
| `entity.huddle.date_start` | `about.resource.attribute.labels` | Directly mapped with key `entity.huddle.date_start` |
| `entity.huddle.id` | `about.resource.attribute.labels` | Directly mapped with key `entity.huddle.id` |
| `entity.huddle.participants.0` | `about.resource.attribute.labels` | Directly mapped with key `entity.huddle.participants.0` |
| `entity.huddle.participants.1` | `about.resource.attribute.labels` | Directly mapped with key `entity.huddle.participants.1` |
| `entity.type` | `target.resource.resource_subtype` | Directly mapped |
| `entity.user.email` | `target.user.email_addresses` | Directly mapped |
| `entity.user.id` | `target.user.product_object_id` | Directly mapped |
| `entity.user.name` | `target.user.user_display_name` | Directly mapped |
| `entity.user.team` | `target.user.group_identifiers` | Directly mapped |
| `entity.workflow.id` | `target.resource.id` | Directly mapped |
| `entity.workflow.name` | `target.resource.name` | Directly mapped |
| `id` | `metadata.product_log_id` | Directly mapped |
| `ip` | `principal.ip` | Directly mapped |
| `user_agent` | `network.http.user_agent` | Directly mapped |
| `user_id` | `principal.user.product_object_id` | Directly mapped |
| `username` | `principal.user.product_object_id` | Directly mapped |
| | `metadata.event_type` | Defaults to `USER_COMMUNICATION`; set to `USER_CREATION` if `action` is `user_created`, `USER_LOGIN` if `action` is `user_login` or `user_login_failed`, `USER_LOGOUT` if `action` is `user_logout`, `USER_RESOURCE_ACCESS` if `action` matches `file_`, `USER_RESOURCE_UPDATE_PERMISSIONS` if `action` matches `app_`, `private_`, `public_`, or `auth_policy_`, `USER_CHANGE_PERMISSIONS` if `action` matches `pref`, `legal_hold_`, `workflow_`, `channel_`, `user_deactivated`, `user_reactivated`, `role_change_`, or `user_channel_` |
| | `metadata.log_type` | Hardcoded to `SLACK_AUDIT` |
| | `metadata.product_name` | Set to `Enterprise Grid` if `date_create` exists, otherwise `Audit Logs` if `user_id` exists |
| | `metadata.vendor_name` | Hardcoded to `Slack` |
| | `extensions.auth.mechanism` | Hardcoded to `REMOTE` |
| | `extensions.auth.type` | Set to `SSO` if `action` contains `user_login` or `user_logout`, otherwise `MACHINE` |
| | `security_result.action` | Defaults to `ALLOW`; set to `BLOCK` if `action` is `user_login_failed` |
| | `target.application` | Set to `Slack` if `date_create` exists, otherwise `SLACK` if `user_id` exists |
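The `metadata.event_type` rules in the table can be expressed as a small classifier, which is handy when writing detection rules or sanity-checking parsed events. The sketch below is an illustrative reimplementation of the rules as listed, not the parser's actual code; where the table says an action "matches" a pattern, this sketch assumes prefix matching.

```python
def map_event_type(action: str) -> str:
    """Mirror the table's metadata.event_type rules (order matters:
    exact-match login/logout rules run before the prefix rules)."""
    if action == "user_created":
        return "USER_CREATION"
    if action in ("user_login", "user_login_failed"):
        return "USER_LOGIN"
    if action == "user_logout":
        return "USER_LOGOUT"
    if action.startswith("file_"):
        return "USER_RESOURCE_ACCESS"
    if action.startswith(("app_", "private_", "public_", "auth_policy_")):
        return "USER_RESOURCE_UPDATE_PERMISSIONS"
    if action.startswith(("pref", "legal_hold_", "workflow_", "channel_",
                          "role_change_", "user_channel_")) or action in (
            "user_deactivated", "user_reactivated"):
        return "USER_CHANGE_PERMISSIONS"
    return "USER_COMMUNICATION"


print(map_event_type("user_login_failed"))  # USER_LOGIN
print(map_event_type("file_downloaded"))    # USER_RESOURCE_ACCESS
```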
Need more help? Get answers from Community members and Google SecOps professionals.