BITWARDEN_EVENTS

This guide explains how to ingest Bitwarden Enterprise event logs into Google Security Operations using Amazon S3. The parser transforms raw JSON-formatted event logs into a structured format that conforms to the Chronicle UDM. It extracts fields such as user details, IP addresses, and event types, and maps them to the corresponding UDM fields for consistent security analysis.

Before you begin

  • Google SecOps instance.
  • Privileged access to Bitwarden tenant.
  • Privileged access to AWS (S3, IAM, Lambda, EventBridge).

Get Bitwarden API key and URL

  1. Sign in to the Bitwarden Admin Console.
  2. Go to Settings > Organization info > View API key.
  3. Copy and save the following details to a secure location:
    • Client ID
    • Client Secret
  4. Determine your Bitwarden endpoints (based on region):
    • IDENTITY_URL = https://identity.bitwarden.com/connect/token (EU: https://identity.bitwarden.eu/connect/token)
    • API_BASE = https://api.bitwarden.com (EU: https://api.bitwarden.eu)
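
You can optionally confirm the key pair works before wiring up AWS by requesting a token directly from the identity endpoint. This is a minimal sketch using the same client_credentials grant and api.organization scope that the Lambda function later in this guide uses; swap in the EU IDENTITY_URL if your tenant is EU-hosted.

#!/usr/bin/env python3
# Optional sanity check: request an OAuth2 access token with the organization API key.
import json
import urllib.parse
from urllib.request import Request, urlopen

IDENTITY_URL = "https://identity.bitwarden.com/connect/token"  # EU: identity.bitwarden.eu

def get_token(client_id: str, client_secret: str) -> str:
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "scope": "api.organization",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode("utf-8")
    req = Request(IDENTITY_URL, data=body, method="POST",
                  headers={"Content-Type": "application/x-www-form-urlencoded"})
    with urlopen(req, timeout=30) as r:
        return json.loads(r.read().decode("utf-8"))["access_token"]

if __name__ == "__main__":
    # Replace the placeholders with the values saved earlier
    token = get_token("<organization client_id>", "<organization client_secret>")
    print(token[:16] + "...")  # a truncated token confirms the credentials work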

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket
  2. Save the bucket Name and Region for future reference (for example, bitwarden-events).
  3. Create an IAM user following this user guide: Creating an IAM user.
  4. Select the created user.
  5. Select the Security credentials tab.
  6. In the Access keys section, click Create access key.
  7. Select Third-party service as the use case.
  8. Click Next.
  9. Optional: Add a description tag.
  10. Click Create access key.
  11. Click Download .csv file to save the Access Key and Secret Access Key for future reference.
  12. Click Done.
  13. Select the Permissions tab.
  14. In the Permissions policies section, click Add permissions > Add permissions.
  15. Select Attach policies directly.
  16. Search for and select the AmazonS3FullAccess policy.
  17. Click Next.
  18. Click Add permissions.

Configure the IAM policy and role for S3 uploads

  1. Go to AWS console > IAM > Policies > Create policy > JSON tab.
  2. Copy and paste the following policy JSON, replacing bitwarden-events if you entered a different bucket name:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutBitwardenObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bitwarden-events/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bitwarden-events/bitwarden/events/state.json"
    }
  ]
}

  4. Click Next > Create policy.
  5. Go to IAM > Roles > Create role > AWS service > Lambda.
  6. Attach the newly created policy.
  7. Name the role WriteBitwardenToS3Role and click Create role.
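
If you prefer to script this setup, the following is a minimal boto3 sketch of the same policy and role creation. The policy name WriteBitwardenToS3Policy is an assumption (the console flow lets you pick any name), and your credentials need IAM administrative permissions.

import json
import boto3

iam = boto3.client("iam")

# Same policy document as the console steps above
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPutBitwardenObjects", "Effect": "Allow",
         "Action": "s3:PutObject", "Resource": "arn:aws:s3:::bitwarden-events/*"},
        {"Sid": "AllowGetStateObject", "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::bitwarden-events/bitwarden/events/state.json"},
    ],
}
policy = iam.create_policy(
    PolicyName="WriteBitwardenToS3Policy",  # assumed name
    PolicyDocument=json.dumps(policy_doc),
)

# Trust policy that lets Lambda assume the role
trust = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "lambda.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}
iam.create_role(RoleName="WriteBitwardenToS3Role",
                AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(RoleName="WriteBitwardenToS3Role",
                       PolicyArn=policy["Policy"]["Arn"])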

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:
    • Name: bitwarden_events_to_s3
    • Runtime: Python 3.13
    • Architecture: x86_64
    • Execution role: WriteBitwardenToS3Role
  4. After the function is created, open the Code tab, delete the stub, and paste the code below (bitwarden_events_to_s3.py).
#!/usr/bin/env python3

import os, json, time, urllib.parse
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError
import boto3

IDENTITY_URL = os.environ.get("IDENTITY_URL", "https://identity.bitwarden.com/connect/token")
API_BASE = os.environ.get("API_BASE", "https://api.bitwarden.com").rstrip("/")
CID = os.environ["BW_CLIENT_ID"]          # organization.ClientId
CSECRET = os.environ["BW_CLIENT_SECRET"]  # organization.ClientSecret
BUCKET = os.environ["S3_BUCKET"]
PREFIX = os.environ.get("S3_PREFIX", "bitwarden/events/").strip("/")
STATE_KEY = os.environ.get("STATE_KEY", "bitwarden/events/state.json")
MAX_PAGES = int(os.environ.get("MAX_PAGES", "10"))

HEADERS_FORM = {"Content-Type": "application/x-www-form-urlencoded"}
HEADERS_JSON = {"Accept": "application/json"}

s3 = boto3.client("s3")


def _read_state():
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=STATE_KEY)
        j = json.loads(obj["Body"].read())
        return j.get("continuationToken")
    except Exception:
        return None


def _write_state(token):
    body = json.dumps({"continuationToken": token}).encode("utf-8")
    s3.put_object(Bucket=BUCKET, Key=STATE_KEY, Body=body, ContentType="application/json")


def _http(req: Request, timeout: int = 60, max_retries: int = 5):
    attempt, backoff = 0, 1.0
    while True:
        try:
            with urlopen(req, timeout=timeout) as r:
                return json.loads(r.read().decode("utf-8"))
        except HTTPError as e:
            # Retry on 429 and 5xx
            if (e.code == 429 or 500 <= e.code <= 599) and attempt < max_retries:
                time.sleep(backoff); attempt += 1; backoff *= 2; continue
            raise
        except URLError:
            if attempt < max_retries:
                time.sleep(backoff); attempt += 1; backoff *= 2; continue
            raise


def _get_token():
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "scope": "api.organization",
        "client_id": CID,
        "client_secret": CSECRET,
    }).encode("utf-8")
    req = Request(IDENTITY_URL, data=body, method="POST", headers=HEADERS_FORM)
    data = _http(req, timeout=30)
    return data["access_token"], int(data.get("expires_in", 3600))


def _fetch_events(bearer: str, cont: str | None):
    params = {}
    if cont:
        params["continuationToken"] = cont
    qs = ("?" + urllib.parse.urlencode(params)) if params else ""
    url = f"{API_BASE}/public/events{qs}"
    req = Request(url, method="GET", headers={"Authorization": f"Bearer {bearer}", **HEADERS_JSON})
    return _http(req, timeout=60)


def _write_events_jsonl(events: list, run_ts_s: int, page_index: int) -> str | None:
    """
    Write events in JSONL format (one JSON object per line).
    Only writes if there are events to write.
    Returns the S3 key of the written file.
    """
    if not events:
        return None
    
    # Build JSONL content: one event per line
    lines = [json.dumps(event, separators=(",", ":")) for event in events]
    jsonl_content = "\n".join(lines) + "\n"  # JSONL format with trailing newline
    
    # Generate unique filename with page number to avoid conflicts
    key = f"{PREFIX}/{time.strftime('%Y/%m/%d/%H%M%S', time.gmtime(run_ts_s))}-page{page_index:05d}-bitwarden-events.jsonl"
    
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=jsonl_content.encode("utf-8"),
        ContentType="application/x-ndjson",  # MIME type for JSONL
    )
    return key


def lambda_handler(event=None, context=None):
    bearer, _ttl = _get_token()
    cont = _read_state()
    run_ts_s = int(time.time())

    pages = 0
    total_events = 0
    written_files = []
    
    while pages < MAX_PAGES:
        data = _fetch_events(bearer, cont)
        
        # Extract events array from API response
        # API returns: {"object":"list", "data":[...], "continuationToken":"..."}
        events = data.get("data", [])
        
        # Only write file if there are events
        if events:
            s3_key = _write_events_jsonl(events, run_ts_s, pages)
            if s3_key:
                written_files.append(s3_key)
                total_events += len(events)
        
        pages += 1
        
        # Check for the next page token
        next_cont = data.get("continuationToken")
        if next_cont:
            cont = next_cont
            continue
        # No more pages: clear the token so a stale one is never persisted
        cont = None
        break
    
    # Save state only if we stopped early because of MAX_PAGES and there is
    # still a continuation token; otherwise clear the state for the next run.
    _write_state(cont if (pages >= MAX_PAGES and cont) else None)
    
    return {
        "ok": True,
        "pages": pages,
        "total_events": total_events,
        "files_written": len(written_files),
        "nextContinuationToken": cont if pages >= MAX_PAGES else None
    }


if __name__ == "__main__":
    print(lambda_handler())
  5. Go to Configuration > Environment variables > Edit > Add new environment variable.
  6. Enter the environment variables listed below, replacing the example values with your own.

Environment variables

Key                Example
S3_BUCKET          bitwarden-events
S3_PREFIX          bitwarden/events/
STATE_KEY          bitwarden/events/state.json
BW_CLIENT_ID       <organization client_id>
BW_CLIENT_SECRET   <organization client_secret>
IDENTITY_URL       https://identity.bitwarden.com/connect/token (EU: https://identity.bitwarden.eu/connect/token)
API_BASE           https://api.bitwarden.com (EU: https://api.bitwarden.eu)
MAX_PAGES          10
  7. After the function is created, stay on its page (or open Lambda > Functions > your-function).
  8. Select the Configuration tab.
  9. In the General configuration panel, click Edit.
  10. Change Timeout to 5 minutes (300 seconds) and click Save.
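
The timeout change can also be made programmatically. A minimal sketch, assuming default AWS credentials with lambda:UpdateFunctionConfiguration permission:

import boto3

# Raise the function timeout to 5 minutes (the console steps above do the same)
boto3.client("lambda").update_function_configuration(
    FunctionName="bitwarden_events_to_s3",
    Timeout=300,  # seconds
)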

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler > Create schedule.
  2. Provide the following configuration details:
    • Recurring schedule: Rate (1 hour).
    • Target: Your Lambda function.
    • Name: bitwarden-events-1h.
  3. Click Create schedule.
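
The same schedule can be created with boto3. The sketch below assumes a pre-existing scheduler execution role that allows EventBridge Scheduler to call lambda:InvokeFunction on your function; both ARNs are placeholders you must substitute.

import boto3

scheduler = boto3.client("scheduler")
scheduler.create_schedule(
    Name="bitwarden-events-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        # Placeholder ARNs: substitute your region, account ID, and role
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:bitwarden_events_to_s3",
        "RoleArn": "arn:aws:iam::123456789012:role/BitwardenSchedulerRole",
    },
)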

(Optional) Create read-only IAM user & keys for Google SecOps

  1. Go to AWS Console > IAM > Users > Add users.
  2. Provide the following configuration details:
    • User: Enter secops-reader.
    • Access type: Select Access key - Programmatic access.
  3. Click Create user.
  4. Attach a minimal read policy (custom): go to Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  5. In the JSON editor, enter the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::<your-bucket>/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your-bucket>"
    }
  ]
}
  6. Name = secops-reader-policy.
  7. Click Create policy > search/select > Next > Add permissions.
  8. Create an access key for secops-reader: Security credentials > Access keys > Create access key > download the .csv file (you will paste these values into the feed).
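
Before pasting the keys into the feed, you can confirm they grant the access the feed needs (list plus read on the prefix). A minimal sketch, assuming the bucket name used throughout this guide:

import boto3

# Use the secops-reader keys explicitly rather than the default credential chain
s3 = boto3.client(
    "s3",
    aws_access_key_id="<secops-reader access key>",
    aws_secret_access_key="<secops-reader secret key>",
)
resp = s3.list_objects_v2(Bucket="bitwarden-events", Prefix="bitwarden/events/")
for obj in resp.get("Contents", [])[:5]:
    print(obj["Key"])  # expect ...-bitwarden-events.jsonl objects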

Configure a feed in Google SecOps to ingest the Bitwarden Enterprise Event Logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Bitwarden Events).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Bitwarden events as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://bitwarden-events/bitwarden/events/
    • Source deletion options: Select deletion option according to your preference.
    • Maximum File Age: Default 180 Days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.

UDM Mapping Table

Log field | UDM mapping | Logic
actingUserId | target.user.userid | If enriched.actingUser.userId is empty or null, this field is used to populate target.user.userid.
collectionID | security_result.detection_fields.key | Populates the key field within detection_fields in security_result.
collectionID | security_result.detection_fields.value | Populates the value field within detection_fields in security_result.
date | metadata.event_timestamp | Parsed and converted to a timestamp and mapped to event_timestamp.
enriched.actingUser.accessAll | security_result.rule_labels.key | Sets the key to "Access_All" within rule_labels in security_result.
enriched.actingUser.accessAll | security_result.rule_labels.value | Populates the value field within rule_labels in security_result with the value converted to string.
enriched.actingUser.email | target.user.email_addresses | Populates the email_addresses field within target.user.
enriched.actingUser.id | metadata.product_log_id | Populates the product_log_id field within metadata.
enriched.actingUser.id | target.labels.key | Sets the key to "ID" within target.labels.
enriched.actingUser.id | target.labels.value | Populates the value field within target.labels with the value from enriched.actingUser.id.
enriched.actingUser.name | target.user.user_display_name | Populates the user_display_name field within target.user.
enriched.actingUser.object | target.labels.key | Sets the key to "Object" within target.labels.
enriched.actingUser.object | target.labels.value | Populates the value field within target.labels with the value from enriched.actingUser.object.
enriched.actingUser.resetPasswordEnrolled | target.labels.key | Sets the key to "ResetPasswordEnrolled" within target.labels.
enriched.actingUser.resetPasswordEnrolled | target.labels.value | Populates the value field within target.labels with the value converted to string.
enriched.actingUser.twoFactorEnabled | security_result.rule_labels.key | Sets the key to "Two Factor Enabled" within rule_labels in security_result.
enriched.actingUser.twoFactorEnabled | security_result.rule_labels.value | Populates the value field within rule_labels in security_result with the value converted to string.
enriched.actingUser.userId | target.user.userid | Populates the userid field within target.user.
enriched.collection.id | additional.fields.key | Sets the key to "Collection ID" within additional.fields.
enriched.collection.id | additional.fields.value.string_value | Populates the string_value field within additional.fields with the value from enriched.collection.id.
enriched.collection.object | additional.fields.key | Sets the key to "Collection Object" within additional.fields.
enriched.collection.object | additional.fields.value.string_value | Populates the string_value field within additional.fields with the value from enriched.collection.object.
enriched.type | metadata.product_event_type | Populates the product_event_type field within metadata.
groupId | target.user.group_identifiers | Adds the value to the group_identifiers array within target.user.
ipAddress | principal.ip | IP address extracted from the field and mapped to principal.ip.
N/A | extensions.auth | An empty object is created by the parser.
N/A | metadata.event_type | Determined from enriched.type and the presence of principal and target information. Possible values: USER_LOGIN, STATUS_UPDATE, GENERIC_EVENT.
N/A | security_result.action | Determined from enriched.type. Possible values: ALLOW, BLOCK.
object | additional.fields.key | Sets the key to "Object" within additional.fields.
object | additional.fields.value | Populates the value field within additional.fields with the value from object.
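
For reference, a minimal sketch that pulls one exported JSONL object and prints a few of the raw fields named in the table above. The object key is a made-up example; raw events from the Bitwarden public API carry flat fields such as actingUserId, ipAddress, and date, while the enriched.* fields in the table are produced during enrichment and may not appear in the raw export.

import json
import boto3

s3 = boto3.client("s3")
# Hypothetical key: substitute a real object written by the Lambda function
key = "bitwarden/events/2025/01/01/000000-page00000-bitwarden-events.jsonl"
body = s3.get_object(Bucket="bitwarden-events", Key=key)["Body"].read().decode("utf-8")
for line in body.splitlines():
    ev = json.loads(line)  # one event per line (JSONL)
    print(ev.get("type"), ev.get("actingUserId"), ev.get("ipAddress"), ev.get("date"))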
