Collect Oracle Cloud Infrastructure Audit logs
This document explains how to ingest Oracle Cloud Infrastructure Audit logs into Google Security Operations using Amazon S3.
Before you begin
Ensure that you have the following prerequisites:
- Google SecOps instance.
- Oracle Cloud Infrastructure account with permissions to create and manage:
  - Service Connector Hub
  - Oracle Functions
  - Vaults and Secrets
  - Dynamic Groups and IAM Policies
  - Logging
- AWS account with permissions to create and manage:
  - S3 buckets
  - IAM users and policies
Create an Amazon S3 bucket
- Sign in to the AWS Management Console.
- Go to S3 > Create bucket.
- Provide the following configuration details:
  - Bucket name: Enter a unique name (for example, oci-audit-logs-bucket).
  - AWS Region: Select a region (for example, us-east-1).
  - Keep the default settings for the other options.
- Click Create bucket.
- Save the bucket Name and Region for later use.
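If you prefer to script the bucket creation, the following sketch shows the equivalent boto3 call. The helper name `bucket_create_kwargs` is illustrative, not part of any SDK; note that boto3's `create_bucket` rejects a `CreateBucketConfiguration` for us-east-1 but requires one for every other region.

```python
def bucket_create_kwargs(name: str, region: str) -> dict:
    """Build the arguments for boto3's create_bucket. us-east-1 must omit
    CreateBucketConfiguration; every other region must include it."""
    kwargs = {"Bucket": name}
    if region != "us-east-1":
        kwargs["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return kwargs

# With AWS credentials configured in your environment:
# import boto3
# s3 = boto3.client("s3", region_name="us-east-1")
# s3.create_bucket(**bucket_create_kwargs("oci-audit-logs-bucket", "us-east-1"))
```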
Create an IAM user in AWS for OCI Functions
- Sign in to the AWS Management Console.
- Go to IAM > Users > Add users.
- Provide the following configuration details:
  - User name: Enter a username (for example, oci-functions-s3-user).
  - Access type: Select Access key - Programmatic access.
- Click Next: Permissions.
- Click Attach existing policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next: Tags.
- Click Next: Review.
- Click Create user.
- Important: On the success page, copy and save the following credentials:
- Access key ID
- Secret access key
Store AWS credentials in OCI Vault
To securely store AWS credentials, you must use Oracle Cloud Infrastructure Vault instead of hardcoding them in the function code.
Create a Vault and Master Encryption Key
- Sign in to the Oracle Cloud Console.
- Go to Identity and Security > Vault.
- If you don't have a Vault, click Create Vault.
- Provide the following configuration details:
  - Create in Compartment: Select your compartment.
  - Name: Enter a name (for example, oci-functions-vault).
- Click Create Vault.
- After the Vault is created, click the Vault name to open it.
- Under Master Encryption Keys, click Create Key.
- Provide the following configuration details:
  - Protection Mode: Software
  - Name: Enter a name (for example, oci-functions-key).
  - Key Shape: Algorithm: AES
  - Key Shape: Length: 256 bits
- Click Create Key.
Create secrets for AWS credentials
- In the Vault, under Secrets, click Create Secret.
- Provide the following configuration details for the AWS access key:
  - Create in Compartment: Select your compartment.
  - Name: aws-access-key
  - Description: AWS access key for S3
  - Encryption Key: Select the Master Encryption Key you created.
  - Secret Type Contents: Plain-Text
  - Secret Contents: Paste your AWS access key ID.
- Click Create Secret.
- Copy and save the OCID of this secret (it looks like ocid1.vaultsecret.oc1...).
- Click Create Secret again to create the second secret.
- Provide the following configuration details for the AWS secret key:
  - Create in Compartment: Select your compartment.
  - Name: aws-secret-key
  - Description: AWS secret key for S3
  - Encryption Key: Select the same Master Encryption Key.
  - Secret Type Contents: Plain-Text
  - Secret Contents: Paste your AWS secret access key.
- Click Create Secret.
- Copy and save the OCID of this secret.
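When the function later fetches these secrets, the OCI SDK returns the secret bundle content base64-encoded, so a decode step is required before the value can be used as an AWS credential. A minimal sketch of that step (the SDK call is shown only as a comment; the helper name `decode_secret_content` is illustrative):

```python
import base64

def decode_secret_content(b64_content: str) -> str:
    """OCI Vault returns secret bundle content base64-encoded;
    decode it back to the plain-text value that was stored."""
    return base64.b64decode(b64_content.encode("ascii")).decode("ascii")

# Inside the function, with an oci.secrets.SecretsClient and a secret OCID:
# bundle = secret_client.get_secret_bundle(secret_ocid)
# value = decode_secret_content(bundle.data.secret_bundle_content.content)
```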
Create a Dynamic Group for OCI Functions
- Sign in to the Oracle Cloud Console.
- Go to Identity & Security > Identity > Dynamic Groups.
- Click Create Dynamic Group.
- Provide the following configuration details:
  - Name: oci-functions-dynamic-group
  - Description: Dynamic group for OCI Functions to access Vault secrets
  - Matching Rules: Enter the following rule (replace <your_compartment_ocid> with your compartment OCID):

    ```
    ALL {resource.type = 'fnfunc', resource.compartment.id = '<your_compartment_ocid>'}
    ```

- Click Create.
Create an IAM Policy for Vault access
- Sign in to the Oracle Cloud Console.
- Go to Identity & Security > Identity > Policies.
- Select the compartment where you want to create the policy.
- Click Create Policy.
- Provide the following configuration details:
  - Name: oci-functions-vault-access-policy
  - Description: Allow OCI Functions to read secrets from Vault
  - Policy Builder: Toggle Show manual editor.
  - Policy statements: Enter the following (replace <compartment_name> with your compartment name):

    ```
    allow dynamic-group oci-functions-dynamic-group to manage secret-family in compartment <compartment_name>
    ```

- Click Create.
Create an OCI Function Application
- Sign in to the Oracle Cloud Console.
- Go to Developer Services > Applications (under Functions).
- Click Create Application.
- Provide the following configuration details:
  - Name: Enter a name (for example, oci-logs-to-s3-app).
  - VCN: Select a VCN in your compartment.
  - Subnets: Select one or more subnets.
- Click Create.
Create and deploy the OCI Function
Set up Cloud Shell (recommended)
- In the Oracle Cloud Console, click the Cloud Shell icon in the top-right corner.
- Wait for Cloud Shell to initialize.
Create the function
In Cloud Shell, create a new directory for your function:

```shell
mkdir pushlogs
cd pushlogs
```

Initialize a new Python function:

```shell
fn init --runtime python
```

This creates three files: func.py, func.yaml, and requirements.txt.
Update func.py
Replace the contents of func.py with the following code:

```python
import io
import json
import logging
import base64
import os

import boto3
import oci
from fdk import response


def handler(ctx, data: io.BytesIO = None):
    """
    OCI Function to push audit logs from OCI Logging to AWS S3
    """
    try:
        # Parse incoming log data from Service Connector
        funDataStr = data.read().decode('utf-8')
        funData = json.loads(funDataStr)
        logging.getLogger().info(f"Received {len(funData)} log entries")

        # Replace these with your actual OCI Vault secret OCIDs
        secret_key_id = "ocid1.vaultsecret.oc1..<your_secret_key_ocid>"
        access_key_id = "ocid1.vaultsecret.oc1..<your_access_key_ocid>"

        # Replace with your S3 bucket name
        s3_bucket_name = "oci-audit-logs-bucket"

        # Use Resource Principals for OCI authentication
        signer = oci.auth.signers.get_resource_principals_signer()
        secret_client = oci.secrets.SecretsClient({}, signer=signer)

        def read_secret_value(secret_client, secret_id):
            """Retrieve and decode a secret value from OCI Vault."""
            secret_bundle = secret_client.get_secret_bundle(secret_id)
            base64_secret_content = secret_bundle.data.secret_bundle_content.content
            base64_secret_bytes = base64_secret_content.encode('ascii')
            base64_message_bytes = base64.b64decode(base64_secret_bytes)
            secret_content = base64_message_bytes.decode('ascii')
            return secret_content

        # Retrieve AWS credentials from OCI Vault
        awsaccesskey = read_secret_value(secret_client, access_key_id)
        awssecretkey = read_secret_value(secret_client, secret_key_id)

        # Initialize boto3 session with AWS credentials
        session = boto3.Session(
            aws_access_key_id=awsaccesskey,
            aws_secret_access_key=awssecretkey
        )
        s3 = session.resource('s3')

        # Process each log entry
        for i in range(0, len(funData)):
            # Use timestamp as filename
            filename = funData[i].get('time', f'log_{i}')
            # Remove special characters from filename
            filename = filename.replace(':', '-').replace('.', '-')

            logging.getLogger().info(f"Processing log entry: {filename}")

            # Write log entry to temporary file
            temp_file = f'/tmp/{filename}.json'
            with open(temp_file, 'w', encoding='utf-8') as f:
                json.dump(funData[i], f, ensure_ascii=False, indent=4)

            # Upload to S3
            s3_key = f'{filename}.json'
            s3.meta.client.upload_file(
                Filename=temp_file,
                Bucket=s3_bucket_name,
                Key=s3_key
            )
            logging.getLogger().info(f"Uploaded {s3_key} to S3 bucket {s3_bucket_name}")

            # Clean up temporary file
            os.remove(temp_file)

        return response.Response(
            ctx,
            response_data=json.dumps({
                "status": "success",
                "processed_logs": len(funData)
            }),
            headers={"Content-Type": "application/json"}
        )

    except Exception as e:
        logging.getLogger().error(f"Error processing logs: {str(e)}")
        return response.Response(
            ctx,
            response_data=json.dumps({
                "status": "error",
                "message": str(e)
            }),
            headers={"Content-Type": "application/json"},
            status_code=500
        )
```

After pasting the code, make the following replacements:

- Replace secret_key_id with your Vault secret OCID for the AWS secret access key.
- Replace access_key_id with your Vault secret OCID for the AWS access key ID.
- Replace s3_bucket_name with your S3 bucket name.
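The function names each S3 object after the log entry's timestamp, stripping characters that are awkward in file paths and object keys. A small standalone sketch of that naming logic (the helper name `s3_key_for` is illustrative, not part of the function's API):

```python
def s3_key_for(timestamp: str, index: int = 0) -> str:
    """Mirror the function's naming scheme: the log timestamp becomes the
    object key, with ':' and '.' replaced so the key is filesystem-safe."""
    filename = timestamp if timestamp else f"log_{index}"
    return filename.replace(":", "-").replace(".", "-") + ".json"
```

For example, an audit entry stamped 2024-01-15T10:30:00.123Z would be uploaded as 2024-01-15T10-30-00-123Z.json.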
Update func.yaml
Replace the contents of func.yaml with:

```yaml
schema_version: 20180708
name: pushlogs
version: 0.0.1
runtime: python
build_image: fnproject/python:3.9-dev
run_image: fnproject/python:3.9
entrypoint: /python/bin/fdk /function/func.py handler
memory: 256
```
Update requirements.txt
Replace the contents of requirements.txt with:

```
fdk>=0.1.56
boto3
oci
```
Deploy the function
Set the Fn context to use your application:

```shell
fn use context <region-context>
fn update context oracle.compartment-id <compartment-ocid>
```

Deploy the function:

```shell
fn -v deploy --app oci-logs-to-s3-app
```

Wait for the deployment to complete. You should see output indicating that the function was deployed successfully.

Verify that the function was created:

```shell
fn list functions oci-logs-to-s3-app
```
Create a Service Connector to send OCI Audit logs to the Function
- Sign in to the Oracle Cloud Console.
- Go to Analytics & AI > Messaging > Service Connector Hub.
- Select the compartment where you want to create the service connector.
- Click Create Service Connector.
Configure Service Connector details
- Provide the following configuration details under Service Connector Information:
  - Connector Name: Enter a descriptive name (for example, audit-logs-to-s3-connector).
  - Description: Optional description (for example, "Forward OCI Audit logs to AWS S3").
  - Resource Compartment: Select the compartment.
Configure Source
- Under Configure Source:
  - Source: Select Logging.
  - Compartment: Select the compartment containing audit logs.
  - Log Group: Select _Audit (this is the default log group for audit logs).
  - Logs: Click + Another Log and select the audit log for your compartment (for example, _Audit_Include_Subcompartment).
Configure Target
- Under Configure Target:
  - Target: Select Functions.
  - Compartment: Select the compartment containing your function application.
  - Function Application: Select oci-logs-to-s3-app (the application you created earlier).
  - Function: Select pushlogs (the function you deployed).
Configure Policy
Under Configure Policy:
- Review the required IAM policy statements displayed.
- Click Create to create the required policies automatically.
Click Create to create the service connector.
Wait for the service connector to be created and activated. The status should change to Active.
Verify logs are being pushed to AWS S3
- Sign in to the Oracle Cloud Console.
- Perform some actions that generate audit logs (for example, create or modify a resource).
- Wait 2-5 minutes for logs to be processed.
- Sign in to the AWS Management Console.
- Go to S3 > Buckets.
- Click your bucket (for example, oci-audit-logs-bucket).
- Verify that JSON log files are appearing in the bucket.
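You can also check the bucket from a script instead of the console. A sketch, assuming AWS credentials in your environment; the listing call is shown as a comment so only the key filter runs locally, and the helper name `json_log_keys` is illustrative:

```python
def json_log_keys(keys: list) -> list:
    """Filter an object listing down to the .json files the function writes."""
    return [k for k in keys if k.endswith(".json")]

# With boto3 and AWS credentials configured:
# import boto3
# s3 = boto3.client("s3")
# resp = s3.list_objects_v2(Bucket="oci-audit-logs-bucket")
# print(json_log_keys([obj["Key"] for obj in resp.get("Contents", [])]))
```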
Configure AWS S3 bucket and IAM for Google SecOps
Create an IAM user for Chronicle
- Sign in to the AWS Management Console.
- Go to IAM > Users > Add users.
- Provide the following configuration details:
  - User name: Enter chronicle-s3-reader.
  - Access type: Select Access key - Programmatic access.
- Click Next: Permissions.
- Click Attach existing policies directly.
- Search for and select the AmazonS3ReadOnlyAccess policy.
- Click Next: Tags.
- Click Next: Review.
- Click Create user.
- Click Download CSV file to save the Access Key ID and Secret Access Key.
- Click Close.
Optional: Create a custom IAM policy for least-privilege access
If you want to restrict access to only the specific bucket:
- Go to IAM > Policies > Create policy.
- Click the JSON tab.
- Enter the following policy (replace oci-audit-logs-bucket with your bucket name):

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject",
          "s3:ListBucket"
        ],
        "Resource": [
          "arn:aws:s3:::oci-audit-logs-bucket",
          "arn:aws:s3:::oci-audit-logs-bucket/*"
        ]
      }
    ]
  }
  ```

- Click Next: Tags.
- Click Next: Review.
- Provide the following configuration details:
  - Name: chronicle-s3-read-policy
  - Description: Read-only access to OCI audit logs bucket
- Click Create policy.
- Go back to IAM > Users and select the chronicle-s3-reader user.
- Click Add permissions > Attach policies directly.
- Search for and select chronicle-s3-read-policy.
- Remove the AmazonS3ReadOnlyAccess policy if you attached it earlier.
- Click Add permissions.
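If you manage several buckets, you can generate the same least-privilege policy document programmatically rather than editing the JSON by hand. A sketch; the helper name `chronicle_read_policy` is illustrative:

```python
import json

def chronicle_read_policy(bucket: str) -> str:
    """Build the read-only policy JSON for a single S3 bucket:
    s3:ListBucket on the bucket, s3:GetObject on its objects."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }
    return json.dumps(policy, indent=2)
```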
Configure a feed in Google SecOps to ingest Oracle Cloud Audit logs
- Go to SIEM Settings > Feeds.
- Click Add New Feed.
- On the next page, click Configure a single feed.
- In the Feed name field, enter a name for the feed (for example, Oracle Cloud Audit Logs).
- Select Amazon S3 V2 as the Source type.
- Select Oracle Cloud Infrastructure as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: Enter the S3 bucket URI (for example, s3://oci-audit-logs-bucket/).
  - Source deletion option: Select the deletion option according to your preference:
    - Never: Recommended for testing and initial setup.
    - Delete transferred files: Deletes files after successful ingestion (use in production to manage storage costs).
  - Maximum File Age: Include files modified in the last number of days. The default is 180 days.
  - Access Key ID: Enter the access key ID of the Chronicle IAM user you created.
  - Secret Access Key: Enter the secret access key of the Chronicle IAM user you created.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
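A malformed S3 URI is a common reason a feed fails to pull anything. A loose sanity check you could run before submitting (the helper name `validate_s3_uri` is illustrative; the minimum bucket-name length of 3 characters is an S3 naming rule):

```python
def validate_s3_uri(uri: str) -> bool:
    """Loose sanity check for the feed's S3 URI field: it must use the
    s3:// scheme and name a bucket of at least 3 characters."""
    if not uri.startswith("s3://"):
        return False
    bucket = uri[len("s3://"):].split("/", 1)[0]
    return len(bucket) >= 3
```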
UDM mapping table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| data.request.headers.authorization.0 | event.idm.read_only_udm.additional.fields | Value taken from data.request.headers.authorization.0 and added as a key-value pair where the key is "Request Headers Authorization". |
| data.compartmentId | event.idm.read_only_udm.additional.fields | Value taken from data.compartmentId and added as a key-value pair where the key is "compartmentId". |
| data.compartmentName | event.idm.read_only_udm.additional.fields | Value taken from data.compartmentName and added as a key-value pair where the key is "compartmentName". |
| data.response.headers.Content-Length.0 | event.idm.read_only_udm.additional.fields | Value taken from data.response.headers.Content-Length.0 and added as a key-value pair where the key is "Response Headers Content-Length". |
| data.response.headers.Content-Type.0 | event.idm.read_only_udm.additional.fields | Value taken from data.response.headers.Content-Type.0 and added as a key-value pair where the key is "Response Headers Content-Type". |
| data.eventGroupingId | event.idm.read_only_udm.additional.fields | Value taken from data.eventGroupingId and added as a key-value pair where the key is "eventGroupingId". |
| oracle.tenantid, data.identity.tenantId | event.idm.read_only_udm.additional.fields | Value taken from oracle.tenantid if present, otherwise from data.identity.tenantId, and added as a key-value pair where the key is "tenantId". |
| data.message | event.idm.read_only_udm.metadata.description | Value taken from data.message. |
| time | event.idm.read_only_udm.metadata.event_timestamp | Value taken from time and parsed as an ISO 8601 timestamp. |
| | event.idm.read_only_udm.metadata.event_type | Set to GENERIC_EVENT by default. Set to NETWORK_CONNECTION if a principal (IP or hostname) and a target IP are present. Set to STATUS_UPDATE if only a principal is present. |
| time | event.idm.read_only_udm.metadata.ingested_timestamp | If oracle.ingestedtime is not empty, the value is taken from the time field and parsed as an ISO 8601 timestamp. |
| oracle.tenantid | event.idm.read_only_udm.metadata.product_deployment_id | Value taken from oracle.tenantid. |
| type | event.idm.read_only_udm.metadata.product_event_type | Value taken from type. |
| oracle.logid | event.idm.read_only_udm.metadata.product_log_id | Value taken from oracle.logid. |
| specversion | event.idm.read_only_udm.metadata.product_version | Value taken from specversion. |
| data.request.action | event.idm.read_only_udm.network.http.method | Value taken from data.request.action. |
| data.identity.userAgent | event.idm.read_only_udm.network.http.parsed_user_agent | Value taken from data.identity.userAgent and parsed. |
| data.response.status | event.idm.read_only_udm.network.http.response_code | Value taken from data.response.status and converted to an integer. |
| data.protocol | event.idm.read_only_udm.network.ip_protocol | The numeric value from data.protocol is converted to its string representation (for example, 6 becomes "TCP" and 17 becomes "UDP"). |
| data.bytesOut | event.idm.read_only_udm.network.sent_bytes | Value taken from data.bytesOut and converted to an unsigned integer. |
| data.packets | event.idm.read_only_udm.network.sent_packets | Value taken from data.packets and converted to an integer. |
| data.identity.consoleSessionId | event.idm.read_only_udm.network.session_id | Value taken from data.identity.consoleSessionId. |
| id | event.idm.read_only_udm.principal.asset.product_object_id | Value taken from id. |
| source | event.idm.read_only_udm.principal.hostname | Value taken from source. |
| data.sourceAddress, data.identity.ipAddress | event.idm.read_only_udm.principal.ip | Values from data.sourceAddress and data.identity.ipAddress are merged into this field. |
| data.sourcePort | event.idm.read_only_udm.principal.port | Value taken from data.sourcePort and converted to an integer. |
| data.request.headers.X-Forwarded-For.0 | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from data.request.headers.X-Forwarded-For.0 and added as a key-value pair where the key is "x forward". |
| oracle.compartmentid | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from oracle.compartmentid and added as a key-value pair where the key is "compartmentid". |
| oracle.loggroupid | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from oracle.loggroupid and added as a key-value pair where the key is "loggroupid". |
| oracle.vniccompartmentocid | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from oracle.vniccompartmentocid and added as a key-value pair where the key is "vniccompartmentocid". |
| oracle.vnicocid | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from oracle.vnicocid and added as a key-value pair where the key is "vnicocid". |
| oracle.vnicsubnetocid | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from oracle.vnicsubnetocid and added as a key-value pair where the key is "vnicsubnetocid". |
| data.flowid | event.idm.read_only_udm.principal.resource.product_object_id | Value taken from data.flowid. |
| data.identity.credentials | event.idm.read_only_udm.principal.user.attribute.labels | Value taken from data.identity.credentials and added as a key-value pair where the key is "credentials". |
| data.identity.principalName | event.idm.read_only_udm.principal.user.user_display_name | Value taken from data.identity.principalName. |
| data.identity.principalId | event.idm.read_only_udm.principal.user.userid | Value taken from data.identity.principalId. |
| data.action | event.idm.read_only_udm.security_result.action | Set to UNKNOWN_ACTION by default. If data.action is "REJECT", this is set to BLOCK. If data.action is "ACCEPT", this is set to ALLOW. |
| data.endTime | event.idm.read_only_udm.security_result.detection_fields | Value taken from data.endTime and added as a key-value pair where the key is "endTime". |
| data.startTime | event.idm.read_only_udm.security_result.detection_fields | Value taken from data.startTime and added as a key-value pair where the key is "startTime". |
| data.status | event.idm.read_only_udm.security_result.detection_fields | Value taken from data.status and added as a key-value pair where the key is "status". |
| data.version | event.idm.read_only_udm.security_result.detection_fields | Value taken from data.version and added as a key-value pair where the key is "version". |
| data.destinationAddress | event.idm.read_only_udm.target.ip | Value taken from data.destinationAddress. |
| data.destinationPort | event.idm.read_only_udm.target.port | Value taken from data.destinationPort and converted to an integer. |
| data.request.path | event.idm.read_only_udm.target.url | Value taken from data.request.path. |
| | event.idm.read_only_udm.metadata.product_name | Set to "ORACLE CLOUD AUDIT". |
| | event.idm.read_only_udm.metadata.vendor_name | Set to "ORACLE". |
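Two of the conversions described in the table can be illustrated in isolation. This is a sketch of the mapping behavior, not the actual parser; the function names are illustrative:

```python
def ip_protocol_name(number: int) -> str:
    """Numeric IANA protocol from data.protocol -> UDM ip_protocol string."""
    return {6: "TCP", 17: "UDP"}.get(number, "UNKNOWN")

def security_action(data_action: str) -> str:
    """data.action -> UDM security_result.action, defaulting to UNKNOWN_ACTION."""
    return {"REJECT": "BLOCK", "ACCEPT": "ALLOW"}.get(data_action, "UNKNOWN_ACTION")
```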