Collect Neo4j Aura logs
This document explains how to ingest Neo4j Aura logs into Google Security Operations (Google SecOps) by using Google Cloud Storage V2.
Neo4j Aura is a fully managed cloud graph database service. It provides security logs and query logs that can be forwarded to Google Cloud Logging for AuraDB Business Critical, AuraDB Virtual Dedicated Cloud, and AuraDS Enterprise tiers.
Before you begin
Ensure that you have the following prerequisites:
- A Google SecOps instance
- A GCP project with Cloud Storage API enabled
- Permissions to create and manage GCS buckets
- Permissions to manage IAM policies on GCS buckets
- Permissions to create Cloud Logging sinks
- A Neo4j Aura instance running on Google Cloud Platform
- The Neo4j Aura Project Admin role
- A Neo4j Aura tier that supports log forwarding (AuraDB Business Critical, AuraDB Virtual Dedicated Cloud, or AuraDS Enterprise)
Create Google Cloud Storage bucket
- Go to the Google Cloud Console.
- Select your project or create a new one.
- In the navigation menu, go to Cloud Storage > Buckets.
- Click Create bucket.
Provide the following configuration details:
| Setting | Value |
|---|---|
| Name your bucket | Enter a globally unique name (for example, `neo4j-aura-logs`) |
| Location type | Choose based on your needs (Region, Dual-region, or Multi-region) |
| Location | Select the location (for example, `us-central1`) |
| Storage class | Standard (recommended for frequently accessed logs) |
| Access control | Uniform (recommended) |
| Protection tools | Optional: enable object versioning or a retention policy |

Click Create.
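Bucket names must be globally unique and follow the GCS naming rules. The following sketch pre-checks a candidate name against the core rules (lowercase letters, digits, hyphens, and underscores; 3-63 characters; alphanumeric first and last character; no `goog` prefix). `is_valid_bucket_name` is a hypothetical helper for illustration, not part of any Google SDK, and dotted bucket names have additional constraints not covered here:

```python
import re

# Core GCS bucket-name rules (subset; dotted names have extra constraints).
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9_-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Pre-check a candidate bucket name before creating it in the console."""
    if name.startswith("goog"):
        return False
    return bool(_BUCKET_RE.fullmatch(name))

print(is_valid_bucket_name("neo4j-aura-logs"))   # True
print(is_valid_bucket_name("Neo4j-Aura-Logs"))   # False: uppercase not allowed
```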
Configure Neo4j Aura log forwarding to Google Cloud Logging
Neo4j Aura forwards logs to Google Cloud Logging in your GCP project. This configuration is done in the Neo4j Aura Console.
- Navigate to the Neo4j Aura Console.
- Sign in with your Neo4j account.
- In the sidebar, go to Settings > Log forwarding.
- Click Configure log forwarding.
- Select the scope for log forwarding:
- AuraDB Business Critical: Select a specific instance to forward its logs.
- AuraDB Virtual Dedicated Cloud: Select a region to forward logs from all instances in that region.
- AuraDS Enterprise: Select a region to forward logs from all instances in that region.
- Select the type of logs to forward:
- Security logs: Authentication events, authorization events, and security-related activities.
- Query logs: Cypher queries executed on the database.
- If you selected Query logs, expand the Filter section to configure optional filters:
- Remove start entries: Enable to forward only query end entries (reduces volume by approximately 50%).
- Include: Select All queries, Successful queries only, or Failed queries only.
- Follow the instructions in the wizard specific to Google Cloud Platform.
- Complete the wizard and click Create or Save.
- Wait for the status to change from Setting up to Forwarding.
Create Cloud Logging sink to export logs to GCS
After Neo4j Aura forwards logs to Google Cloud Logging, create a log sink to export those logs to your GCS bucket.
- In the GCP Console, go to Logging > Log Router.
- Click Create sink.
Provide the following configuration details:
| Setting | Value |
|---|---|
| Sink name | Enter a name (for example, `neo4j-aura-to-gcs`) |
| Sink description | Enter a description (for example, `Export Neo4j Aura logs to GCS for Chronicle`) |

Click Next.
In the Select sink service section, select Cloud Storage bucket.
In the Select Cloud Storage bucket dropdown, select the bucket you created (for example, `neo4j-aura-logs`).
Click Next.
In the Choose logs to include in sink section, build an inclusion filter to select Neo4j Aura logs.
For Neo4j Aura logs forwarded from the log forwarding feature, use the following filter:
```
resource.type="generic_node"
resource.labels.namespace="neo4j-aura"
```

Click Next.
Review the sink configuration.
Click Create sink.
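If you create sinks for several environments, the inclusion filter can be assembled programmatically before pasting it into the Log Router. A minimal sketch, assuming the filter syntax of the Cloud Logging query language; `build_inclusion_filter` is a hypothetical helper, not a Google API:

```python
def build_inclusion_filter(resource_type: str, namespace: str) -> str:
    """Build a Cloud Logging inclusion filter matching forwarded Neo4j Aura logs."""
    return (
        f'resource.type="{resource_type}" '
        f'resource.labels.namespace="{namespace}"'
    )

flt = build_inclusion_filter("generic_node", "neo4j-aura")
print(flt)  # resource.type="generic_node" resource.labels.namespace="neo4j-aura"
```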
Verify log export to GCS
- Wait 2-3 hours for the first log entries to appear in the GCS bucket.
- In the GCP Console, go to Cloud Storage > Buckets.
- Click your bucket name (for example, `neo4j-aura-logs`).
- Verify that log files are being created in the bucket.
Cloud Logging organizes logs in directory hierarchies by log type and date. The directory structure will be similar to:
```
neo4j-aura-logs/
├── security.log/
│   └── YYYY/
│       └── MM/
│           └── DD/
│               └── HH:MM:SS_HH:MM:SS_S0.json
└── query.log/
    └── YYYY/
        └── MM/
            └── DD/
                └── HH:MM:SS_HH:MM:SS_S0.json
```
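When post-processing exported objects (for example, to verify that both log types are arriving), the layout above can be parsed from an object path. A minimal sketch under the assumption that exports follow the `<log_type>/YYYY/MM/DD/<shard>.json` layout shown; `parse_export_path` is a hypothetical helper:

```python
from datetime import date

def parse_export_path(path: str):
    """Split a Cloud Logging GCS export path into (log_type, date, filename).

    Assumed layout: <log_type>/YYYY/MM/DD/<shard>.json
    """
    log_type, yyyy, mm, dd, filename = path.split("/")
    return log_type, date(int(yyyy), int(mm), int(dd)), filename

log_type, day, fname = parse_export_path(
    "security.log/2024/06/15/08:00:00_08:59:59_S0.json"
)
print(log_type, day)  # security.log 2024-06-15
```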
Retrieve the Google SecOps service account
Google SecOps uses a unique service account to read data from your GCS bucket. You must grant this service account access to your bucket.
Get the service account email
- Go to SIEM Settings > Feeds.
- Click Add New Feed.
- Click Configure a single feed.
- In the Feed name field, enter a name for the feed (for example, `Neo4j Aura Logs`).
- Select Google Cloud Storage V2 as the Source type.
- Select NEO4J as the Log type.
- Click Get Service Account.
A unique service account email is displayed. For example:
`chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com`
Copy this email address for use in a later step.
Click Next.
Specify values for the following input parameters:
- Storage bucket URL: Enter the GCS bucket URI (for example, `gs://neo4j-aura-logs/`). Replace `neo4j-aura-logs` with your actual bucket name.
- Source deletion option: Select the deletion option according to your preference:
  - Never: Never deletes any files after transfers (recommended for testing).
  - Delete transferred files: Deletes files after successful transfer.
  - Delete transferred files and empty directories: Deletes files and empty directories after successful transfer.
- Maximum File Age: Include files modified in the last number of days (default is 180 days).
- Asset namespace: The asset namespace.
- Ingestion labels: The label to be applied to the events from this feed.
Click Next.
Review your new feed configuration in the Finalize screen, and then click Submit.
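The Storage bucket URL parameter expects a `gs://` URI. A quick sanity check that extracts the bucket name from such a URI, useful when matching it against the bucket you created earlier, can be sketched as follows; `bucket_from_uri` is a hypothetical helper for illustration, not part of the feed API:

```python
def bucket_from_uri(uri: str) -> str:
    """Extract the bucket name from a gs:// URI used in the feed configuration."""
    if not uri.startswith("gs://"):
        raise ValueError(f"not a GCS URI: {uri!r}")
    # Drop the scheme, trim slashes, and keep only the first path segment.
    return uri[len("gs://"):].strip("/").split("/")[0]

print(bucket_from_uri("gs://neo4j-aura-logs/"))  # neo4j-aura-logs
```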
Grant IAM permissions to the Google SecOps service account
The Google SecOps service account needs the Storage Object Viewer role on your GCS bucket.
- Go to Cloud Storage > Buckets.
- Click your bucket name (for example, `neo4j-aura-logs`).
- Go to the Permissions tab.
- Click Grant access.
Provide the following configuration details:
- Add principals: Paste the Google SecOps service account email
- Assign roles: Select Storage Object Viewer
Click Save.
UDM mapping table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| id2 | about.resource.attribute.labels | Labels for the resource attributes |
| id3 | about.resource.attribute.labels | |
| timestamp_msg | metadata.event_timestamp | Event timestamp |
| event_type | metadata.event_type | Event type |
| recordDate | principal.asset.creation_time | Asset creation time |
| namespace | principal.domain.tech.product_object_id | Product object ID in the domain |
| properties.host_group | principal.hostname | Source hostname |
| msg | principal.process.command_line | Command line of the process |
| user_id | principal.user.userid | User ID of the principal |
| role | roles.name | Name of the role |
| security_result | security_result | Security result details |
| levelx | security_result.severity | Severity level |
| levelx | security_result.severity_details | Detailed severity information |
| properties.source | src.resource.name | Source resource name |
| properties.cluster_node | target.hostname | Destination hostname |
| database | target.resource.name | Target resource name |
| roles | target.user.attribute.roles | Roles associated with the target user |
| user_dest | target.user.userid | User ID of the target |
| metadata.product_name | metadata.product_name | Product name |
| metadata.vendor_name | metadata.vendor_name | Vendor name |
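A few of the mappings above can be sketched as a dictionary transform. This is an illustrative subset only: the sample record values are assumptions, and the actual Google SecOps parser handles many more fields and edge cases than shown here:

```python
def to_udm(record: dict) -> dict:
    """Map a handful of raw Neo4j Aura log fields to UDM paths (illustrative subset)."""
    return {
        "metadata": {
            "event_timestamp": record.get("timestamp_msg"),  # timestamp_msg -> metadata.event_timestamp
            "event_type": record.get("event_type"),          # event_type -> metadata.event_type
        },
        "principal": {
            "user": {"userid": record.get("user_id")},       # user_id -> principal.user.userid
            "process": {"command_line": record.get("msg")},  # msg -> principal.process.command_line
        },
        "target": {"resource": {"name": record.get("database")}},  # database -> target.resource.name
    }

# Hypothetical sample record, for illustration only.
sample = {
    "timestamp_msg": "2024-06-15T08:00:00Z",
    "event_type": "USER_LOGIN",
    "user_id": "neo4j",
    "msg": "MATCH (n) RETURN count(n)",
    "database": "neo4j",
}
udm = to_udm(sample)
print(udm["principal"]["user"]["userid"])  # neo4j
```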