Overview
Google Distributed Cloud uses Kubernetes audit logging, which keeps a chronological record of calls made to a cluster's Kubernetes API server. Audit logs are useful for investigating suspicious API requests and for collecting statistics.
You can configure a cluster to write audit logs to disk or to Cloud Audit Logs in a Google Cloud project. Writing to Cloud Audit Logs has several benefits over writing to disk, or even capturing logs in an on-premises logging system:
- Audit logs for all GKE clusters can be centralized.
- Log entries written to Cloud Audit Logs are immutable.
- Cloud Audit Logs entries are retained for 400 days.
- Cloud Audit Logs is included in the price of Anthos.
Disk-based audit logging
By default, audit logs are written to a persistent disk so that VM restarts and upgrades don't cause the logs to disappear. Google Distributed Cloud retains up to 12 GB of audit log entries.
Cloud Audit Logs
If you enable Cloud Audit Logs for a cluster, Admin Activity audit log entries from the cluster's Kubernetes API server are sent to Google Cloud, using the Google Cloud project that you specify in the cloudAuditLogging.projectId field of your cluster configuration file. This Google Cloud project is called your audit logging project. Your audit logging project must be the same as your connect project.
When you enable Cloud Audit Logs, Google Distributed Cloud disables disk-based audit logging.
To buffer and write log entries to Cloud Audit Logs, Google Distributed Cloud deploys an audit-proxy Pod to the admin cluster. This component also runs as a sidecar container on user clusters.
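After you enable Cloud Audit Logs, you can confirm that the buffering component is running by looking for the audit-proxy Pod by name. This is a sketch; the namespace and Pod naming may vary by version:

```shell
# Sketch: list Pods whose name contains "audit-proxy" across all
# namespaces. [ADMIN_CLUSTER_KUBECONFIG] is your admin cluster
# kubeconfig file.
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] get pods --all-namespaces \
  | grep audit-proxy
```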
Limitations
Cloud Audit Logs for Google Distributed Cloud is a preview feature. This preview release has several limitations:
Data access logging is not supported.
Modifying the Kubernetes audit policy is not supported.
Cloud Audit Logs is not resilient to extended network outages. If log entries cannot be exported to Google Cloud, they are cached in a 10 GB disk buffer. If that buffer fills, subsequent entries are dropped.
Enable the Anthos Audit API
Enable the Anthos Audit API in your audit logging project.
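If you prefer the command line, you can enable the API with gcloud. This is a sketch: the service name anthosaudit.googleapis.com is an assumption here, so confirm it in your project's API Library before running.

```shell
# Sketch: enable the Anthos Audit API in the audit logging project.
# The service name anthosaudit.googleapis.com is assumed; verify it
# in the API Library. [PROJECT_ID] is your audit logging project ID.
gcloud services enable anthosaudit.googleapis.com \
    --project [PROJECT_ID]
```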
Create a service account for Cloud Audit Logs
You already have one or more service accounts that you created to use with Google Distributed Cloud. For this feature, you need to create an additional service account called the audit logging service account.
Create your audit logging service account:
gcloud iam service-accounts create audit-logging-service-account
Create a JSON key file for your Cloud Audit Logs service account:
gcloud iam service-accounts keys create audit-logging-key.json \
    --iam-account AUDIT_LOGGING_SERVICE_ACCOUNT_EMAIL
where AUDIT_LOGGING_SERVICE_ACCOUNT_EMAIL is the email address of your service account.
Save audit-logging-key.json on the admin workstation in the same location as your other service account keys.
Create an admin cluster with Cloud Audit Logs enabled
You can enable Cloud Audit Logs for an admin cluster only when you first create the admin cluster. You cannot modify an existing admin cluster to enable Cloud Audit Logs.
Refer to Creating an admin cluster.
In your admin cluster configuration file, fill in the cloudAuditLogging section:
- Set cloudAuditLogging.projectId to the ID of your audit logging project.
- Set cloudAuditLogging.clusterLocation to a Google Cloud region where you want to store audit logs. For improved latency, choose a region that is near your on-premises data center.
- Set cloudAuditLogging.serviceAccountKeyPath to the path of the JSON key file for your audit logging service account.
For example:
cloudAuditLogging:
  projectId: "my-project"
  clusterLocation: "us-west1"
  serviceAccountKeyPath: "/my-key-folder/audit-logging-key.json"
Continue the cluster creation as usual.
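Cluster creation then proceeds with the usual gkectl workflow, for example (a sketch; available commands and flags may differ by version):

```shell
# Sketch: validate the configuration, then create the admin cluster.
# [ADMIN_CLUSTER_CONFIG] is the path to your admin cluster
# configuration file.
gkectl check-config --config [ADMIN_CLUSTER_CONFIG]
gkectl create admin --config [ADMIN_CLUSTER_CONFIG]
```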
Create a user cluster with Cloud Audit Logs enabled
Refer to Creating a user cluster.
In your user cluster configuration file, fill in the cloudAuditLogging section:
- Set cloudAuditLogging.projectId to the ID of your audit logging project.
- Set cloudAuditLogging.clusterLocation to a Google Cloud region where you want to store audit logs. For improved latency, choose a region that is near your on-premises data center.
- Set cloudAuditLogging.serviceAccountKeyPath to the path of the JSON key file for your audit logging service account.
- Ensure that the gkeConnect section is filled in and that gkeConnect.projectId is the same as cloudAuditLogging.projectId.
For example:
gkeConnect:
  projectId: "my-project"
  registerServiceAccountKeyPath: "/my-key-folder/connect-register-key.json"
cloudAuditLogging:
  projectId: "my-project"
  clusterLocation: "us-west1"
  serviceAccountKeyPath: "/my-key-folder/audit-logging-key.json"
Continue the cluster creation as usual.
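Creation then proceeds with the usual gkectl command, for example (a sketch; flags may differ by version):

```shell
# Sketch: create the user cluster with the configuration above.
# [ADMIN_CLUSTER_KUBECONFIG] is the admin cluster kubeconfig file and
# [USER_CLUSTER_CONFIG] is your user cluster configuration file.
gkectl create cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --config [USER_CLUSTER_CONFIG]
```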
Enable Cloud Audit Logs for an existing user cluster
To enable Cloud Audit Logs, the cluster must already be registered. That is, you filled in the gkeConnect section of the cluster configuration file before you created the cluster.
In the user cluster configuration file, fill in the cloudAuditLogging section:
- Set cloudAuditLogging.projectId to the ID of your audit logging project.
- Set cloudAuditLogging.clusterLocation to a Google Cloud region where you want to store audit logs. For improved latency, choose a region that is near your on-premises data center.
- Set cloudAuditLogging.serviceAccountKeyPath to the path of the JSON key file for your audit logging service account.
- Ensure that the gkeConnect section is filled in and that gkeConnect.projectId is the same as cloudAuditLogging.projectId.
For example:
gkeConnect:
  projectId: "my-project"
  registerServiceAccountKeyPath: "/my-key-folder/connect-register-key.json"
cloudAuditLogging:
  projectId: "my-project"
  clusterLocation: "us-west1"
  serviceAccountKeyPath: "/my-key-folder/audit-logging-key.json"
Update the user cluster:
gkectl update cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --config [USER_CLUSTER_CONFIG]
Disable Cloud Audit Logs for an existing user cluster
In the user cluster configuration file, delete the cloudAuditLogging section.
Update the user cluster:
gkectl update cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --config [USER_CLUSTER_CONFIG]
Access audit logs
Disk-based audit logging
View the Kubernetes API servers running in your admin cluster and all of its associated user clusters:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] get pods --all-namespaces -l component=kube-apiserver
where [ADMIN_CLUSTER_KUBECONFIG] is the kubeconfig file of your admin cluster.
Download the API server's audit logs:
kubectl cp -n [NAMESPACE] [APISERVER_POD_NAME]:/var/log/kube-audit/kube-apiserver-audit.log /tmp/kubeaudit.log
This command fetches the latest log file, which can contain up to 1 GB of data for the admin cluster and up to 850 GB for user clusters.
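Each line of the downloaded file is a JSON audit event in the Kubernetes audit.k8s.io schema. As a quick way to survey the file, you can pull a few fields out of each entry; this is a sketch, and the echoed line below stands in for a line of /tmp/kubeaudit.log:

```shell
# Sketch: print the verb, calling user, and request URI of each audit
# event. Pipe the real log file in place of the echoed sample line.
echo '{"kind":"Event","apiVersion":"audit.k8s.io/v1","verb":"list","user":{"username":"alice"},"requestURI":"/api/v1/pods"}' \
  | python3 -c 'import json, sys
for line in sys.stdin:
    e = json.loads(line)
    print(e["verb"], e["user"]["username"], e["requestURI"])'
```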
You can also find the audit logs for the admin cluster on the control-plane nodes under /var/log/kube-audit/kube-apiserver-audit.log. The audit logs for the user cluster are in the PersistentVolumeClaim named kube-audit-kube-apiserver-0. You can access this data within your own Pods via volumes entries like this:

For the admin cluster:

volumes:
- name: kube-audit
  hostPath:
    path: /var/log/kube-audit
    type: ""

For the user cluster:

volumes:
- name: kube-audit
  persistentVolumeClaim:
    claimName: kube-audit-kube-apiserver-0
    readOnly: true

To schedule your Pod on the appropriate admin cluster node (and only this node), add nodeSelector and tolerations sections to your Pod spec, like this:

spec:
  nodeSelector:
    node-role.kubernetes.io/master: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    value: ""
    effect: NoSchedule

For the user cluster, use this nodeSelector:

spec:
  nodeSelector:
    kubernetes.googleapis.com/cluster-name: [USER_CLUSTER_NAME]
Older audit records are kept in separate files. To view those files:
kubectl exec -n [NAMESPACE] [APISERVER_POD_NAME] -- ls /var/log/kube-audit -la
Each audit log's filename has a timestamp that indicates when the file was rotated. A file contains audit logs up to that time and date.
Cloud Audit Logs
Console
In the Google Cloud console, go to the Logs page in the Logging menu.
In the Filter by label or text search box, click the down arrow to open the drop-down menu. From the menu, choose Convert to advanced filter.
Fill the text box with the following filter:
resource.type="k8s_cluster"
logName="projects/[PROJECT_ID]/logs/externalaudit.googleapis.com%2Factivity"
protoPayload.serviceName="anthosgke.googleapis.com"
Click Submit Filter to display all audit logs from Google Distributed Cloud clusters that are configured to log to this project.
gcloud
List the first two log entries in your project's Admin Activity log that
apply to the k8s_cluster resource type:
gcloud logging read \
    'logName="projects/[PROJECT_ID]/logs/externalaudit.googleapis.com%2Factivity"
    AND resource.type="k8s_cluster"
    AND protoPayload.serviceName="anthosgke.googleapis.com"' \
    --limit 2 \
    --freshness 300d
where [PROJECT_ID] is your project ID.
The output shows two log entries. Notice that for each log entry, the
logName field has the value
projects/[PROJECT_ID]/logs/externalaudit.googleapis.com%2Factivity
and protoPayload.serviceName is equal to anthosgke.googleapis.com.
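The same filter can be narrowed with standard Cloud Audit Logs fields, for example to requests from a single caller. This is a sketch: protoPayload.authenticationInfo.principalEmail is a standard audit log field, and the email address below is only an example.

```shell
# Sketch: restrict the query to audit entries made by one user.
# [PROJECT_ID] is your audit logging project ID; replace the email
# address with a real caller.
gcloud logging read \
    'logName="projects/[PROJECT_ID]/logs/externalaudit.googleapis.com%2Factivity"
    AND resource.type="k8s_cluster"
    AND protoPayload.serviceName="anthosgke.googleapis.com"
    AND protoPayload.authenticationInfo.principalEmail="alice@example.com"' \
    --limit 2 \
    --freshness 300d
```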
Audit policy
Cloud Audit Logs behavior is determined by a statically configured Kubernetes audit logging policy. Changing this policy is currently not supported.