Tool: get_node_pool
Gets the details of a specific node pool within a GKE cluster.
The following sample demonstrates how to use curl to invoke the get_node_pool MCP tool.
Curl request:

```shell
curl --location 'https://container.googleapis.com/mcp' \
  --header 'content-type: application/json' \
  --header 'accept: application/json, text/event-stream' \
  --data '{
    "method": "tools/call",
    "params": {
      "name": "get_node_pool",
      "arguments": {
        // provide these details according to the tool's MCP specification
      }
    },
    "jsonrpc": "2.0",
    "id": 1
  }'
```
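The same JSON-RPC 2.0 payload can be built programmatically. A minimal sketch (the resource name used in the usage line is a hypothetical placeholder):

```python
import json

def build_get_node_pool_request(name: str, request_id: int = 1) -> dict:
    """Build the tools/call payload shown in the curl sample above."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_node_pool",
            "arguments": {"name": name},
        },
    }

# Hypothetical node pool name; serialize for the request body.
payload = build_get_node_pool_request(
    "projects/my-project/locations/us-central1/clusters/my-cluster/nodePools/default-pool"
)
body = json.dumps(payload)
```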
Input Schema
MCPGetNodePoolRequest retrieves a node pool for a cluster.
MCPGetNodePoolRequest
| JSON representation |
|---|
{ "name": string } |
| Fields | |
|---|---|
name |
Required. The name (project, location, cluster, node pool id) of the node pool to get. Specified in the format |
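The format string is truncated in the table above; GKE resource names of this kind commonly take the form projects/*/locations/*/clusters/*/nodePools/*. Assuming that form, a small helper:

```python
def node_pool_name(project: str, location: str, cluster: str, node_pool: str) -> str:
    """Assemble the node pool resource name (assumed format; verify against the spec)."""
    return f"projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}"
```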
Output Schema
NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e., VMs) with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down to accommodate the workload.
NodePool
| JSON representation |
|---|
{ "name": string, "config": { object ( |
| Fields | |
|---|---|
name |
The name of the node pool. |
config |
The node configuration of the pool. |
initialNodeCount |
The initial node count for the pool. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota. |
locations[] |
The list of Google Compute Engine zones in which the NodePool's nodes should be located. If this value is unspecified during node pool creation, the Cluster.Locations value will be used, instead. Warning: changing node pool locations will result in nodes being added and/or removed. |
networkConfig |
Networking configuration for this NodePool. If specified, it overrides the cluster-level defaults. |
selfLink |
Output only. Server-defined URL for the resource. |
version |
The version of Kubernetes running on this NodePool's nodes. If unspecified, it defaults as described here. |
instanceGroupUrls[] |
Output only. The resource URLs of the managed instance groups associated with this node pool. During the node pool blue-green upgrade operation, the URLs contain both blue and green resources. |
status |
Output only. The status of the nodes in this pool instance. |
statusMessage |
Output only. Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available. |
autoscaling |
Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present. |
management |
NodeManagement configuration for this NodePool. |
maxPodsConstraint |
The constraint on the maximum number of pods that can be run simultaneously on a node in the node pool. |
conditions[] |
Which conditions caused the current node pool state. |
podIpv4CidrSize |
Output only. The pod CIDR block size per node in this node pool. |
upgradeSettings |
Upgrade settings control disruption and speed of the upgrade. |
placementPolicy |
Specifies the node placement policy. |
updateInfo |
Output only. Update info contains relevant information during a node pool update. |
etag |
This checksum is computed by the server based on the value of node pool fields, and may be sent on update requests to ensure the client has an up-to-date value before proceeding. |
queuedProvisioning |
Specifies the configuration of queued provisioning. |
bestEffortProvisioning |
Enable best-effort provisioning for nodes. |
autopilotConfig |
Specifies the autopilot configuration for this node pool. This field is exclusively reserved for Cluster Autoscaler. |
nodeDrainConfig |
Specifies the node drain configuration for this node pool. |
NodeConfig
| JSON representation |
|---|
{ "machineType": string, "diskSizeGb": integer, "oauthScopes": [ string ], "serviceAccount": string, "metadata": { string: string, ... }, "imageType": string, "labels": { string: string, ... }, "localSsdCount": integer, "tags": [ string ], "preemptible": boolean, "accelerators": [ { object ( |
| Fields | |
|---|---|
machineType |
The name of a Google Compute Engine machine type. If unspecified, the default machine type is |
diskSizeGb |
Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB. |
oauthScopes[] |
The set of Google API scopes to be made available on all of the node VMs under the "default" service account. The following scopes are recommended, but not required, and by default are not included:
If unspecified, no scopes are added, unless Cloud Logging or Cloud Monitoring are enabled, in which case their required scopes will be added. |
serviceAccount |
The Google Cloud Platform Service Account to be used by the node VMs. Specify the email address of the Service Account; otherwise, if no Service Account is specified, the "default" service account is used. |
metadata |
The metadata key/value pairs assigned to instances in the cluster. Keys must conform to the regexp
Values are free-form strings, and only have meaning as interpreted by the image running in the instance. The only restriction placed on them is that each value's size must be less than or equal to 32 KB. The total size of all keys and values must be less than 512 KB. An object containing a list of |
imageType |
The image type to use for this node. Note that for a given image type, the latest version of it will be used. Please see https://cloud.google.com/kubernetes-engine/docs/concepts/node-images for available image types. |
labels |
The map of Kubernetes labels (key/value pairs) to be applied to each node. These will be added in addition to any default label(s) that Kubernetes may apply to the node. In case of conflict in label keys, the applied set may differ depending on the Kubernetes version -- it's best to assume the behavior is undefined and conflicts should be avoided. For more information, including usage and the valid values, see: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ An object containing a list of |
localSsdCount |
The number of local SSD disks to be attached to the node. The limit for this value is dependent upon the maximum number of disks available on a machine per zone. See: https://cloud.google.com/compute/docs/disks/local-ssd for more information. |
tags[] |
The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls and are specified by the client during cluster or node pool creation. Each tag within the list must comply with RFC1035. |
preemptible |
Whether the nodes are created as preemptible VM instances. See: https://cloud.google.com/compute/docs/instances/preemptible for more information about preemptible VM instances. |
accelerators[] |
A list of hardware accelerators to be attached to each node. See https://cloud.google.com/compute/docs/gpus for more information about support for GPUs. |
diskType |
Type of the disk attached to each node (e.g. 'pd-standard', 'pd-ssd', or 'pd-balanced'). If unspecified, the default disk type is 'pd-standard'. |
minCpuPlatform |
Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such as |
workloadMetadataConfig |
The workload metadata configuration for this node. |
taints[] |
List of kubernetes taints to be applied to each node. For more information, including usage and the valid values, see: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ |
sandboxConfig |
Sandbox configuration for this node. |
nodeGroup |
Setting this field will assign instances of this pool to run on the specified node group. This is useful for running workloads on sole tenant nodes. |
reservationAffinity |
The optional reservation affinity. Setting this field will apply the specified Zonal Compute Reservation to this node pool. |
shieldedInstanceConfig |
Shielded Instance options. |
linuxNodeConfig |
Parameters that can be configured on Linux nodes. |
kubeletConfig |
Node kubelet configs. |
bootDiskKmsKey |
The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption |
gcfsConfig |
Google Container File System (image streaming) configs. |
advancedMachineFeatures |
Advanced features for the Compute Engine VM. |
gvnic |
Enable or disable gvnic in the node pool. |
spot |
Spot flag for enabling Spot VM, which is a rebrand of the existing preemptible flag. |
confidentialNodes |
Confidential nodes config. All the nodes in the node pool will be Confidential VM once enabled. |
resourceLabels |
The resource labels for the node pool to use to annotate any related Google Compute Engine resources. An object containing a list of |
loggingConfig |
Logging configuration. |
windowsNodeConfig |
Parameters that can be configured on Windows nodes. |
localNvmeSsdBlockConfig |
Parameters for using raw-block Local NVMe SSDs. |
ephemeralStorageLocalSsdConfig |
Parameters for the node ephemeral storage using Local SSDs. If unspecified, ephemeral storage is backed by the boot disk. |
soleTenantConfig |
Parameters for node pools to be backed by shared sole tenant node groups. |
containerdConfig |
Parameters for containerd customization. |
resourceManagerTags |
A map of resource manager tag keys and values to be attached to the nodes. |
enableConfidentialStorage |
Optional. Reserved for future use. |
secondaryBootDisks[] |
List of secondary boot disks attached to the nodes. |
storagePools[] |
List of Storage Pools where boot disks are provisioned. |
maxRunDuration |
The maximum duration for the nodes to exist. If unspecified, the nodes can exist indefinitely. A duration in seconds with up to nine fractional digits, ending with 's'. |
effectiveCgroupMode |
Output only. effective_cgroup_mode is the cgroup mode actually used by the node pool. It is determined by the cgroup mode specified in the LinuxNodeConfig or the default cgroup mode based on the cluster creation version. |
bootDisk |
The boot disk configuration for the node pool. |
Union field
|
|
fastSocket |
Enable or disable NCCL fast socket for the node pool. |
Union field
|
|
secondaryBootDiskUpdateStrategy |
Secondary boot disk update strategy. |
Union field
|
|
localSsdEncryptionMode |
Specifies which method should be used for encrypting the Local SSDs attached to the node. |
Union field
|
|
flexStart |
Flex Start flag for enabling Flex Start VM. |
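A few of the NodeConfig constraints above can be checked client-side before sending an update: metadata sizes (each value at most 32 KB, all keys and values under 512 KB in total), RFC 1035 instance tags, and maxRunDuration's encoding (seconds with a trailing 's', assumed from the truncated description). A minimal sketch:

```python
import re

# RFC 1035 label: 1-63 chars, starts with a lowercase letter,
# ends with a letter or digit, hyphens allowed inside.
RFC1035_LABEL = re.compile(r"^[a-z]([-a-z0-9]{0,61}[a-z0-9])?$")

def validate_metadata(metadata: dict) -> None:
    """Raise ValueError if the documented metadata size limits are exceeded."""
    total = 0
    for key, value in metadata.items():
        if len(value.encode()) > 32 * 1024:
            raise ValueError(f"metadata value for {key!r} exceeds 32 KB")
        total += len(key.encode()) + len(value.encode())
    if total >= 512 * 1024:
        raise ValueError("total metadata size must be less than 512 KB")

def is_valid_tag(tag: str) -> bool:
    """Check one instance tag against the RFC 1035 label grammar."""
    return bool(RFC1035_LABEL.fullmatch(tag))

def format_max_run_duration(seconds: float) -> str:
    """Encode seconds as a Duration string like '3600s' (assumed encoding)."""
    return f"{seconds:.9f}".rstrip("0").rstrip(".") + "s"
```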
MetadataEntry
| JSON representation |
|---|
{ "key": string, "value": string } |
| Fields | |
|---|---|
key |
|
value |
|
LabelsEntry
| JSON representation |
|---|
{ "key": string, "value": string } |
| Fields | |
|---|---|
key |
|
value |
|
AcceleratorConfig
| JSON representation |
|---|
{ "acceleratorCount": string, "acceleratorType": string, "gpuPartitionSize": string, // Union field |
| Fields | |
|---|---|
acceleratorCount |
The number of the accelerator cards exposed to an instance. |
acceleratorType |
The accelerator type resource name. List of supported accelerators here |
gpuPartitionSize |
Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide. |
Union field
|
|
gpuSharingConfig |
The configuration for GPU sharing options. |
Union field
|
|
gpuDriverInstallationConfig |
The configuration for auto installation of GPU driver. |
GPUSharingConfig
| JSON representation |
|---|
{ "maxSharedClientsPerGpu": string, // Union field |
| Fields | |
|---|---|
maxSharedClientsPerGpu |
The max number of containers that can share a physical GPU. |
Union field
|
|
gpuSharingStrategy |
The type of GPU sharing strategy to enable on the GPU node. |
GPUDriverInstallationConfig
| JSON representation |
|---|
{ // Union field |
| Fields | |
|---|---|
Union field
|
|
gpuDriverVersion |
Mode for how the GPU driver is installed. |
WorkloadMetadataConfig
| JSON representation |
|---|
{
"mode": enum ( |
| Fields | |
|---|---|
mode |
Mode is the configuration for how to expose metadata to workloads running on the node pool. |
NodeTaint
| JSON representation |
|---|
{
"key": string,
"value": string,
"effect": enum ( |
| Fields | |
|---|---|
key |
Key for taint. |
value |
Value for taint. |
effect |
Effect for taint. |
SandboxConfig
| JSON representation |
|---|
{
"type": enum ( |
| Fields | |
|---|---|
type |
Type of the sandbox to use for the node. |
ReservationAffinity
| JSON representation |
|---|
{
"consumeReservationType": enum ( |
| Fields | |
|---|---|
consumeReservationType |
Corresponds to the type of reservation consumption. |
key |
Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, specify "compute.googleapis.com/reservation-name" as the key and specify the name of your reservation as its value. |
values[] |
Corresponds to the label value(s) of reservation resource(s). |
ShieldedInstanceConfig
| JSON representation |
|---|
{ "enableSecureBoot": boolean, "enableIntegrityMonitoring": boolean } |
| Fields | |
|---|---|
enableSecureBoot |
Defines whether the instance has Secure Boot enabled. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. |
enableIntegrityMonitoring |
Defines whether the instance has integrity monitoring enabled. Enables monitoring and attestation of the boot integrity of the instance. The attestation is performed against the integrity policy baseline. This baseline is initially derived from the implicitly trusted boot image when the instance is created. |
LinuxNodeConfig
| JSON representation |
|---|
{ "sysctls": { string: string, ... }, "cgroupMode": enum ( |
| Fields | |
|---|---|
sysctls |
The Linux kernel parameters to be applied to the nodes and all pods running on the nodes. The following parameters are supported: net.core.busy_poll, net.core.busy_read, net.core.netdev_max_backlog, net.core.rmem_max, net.core.rmem_default, net.core.wmem_default, net.core.wmem_max, net.core.optmem_max, net.core.somaxconn, net.ipv4.tcp_rmem, net.ipv4.tcp_wmem, net.ipv4.tcp_tw_reuse, net.ipv4.tcp_mtu_probing, net.ipv4.tcp_max_orphans, net.ipv4.tcp_max_tw_buckets, net.ipv4.tcp_syn_retries, net.ipv4.tcp_ecn, net.ipv4.tcp_congestion_control, net.netfilter.nf_conntrack_max, net.netfilter.nf_conntrack_buckets, net.netfilter.nf_conntrack_tcp_timeout_close_wait, net.netfilter.nf_conntrack_tcp_timeout_time_wait, net.netfilter.nf_conntrack_tcp_timeout_established, net.netfilter.nf_conntrack_acct, kernel.shmmni, kernel.shmmax, kernel.shmall, kernel.perf_event_paranoid, kernel.sched_rt_runtime_us, kernel.softlockup_panic, kernel.yama.ptrace_scope, kernel.kptr_restrict, kernel.dmesg_restrict, kernel.sysrq, fs.aio-max-nr, fs.file-max, fs.inotify.max_user_instances, fs.inotify.max_user_watches, fs.nr_open, vm.dirty_background_ratio, vm.dirty_background_bytes, vm.dirty_expire_centisecs, vm.dirty_ratio, vm.dirty_bytes, vm.dirty_writeback_centisecs, vm.max_map_count, vm.overcommit_memory, vm.overcommit_ratio, vm.vfs_cache_pressure, vm.swappiness, vm.watermark_scale_factor, vm.min_free_kbytes. An object containing a list of |
cgroupMode |
cgroup_mode specifies the cgroup mode to be used on the node. |
transparentHugepageEnabled |
Optional. Transparent hugepage support for anonymous memory can be entirely disabled (mostly for debugging purposes) or only enabled inside MADV_HUGEPAGE regions (to avoid the risk of consuming more memory resources) or enabled system wide. See https://docs.kernel.org/admin-guide/mm/transhuge.html for more details. |
transparentHugepageDefrag |
Optional. Defines the transparent hugepage defrag configuration on the node. VM hugepage allocation can be managed by either limiting defragmentation for delayed allocation or skipping it entirely for immediate allocation only. See https://docs.kernel.org/admin-guide/mm/transhuge.html for more details. |
nodeKernelModuleLoading |
Optional. Configuration for kernel module loading on nodes. When enabled, the node pool will be provisioned with a Container-Optimized OS image that enforces kernel module signature verification. |
Union field
|
|
hugepages |
Optional. Amounts for 2M and 1G hugepages |
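The sysctls allowlist above can be enforced client-side before building a LinuxNodeConfig. A minimal sketch; only a small sample of the documented keys is included here for brevity:

```python
# Sample of the supported sysctl keys listed in the sysctls field above.
SUPPORTED_SYSCTLS = {
    "net.core.somaxconn",
    "net.ipv4.tcp_tw_reuse",
    "net.netfilter.nf_conntrack_max",
    "kernel.shmmax",
    "fs.file-max",
    "vm.max_map_count",
}

def unsupported_sysctls(sysctls: dict) -> list:
    """Return the keys that are not in the supported set, sorted."""
    return sorted(k for k in sysctls if k not in SUPPORTED_SYSCTLS)
```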
SysctlsEntry
| JSON representation |
|---|
{ "key": string, "value": string } |
| Fields | |
|---|---|
key |
|
value |
|
HugepagesConfig
| JSON representation |
|---|
{ // Union field |
| Fields | |
|---|---|
Union field
|
|
hugepageSize2m |
Optional. Amount of 2M hugepages |
Union field
|
|
hugepageSize1g |
Optional. Amount of 1G hugepages |
NodeKernelModuleLoading
| JSON representation |
|---|
{
"policy": enum ( |
| Fields | |
|---|---|
policy |
Set the node module loading policy for nodes in the node pool. |
NodeKubeletConfig
| JSON representation |
|---|
{ "cpuManagerPolicy": string, "topologyManager": { object ( |
| Fields | |
|---|---|
cpuManagerPolicy |
Control the CPU management policy on the node. See https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/ The following values are allowed. * "none": the default, which represents the existing scheduling behavior. * "static": allows pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node. The default value is 'none' if unspecified. |
topologyManager |
Optional. Controls Topology Manager configuration on the node. For more information, see: https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/ |
memoryManager |
Optional. Controls NUMA-aware Memory Manager configuration on the node. For more information, see: https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/ |
cpuCfsQuota |
Enable CPU CFS quota enforcement for containers that specify CPU limits. This option is enabled by default, which makes the kubelet use CFS quota (https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt) to enforce container CPU limits. Otherwise, CPU limits will not be enforced at all. Disable this option to mitigate CPU throttling problems while still keeping your pods in the Guaranteed QoS class by specifying CPU limits. The default value is 'true' if unspecified. |
cpuCfsQuotaPeriod |
Set the CPU CFS quota period value 'cpu.cfs_period_us'. The string must be a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". The value must be a positive duration between 1ms and 1 second, inclusive. |
podPidsLimit |
Set the Pod PID limits. See https://kubernetes.io/docs/concepts/policy/pid-limiting/#pod-pid-limits Controls the maximum number of processes allowed to run in a pod. The value must be greater than or equal to 1024 and less than 4194304. |
imageGcLowThresholdPercent |
Optional. Defines the percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. The percent is calculated as this field value out of 100. The value must be between 10 and 85, inclusive and smaller than image_gc_high_threshold_percent. The default value is 80 if unspecified. |
imageGcHighThresholdPercent |
Optional. Defines the percent of disk usage after which image garbage collection is always run. The percent is calculated as this field value out of 100. The value must be between 10 and 85, inclusive and greater than image_gc_low_threshold_percent. The default value is 85 if unspecified. |
imageMinimumGcAge |
Optional. Defines the minimum age for an unused image before it is garbage collected. The string must be a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300s", "1.5h", and "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". The value must be a positive duration less than or equal to 2 minutes. The default value is "2m0s" if unspecified. |
imageMaximumGcAge |
Optional. Defines the maximum age an image can be unused before it is garbage collected. The string must be a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300s", "1.5h", and "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". The value must be a positive duration greater than image_minimum_gc_age or "0s". The default value is "0s" if unspecified, which disables this field, meaning images won't be garbage collected based on being unused for too long. |
containerLogMaxSize |
Optional. Defines the maximum size of the container log file before it is rotated. See https://kubernetes.io/docs/concepts/cluster-administration/logging/#log-rotation Valid format is positive number + unit, e.g. 100Ki, 10Mi. Valid units are Ki, Mi, Gi. The value must be between 10Mi and 500Mi, inclusive. Note that the total container log size (container_log_max_size * container_log_max_files) cannot exceed 1% of the total storage of the node, to avoid disk pressure caused by log files. The default value is 10Mi if unspecified. |
containerLogMaxFiles |
Optional. Defines the maximum number of container log files that can be present for a container. See https://kubernetes.io/docs/concepts/cluster-administration/logging/#log-rotation The value must be an integer between 2 and 10, inclusive. The default value is 5 if unspecified. |
allowedUnsafeSysctls[] |
Optional. Defines a comma-separated allowlist of unsafe sysctls or sysctl patterns (ending in *). The unsafe namespaced sysctl groups are kernel.shm*, kernel.msg*, kernel.sem, fs.mqueue.*, and net.*. To allow certain sysctls or sysctl patterns to be set on Pods, list them separated by commas. See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for more details. |
evictionSoft |
Optional. eviction_soft is a map of signal names to quantities that defines soft eviction thresholds. Each signal is compared to its corresponding threshold to determine if a pod eviction should occur. |
evictionSoftGracePeriod |
Optional. eviction_soft_grace_period is a map of signal names to quantities that defines grace periods for each soft eviction signal. The grace period is the amount of time that a pod must be under pressure before an eviction occurs. |
evictionMinimumReclaim |
Optional. eviction_minimum_reclaim is a map of signal names to quantities that defines minimum reclaims, which describe the minimum amount of a given resource the kubelet will reclaim when performing a pod eviction while that resource is under pressure. |
evictionMaxPodGracePeriodSeconds |
Optional. eviction_max_pod_grace_period_seconds is the maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. This value effectively caps the Pod's terminationGracePeriodSeconds value during soft evictions. Default: 0. Range: [0, 300]. |
maxParallelImagePulls |
Optional. Defines the maximum number of image pulls in parallel. The range is 2 to 5, inclusive. The default value is 2 or 3 depending on the disk type. See https://kubernetes.io/docs/concepts/containers/images/#maximum-parallel-image-pulls for more details. |
Union field
|
|
insecureKubeletReadonlyPortEnabled |
Enable or disable Kubelet read only port. |
Union field
|
|
singleProcessOomKill |
Optional. Defines whether to enable single process OOM killer. If true, will prevent the memory.oom.group flag from being set for container cgroups in cgroups v2. This causes processes in the container to be OOM killed individually instead of as a group. |
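Several NodeKubeletConfig fields above carry explicit ranges (cpuCfsQuotaPeriod between 1ms and 1s, podPidsLimit at least 1024 and below 4194304, containerLogMaxFiles between 2 and 10). A minimal validation sketch; duration parsing is simplified and the field names mirror the table above:

```python
import re

# Go-style duration units, as described for cpuCfsQuotaPeriod above.
_UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_duration(value: str) -> float:
    """Parse a duration like '300ms' or '0.5s' into seconds."""
    total = 0.0
    for num, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|ms|s|m|h)", value):
        total += float(num) * _UNITS[unit]
    return total

def validate_kubelet_config(cfg: dict) -> list:
    """Return a list of violations of the documented ranges."""
    errors = []
    period = cfg.get("cpuCfsQuotaPeriod")
    if period is not None and not (0.001 <= parse_duration(period) <= 1.0):
        errors.append("cpuCfsQuotaPeriod must be between 1ms and 1s")
    pids = cfg.get("podPidsLimit")
    if pids is not None and not (1024 <= pids < 4194304):
        errors.append("podPidsLimit must be >= 1024 and < 4194304")
    files = cfg.get("containerLogMaxFiles")
    if files is not None and not (2 <= files <= 10):
        errors.append("containerLogMaxFiles must be between 2 and 10")
    return errors
```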
TopologyManager
| JSON representation |
|---|
{ "policy": string, "scope": string } |
| Fields | |
|---|---|
policy |
Configures the strategy for resource alignment. Allowed values are:
The default policy value is 'none' if unspecified. Details about each strategy can be found here. |
scope |
The Topology Manager aligns resources in following scopes:
The default scope is 'container' if unspecified. See https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/#topology-manager-scopes |
MemoryManager
| JSON representation |
|---|
{ "policy": string } |
| Fields | |
|---|---|
policy |
Controls the memory management policy on the Node. See https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/#policies The following values are allowed. * "none" * "static" The default value is 'none' if unspecified. |
BoolValue
| JSON representation |
|---|
{ "value": boolean } |
| Fields | |
|---|---|
value |
The bool value. |
EvictionSignals
| JSON representation |
|---|
{ "memoryAvailable": string, "nodefsAvailable": string, "nodefsInodesFree": string, "imagefsAvailable": string, "imagefsInodesFree": string, "pidAvailable": string } |
| Fields | |
|---|---|
memoryAvailable |
Optional. Memory available (i.e. capacity - workingSet), in bytes. Defines the amount of "memory.available" signal in kubelet. Default is unset, if not specified in the kubelet config. Format: positive number + unit, e.g. 100Ki, 10Mi, 5Gi. Valid units are Ki, Mi, Gi. Must be >= 100Mi and <= 50% of the node's memory. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
nodefsAvailable |
Optional. Amount of storage available on the filesystem that the kubelet uses for volumes, daemon logs, etc. Defines the amount of "nodefs.available" signal in kubelet. Default is unset, if not specified in the kubelet config. It takes a percentage value for now. Sample format: "30%". Must be >= 10% and <= 50%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
nodefsInodesFree |
Optional. Amount of inodes available on the filesystem that the kubelet uses for volumes, daemon logs, etc. Defines the amount of "nodefs.inodesFree" signal in kubelet. Default is unset, if not specified in the kubelet config. Linux only. It takes a percentage value for now. Sample format: "30%". Must be >= 5% and <= 50%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
imagefsAvailable |
Optional. Amount of storage available on the filesystem that the container runtime uses for storing image layers. If the container filesystem and image filesystem are not separate, then imagefs can store both image layers and writeable layers. Defines the amount of "imagefs.available" signal in kubelet. Default is unset, if not specified in the kubelet config. It takes a percentage value for now. Sample format: "30%". Must be >= 15% and <= 50%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
imagefsInodesFree |
Optional. Amount of inodes available on the filesystem that the container runtime uses for storing image layers. Defines the amount of "imagefs.inodesFree" signal in kubelet. Default is unset, if not specified in the kubelet config. Linux only. It takes a percentage value for now. Sample format: "30%". Must be >= 5% and <= 50%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
pidAvailable |
Optional. Amount of PIDs available for pod allocation. Defines the amount of "pid.available" signal in kubelet. Default is unset, if not specified in the kubelet config. It takes a percentage value for now. Sample format: "30%". Must be >= 10% and <= 50%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
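Each percentage-based eviction signal above has a documented inclusive range. A minimal client-side checker (field names mirror the table; memoryAvailable is byte-based and omitted here):

```python
# Documented inclusive percentage ranges per signal.
RANGES = {
    "nodefsAvailable": (10, 50),
    "nodefsInodesFree": (5, 50),
    "imagefsAvailable": (15, 50),
    "imagefsInodesFree": (5, 50),
    "pidAvailable": (10, 50),
}

def validate_eviction_signals(signals: dict) -> list:
    """Return violations of the documented percentage ranges."""
    errors = []
    for field, (lo, hi) in RANGES.items():
        raw = signals.get(field)
        if raw is None:
            continue
        if not raw.endswith("%"):
            errors.append(f"{field} must be a percentage, e.g. '30%'")
            continue
        pct = float(raw.rstrip("%"))
        if not lo <= pct <= hi:
            errors.append(f"{field} must be between {lo}% and {hi}%")
    return errors
```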
EvictionGracePeriod
| JSON representation |
|---|
{ "memoryAvailable": string, "nodefsAvailable": string, "nodefsInodesFree": string, "imagefsAvailable": string, "imagefsInodesFree": string, "pidAvailable": string } |
| Fields | |
|---|---|
memoryAvailable |
Optional. Grace period for eviction due to memory available signal. Sample format: "10s". Must be >= 0. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
nodefsAvailable |
Optional. Grace period for eviction due to nodefs available signal. Sample format: "10s". Must be >= 0. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
nodefsInodesFree |
Optional. Grace period for eviction due to nodefs inodes free signal. Sample format: "10s". Must be >= 0. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
imagefsAvailable |
Optional. Grace period for eviction due to imagefs available signal. Sample format: "10s". Must be >= 0. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
imagefsInodesFree |
Optional. Grace period for eviction due to imagefs inodes free signal. Sample format: "10s". Must be >= 0. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
pidAvailable |
Optional. Grace period for eviction due to pid available signal. Sample format: "10s". Must be >= 0. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
EvictionMinimumReclaim
| JSON representation |
|---|
{ "memoryAvailable": string, "nodefsAvailable": string, "nodefsInodesFree": string, "imagefsAvailable": string, "imagefsInodesFree": string, "pidAvailable": string } |
| Fields | |
|---|---|
memoryAvailable |
Optional. Minimum reclaim for eviction due to memory available signal. Only takes a percentage value for now. Sample format: "10%". Must be <= 10%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
nodefsAvailable |
Optional. Minimum reclaim for eviction due to nodefs available signal. Only takes a percentage value for now. Sample format: "10%". Must be <= 10%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
nodefsInodesFree |
Optional. Minimum reclaim for eviction due to nodefs inodes free signal. Only takes a percentage value for now. Sample format: "10%". Must be <= 10%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
imagefsAvailable |
Optional. Minimum reclaim for eviction due to imagefs available signal. Only takes a percentage value for now. Sample format: "10%". Must be <= 10%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
imagefsInodesFree |
Optional. Minimum reclaim for eviction due to imagefs inodes free signal. Only takes a percentage value for now. Sample format: "10%". Must be <= 10%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
pidAvailable |
Optional. Minimum reclaim for eviction due to pid available signal. Only takes a percentage value for now. Sample format: "10%". Must be <= 10%. See https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals |
GcfsConfig
| JSON representation |
|---|
{ "enabled": boolean } |
| Fields | |
|---|---|
enabled |
Whether to use GCFS. |
AdvancedMachineFeatures
| JSON representation |
|---|
{ // Union field |
| Fields | |
|---|---|
Union field
|
|
threadsPerCore |
The number of threads per physical core. To disable simultaneous multithreading (SMT) set this to 1. If unset, the maximum number of threads supported per core by the underlying processor is assumed. |
Union field
|
|
enableNestedVirtualization |
Whether or not to enable nested virtualization (defaults to false). |
Union field
|
|
performanceMonitoringUnit |
Type of Performance Monitoring Unit (PMU) requested on node pool instances. If unset, PMU will not be available to the node. |
VirtualNIC
| JSON representation |
|---|
{ "enabled": boolean } |
| Fields | |
|---|---|
enabled |
Whether gVNIC features are enabled in the node pool. |
ConfidentialNodes
| JSON representation |
|---|
{
"enabled": boolean,
"confidentialInstanceType": enum ( |
| Fields | |
|---|---|
enabled |
Whether Confidential Nodes feature is enabled. |
confidentialInstanceType |
Defines the type of technology used by the confidential node. |
FastSocket
| JSON representation |
|---|
{ "enabled": boolean } |
| Fields | |
|---|---|
enabled |
Whether Fast Socket features are enabled in the node pool. |
ResourceLabelsEntry
| JSON representation |
|---|
{ "key": string, "value": string } |
| Fields | |
|---|---|
key |
|
value |
|
NodePoolLoggingConfig
| JSON representation |
|---|
{
"variantConfig": {
object ( |
| Fields | |
|---|---|
variantConfig |
Logging variant configuration. |
LoggingVariantConfig
| JSON representation |
|---|
{
"variant": enum ( |
| Fields | |
|---|---|
variant |
Logging variant deployed on nodes. |
WindowsNodeConfig
| JSON representation |
|---|
{
"osVersion": enum ( |
| Fields | |
|---|---|
osVersion |
OSVersion specifies the Windows node config to be used on the node. |
LocalNvmeSsdBlockConfig
| JSON representation |
|---|
{ "localSsdCount": integer } |
| Fields | |
|---|---|
localSsdCount |
Number of local NVMe SSDs to use. The limit for this value depends on the maximum number of disks available on a machine per zone. See https://cloud.google.com/compute/docs/disks/local-ssd for more information. A zero (or unset) value has different meanings depending on the machine type being used: 1. For pre-Gen3 machines, which support flexible numbers of local SSDs, zero (or unset) means to disable using local SSDs as ephemeral storage. 2. For Gen3 machines, which dictate a specific number of local SSDs, zero (or unset) means to use the default number of local SSDs that goes with that machine type. For example, for a c3-standard-8-lssd machine, 2 local SSDs would be provisioned. For c3-standard-8 (which doesn't support local SSDs), 0 will be provisioned. See https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds for more info. |
EphemeralStorageLocalSsdConfig
| JSON representation |
|---|
{ "localSsdCount": integer, "dataCacheCount": integer } |
| Fields | |
|---|---|
localSsdCount |
Number of local SSDs to use to back ephemeral storage. Uses NVMe interfaces. The limit for this value depends on the maximum number of disks available on a machine per zone. See https://cloud.google.com/compute/docs/disks/local-ssd for more information. A zero (or unset) value has different meanings depending on the machine type being used: 1. For pre-Gen3 machines, which support flexible numbers of local SSDs, zero (or unset) means to disable using local SSDs as ephemeral storage. 2. For Gen3 machines, which dictate a specific number of local SSDs, zero (or unset) means to use the default number of local SSDs that goes with that machine type. For example, for a c3-standard-8-lssd machine, 2 local SSDs would be provisioned. For c3-standard-8 (which doesn't support local SSDs), 0 will be provisioned. See https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds for more info. |
dataCacheCount |
Number of local SSDs to use for GKE Data Cache. |
SoleTenantConfig
| JSON representation |
|---|
{ "nodeAffinities": [ { object ( |
| Fields | |
|---|---|
nodeAffinities[] |
NodeAffinities used to match to a shared sole tenant node group. |
Union field
|
|
minNodeCpus |
Optional. The minimum number of virtual CPUs this instance will consume when running on a sole-tenant node. This field can only be set if the node pool is created in a shared sole-tenant node group. |
NodeAffinity
| JSON representation |
|---|
{
"key": string,
"operator": enum ( |
| Fields | |
|---|---|
key |
Key for NodeAffinity. |
operator |
Operator for NodeAffinity. |
values[] |
Values for NodeAffinity. |
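For example, a SoleTenantConfig that schedules nodes onto a shared sole-tenant node group could use a node affinity like the sketch below. The affinity key shown is the standard Compute Engine node-group label, but the group name is a hypothetical placeholder; the `operator` value is taken from the NodeAffinity operator enum.

```json
{
  "nodeAffinities": [
    {
      "key": "compute.googleapis.com/node-group-name",
      "operator": "IN",
      "values": ["my-sole-tenant-group"]
    }
  ]
}
```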
ContainerdConfig
| JSON representation |
|---|
{ "privateRegistryAccessConfig": { object ( |
| Fields | |
|---|---|
privateRegistryAccessConfig |
PrivateRegistryAccessConfig is used to configure access configuration for private container registries. |
writableCgroups |
Optional. WritableCgroups defines writable cgroups configuration for the node pool. |
registryHosts[] |
RegistryHostConfig configures containerd registry host configuration. Each registry_hosts represents a hosts.toml file. At most 25 registry_hosts are allowed. |
PrivateRegistryAccessConfig
| JSON representation |
|---|
{
"enabled": boolean,
"certificateAuthorityDomainConfig": [
{
object ( |
| Fields | |
|---|---|
enabled |
Private registry access is enabled. |
certificateAuthorityDomainConfig[] |
Private registry access configuration. |
CertificateAuthorityDomainConfig
| JSON representation |
|---|
{ "fqdns": [ string ], // Union field |
| Fields | |
|---|---|
fqdns[] |
List of fully qualified domain names (FQDN). Specifying port is supported. Wildcards are NOT supported. Examples: - my.customdomain.com - 10.0.1.2:5000 |
Union field certificate_config. Certificate access config. The following are supported: - GCPSecretManagerCertificateConfig certificate_config can be only one of the following: |
|
gcpSecretManagerCertificateConfig |
Secret Manager certificate configuration. |
GCPSecretManagerCertificateConfig
| JSON representation |
|---|
{ "secretUri": string } |
| Fields | |
|---|---|
secretUri |
Secret URI, in the form "projects/$PROJECT_ID/secrets/$SECRET_NAME/versions/$VERSION". Version can be fixed (e.g. "2") or "latest" |
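Putting the pieces together, a minimal PrivateRegistryAccessConfig sketch might look like the following. The FQDNs reuse the examples from the field description above; the project, secret name, and version in `secretUri` are hypothetical placeholders.

```json
{
  "enabled": true,
  "certificateAuthorityDomainConfig": [
    {
      "fqdns": ["my.customdomain.com", "10.0.1.2:5000"],
      "gcpSecretManagerCertificateConfig": {
        "secretUri": "projects/my-project/secrets/registry-ca/versions/latest"
      }
    }
  ]
}
```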
WritableCgroups
| JSON representation |
|---|
{ "enabled": boolean } |
| Fields | |
|---|---|
enabled |
Optional. Whether writable cgroups is enabled. |
RegistryHostConfig
| JSON representation |
|---|
{
"server": string,
"hosts": [
{
object ( |
| Fields | |
|---|---|
server |
Defines the host name of the registry server, which will be used to create configuration file as /etc/containerd/hosts.d/ |
hosts[] |
HostConfig configures a list of host-specific configurations for the server. Each server can have at most 10 host configurations. |
HostConfig
| JSON representation |
|---|
{ "host": string, "capabilities": [ enum ( |
| Fields | |
|---|---|
host |
Host configures the registry host/mirror. It supports fully qualified domain names (FQDN) and IP addresses; specifying a port is supported. Wildcards are NOT supported. Examples: - my.customdomain.com - 10.0.1.2:5000 |
capabilities[] |
Capabilities represent the capabilities of the registry host, specifying what operations a host is capable of performing. If not set, containerd enables all capabilities by default. |
overridePath |
OverridePath is used to indicate the host's API root endpoint is defined in the URL path rather than by the API specification. This may be used with non-compliant OCI registries which are missing the /v2 prefix. If not set, containerd sets default false. |
header[] |
Header configures the registry host headers. |
ca[] |
CA configures the registry host certificate. |
client[] |
Client configures the registry host client certificate and key. |
dialTimeout |
Specifies the maximum duration allowed for a connection attempt to complete. A shorter timeout helps reduce delays when falling back to the original registry if the mirror is unreachable. Maximum allowed value is 180s. If not set, containerd sets default 30s. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
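A RegistryHostConfig that mirrors pulls for a public registry through a private mirror might be sketched as follows. The server and mirror host names are hypothetical, and the `capabilities` values shown are assumed enum names modeled on containerd's pull/resolve host capabilities; verify them against the capabilities enum reference.

```json
{
  "server": "registry-1.docker.io",
  "hosts": [
    {
      "host": "my-mirror.example.com:5000",
      "capabilities": ["PULL", "RESOLVE"],
      "dialTimeout": "10s"
    }
  ]
}
```

Each such entry corresponds to one hosts.toml file on the node, with the mirror tried before the original `server`.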
RegistryHeader
| JSON representation |
|---|
{ "key": string, "value": [ string ] } |
| Fields | |
|---|---|
key |
Key configures the header key. |
value[] |
Value configures the header value. |
CertificateConfig
| JSON representation |
|---|
{ // Union field |
| Fields | |
|---|---|
Union field certificate. One of the methods to configure the certificate. certificate can be only one of the following: |
|
gcpSecretManagerSecretUri |
The URI configures a secret from Secret Manager in the format "projects/$PROJECT_ID/secrets/$SECRET_NAME/versions/$VERSION" for global secret or "projects/$PROJECT_ID/locations/$REGION/secrets/$SECRET_NAME/versions/$VERSION" for regional secret. Version can be fixed (e.g. "2") or "latest" |
CertificateConfigPair
| JSON representation |
|---|
{ "cert": { object ( |
| Fields | |
|---|---|
cert |
Cert configures the client certificate. |
key |
Key configures the client private key. Optional. |
Duration
| JSON representation |
|---|
{ "seconds": string, "nanos": integer } |
| Fields | |
|---|---|
seconds |
Signed seconds of the span of time. Must be from -315,576,000,000 to +315,576,000,000 inclusive. Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years |
nanos |
Signed fractions of a second at nanosecond resolution of the span of time. Durations less than one second are represented with a 0 `seconds` field and a positive or negative `nanos` field. For durations of one second or more, a non-zero value for the `nanos` field must be of the same sign as the `seconds` field. Must be from -999,999,999 to +999,999,999 inclusive. |
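As a concrete illustration, a span of 3.5 seconds decomposes into whole seconds plus a nanosecond fraction (note that `seconds` is int64 and therefore serialized as a JSON string):

```json
{ "seconds": "3", "nanos": 500000000 }
```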
ResourceManagerTags
| JSON representation |
|---|
{ "tags": { string: string, ... } } |
| Fields | |
|---|---|
tags |
TagKeyValue must be in one of the following formats ([KEY]=[VALUE]): 1. `tagKeys/{tag_key_id}=tagValues/{tag_value_id}` 2. `{org_id}/{tag_key_name}={tag_value_name}` 3. `{project_id}/{tag_key_name}={tag_value_name}`. An object containing a list of `"key": value` pairs. |
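For instance, a ResourceManagerTags object might combine a numeric-ID pair with a project-scoped name pair as sketched below. The tag IDs, project ID, and key/value names are all hypothetical placeholders following Resource Manager tag naming conventions.

```json
{
  "tags": {
    "tagKeys/123456789012": "tagValues/987654321098",
    "my-project/environment": "production"
  }
}
```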
TagsEntry
| JSON representation |
|---|
{ "key": string, "value": string } |
| Fields | |
|---|---|
key |
|
value |
|
SecondaryBootDisk
| JSON representation |
|---|
{
"mode": enum ( |
| Fields | |
|---|---|
mode |
Disk mode (container image cache, etc.) |
diskImage |
Fully-qualified resource ID for an existing disk image. |
BootDisk
| JSON representation |
|---|
{ "diskType": string, "sizeGb": string, "provisionedIops": string, "provisionedThroughput": string } |
| Fields | |
|---|---|
diskType |
Disk type of the boot disk (e.g. Hyperdisk-Balanced, PD-Balanced, etc.) |
sizeGb |
Disk size in GB. Replaces NodeConfig.disk_size_gb |
provisionedIops |
For Hyperdisk-Balanced only, the provisioned IOPS config value. |
provisionedThroughput |
For Hyperdisk-Balanced only, the provisioned throughput config value. |
NodeNetworkConfig
| JSON representation |
|---|
{ "createPodRange": boolean, "podRange": string, "podIpv4CidrBlock": string, "podCidrOverprovisionConfig": { object ( |
| Fields | |
|---|---|
createPodRange |
Input only. Whether to create a new range for pod IPs in this node pool. Defaults are provided for `pod_range` and `pod_ipv4_cidr_block` if they are not specified. If neither `create_pod_range` nor `pod_range` are specified, the cluster-level default is used. Only applicable if `ip_allocation_policy.use_ip_aliases` is true. This field cannot be changed after the node pool has been created. |
podRange |
The ID of the secondary range for pod IPs. If `create_pod_range` is true, this ID is used for the new range. If `create_pod_range` is false, uses an existing secondary range with this ID. Only applicable if `ip_allocation_policy.use_ip_aliases` is true. This field cannot be changed after the node pool has been created. |
podIpv4CidrBlock |
The IP address range for pod IPs in this node pool. Only applicable if `create_pod_range` is true. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. `/14`) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. `10.96.0.0/14`) to pick a specific range to use. Only applicable if `ip_allocation_policy.use_ip_aliases` is true. This field cannot be changed after the node pool has been created. |
podCidrOverprovisionConfig |
[PRIVATE FIELD] Pod CIDR size overprovisioning config for the nodepool. Pod CIDR size per node depends on max_pods_per_node. By default, the value of max_pods_per_node is rounded off to next power of 2 and we then double that to get the size of pod CIDR block per node. Example: max_pods_per_node of 30 would result in 64 IPs (/26). This config can disable the doubling of IPs (we still round off to next power of 2) Example: max_pods_per_node of 30 will result in 32 IPs (/27) when overprovisioning is disabled. |
additionalNodeNetworkConfigs[] |
We specify the additional node networks for this node pool using this list. Each node network corresponds to an additional interface |
additionalPodNetworkConfigs[] |
We specify the additional pod networks for this node pool using this list. Each pod network corresponds to an additional alias IP range for the node |
podIpv4RangeUtilization |
Output only. The utilization of the IPv4 range for the pod. The ratio is Usage/[Total number of IPs in the secondary range], Usage=numNodes*numZones*podIPsPerNode. |
subnetwork |
The subnetwork path for the node pool. Format: projects/{project}/regions/{region}/subnetworks/{subnetwork} If the cluster is associated with multiple subnetworks, the subnetwork for the node pool is picked based on the IP utilization during node pool creation and is immutable. |
networkTierConfig |
Output only. The network tier configuration for the node pool inherits from the cluster-level configuration and remains immutable throughout the node pool's lifecycle, including during upgrades. |
Union field
|
|
enablePrivateNodes |
Whether nodes have internal IP addresses only. If enable_private_nodes is not specified, then the value is derived from [Cluster.NetworkConfig.default_enable_private_nodes][] |
Union field
|
|
networkPerformanceConfig |
Network bandwidth tier configuration. |
NetworkPerformanceConfig
| JSON representation |
|---|
{ // Union field |
| Fields | |
|---|---|
Union field
|
|
totalEgressBandwidthTier |
Specifies the total network bandwidth tier for the NodePool. |
PodCIDROverprovisionConfig
| JSON representation |
|---|
{ "disable": boolean } |
| Fields | |
|---|---|
disable |
Whether Pod CIDR overprovisioning is disabled. Note: Pod CIDR overprovisioning is enabled by default. |
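To make the sizing arithmetic above concrete: with `max_pods_per_node` of 30, GKE rounds up to the next power of 2 (32) and, with overprovisioning enabled (the default), doubles it to 64 IPs per node, a /26. Disabling overprovisioning keeps the rounded value of 32 IPs, a /27. A NodeNetworkConfig fragment that disables the doubling would be:

```json
{
  "podCidrOverprovisionConfig": { "disable": true }
}
```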
AdditionalNodeNetworkConfig
| JSON representation |
|---|
{ "network": string, "subnetwork": string } |
| Fields | |
|---|---|
network |
Name of the VPC where the additional interface belongs |
subnetwork |
Name of the subnetwork where the additional interface belongs |
AdditionalPodNetworkConfig
| JSON representation |
|---|
{ "subnetwork": string, "secondaryPodRange": string, "networkAttachment": string, // Union field |
| Fields | |
|---|---|
subnetwork |
Name of the subnetwork where the additional pod network belongs. |
secondaryPodRange |
The name of the secondary range on the subnet which provides IP address for this pod range. |
networkAttachment |
The name of the network attachment for pods to communicate to; cannot be specified along with subnetwork or secondary_pod_range. |
Union field
|
|
maxPodsPerNode |
The maximum number of pods per node which use this pod network. |
MaxPodsConstraint
| JSON representation |
|---|
{ "maxPodsPerNode": string } |
| Fields | |
|---|---|
maxPodsPerNode |
Constraint enforced on the max num of pods per node. |
NetworkTierConfig
| JSON representation |
|---|
{
"networkTier": enum ( |
| Fields | |
|---|---|
networkTier |
Network tier configuration. |
NodePoolAutoscaling
| JSON representation |
|---|
{
"enabled": boolean,
"minNodeCount": integer,
"maxNodeCount": integer,
"autoprovisioned": boolean,
"locationPolicy": enum ( |
| Fields | |
|---|---|
enabled |
Is autoscaling enabled for this node pool. |
minNodeCount |
Minimum number of nodes for one location in the node pool. Must be greater than or equal to 0 and less than or equal to max_node_count. |
maxNodeCount |
Maximum number of nodes for one location in the node pool. Must be >= min_node_count. There has to be enough quota to scale up the cluster. |
autoprovisioned |
Can this node pool be deleted automatically. |
locationPolicy |
Location policy used when scaling up a nodepool. |
totalMinNodeCount |
Minimum number of nodes in the node pool. Must be greater than or equal to 0 and less than or equal to total_max_node_count. The total_*_node_count fields are mutually exclusive with the *_node_count fields. |
totalMaxNodeCount |
Maximum number of nodes in the node pool. Must be greater than or equal to total_min_node_count. There has to be enough quota to scale up the cluster. The total_*_node_count fields are mutually exclusive with the *_node_count fields. |
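A NodePoolAutoscaling sketch using per-location bounds might look like the following (values illustrative). Keep in mind that the `total_*_node_count` fields are mutually exclusive with the per-location `*_node_count` fields shown here, so a single object sets one family or the other, never both.

```json
{
  "enabled": true,
  "minNodeCount": 1,
  "maxNodeCount": 5,
  "locationPolicy": "BALANCED"
}
```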
NodeManagement
| JSON representation |
|---|
{
"autoUpgrade": boolean,
"autoRepair": boolean,
"upgradeOptions": {
object ( |
| Fields | |
|---|---|
autoUpgrade |
A flag that specifies whether node auto-upgrade is enabled for the node pool. If enabled, node auto-upgrade helps keep the nodes in your node pool up to date with the latest release version of Kubernetes. |
autoRepair |
A flag that specifies whether the node auto-repair is enabled for the node pool. If enabled, the nodes in this node pool will be monitored and, if they fail health checks too many times, an automatic repair action will be triggered. |
upgradeOptions |
Specifies the Auto Upgrade knobs for the node pool. |
AutoUpgradeOptions
| JSON representation |
|---|
{ "autoUpgradeStartTime": string, "description": string } |
| Fields | |
|---|---|
autoUpgradeStartTime |
Output only. This field is set when upgrades are about to commence with the approximate start time for the upgrades, in RFC3339 text format. |
description |
Output only. This field is set when upgrades are about to commence with the description of the upgrade. |
StatusCondition
| JSON representation |
|---|
{ "code": enum ( |
| Fields | |
|---|---|
code |
Machine-friendly representation of the condition. Deprecated: use canonicalCode instead. |
message |
Human-friendly representation of the condition |
canonicalCode |
Canonical code of the condition. |
UpgradeSettings
| JSON representation |
|---|
{ "maxSurge": integer, "maxUnavailable": integer, // Union field |
| Fields | |
|---|---|
maxSurge |
The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process. |
maxUnavailable |
The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready. |
Union field
|
|
strategy |
Update strategy of the node pool. |
Union field
|
|
blueGreenSettings |
Settings for blue-green upgrade strategy. |
BlueGreenSettings
| JSON representation |
|---|
{ // Union field |
| Fields | |
|---|---|
Union field rollout_policy. The rollout policy controls the general rollout progress of blue-green. rollout_policy can be only one of the following: |
|
standardRolloutPolicy |
Standard policy for the blue-green upgrade. |
autoscaledRolloutPolicy |
Autoscaled policy for cluster autoscaler enabled blue-green upgrade. |
Union field
|
|
nodePoolSoakDuration |
Time needed after draining the entire blue pool. After this period, the blue pool will be cleaned up. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
StandardRolloutPolicy
| JSON representation |
|---|
{ // Union field |
| Fields | |
|---|---|
Union field update_batch_size. Blue pool size to drain in a batch. update_batch_size can be only one of the following: |
|
batchPercentage |
Percentage of the blue pool nodes to drain in a batch. The range of this field should be (0.0, 1.0]. |
batchNodeCount |
Number of blue nodes to drain in a batch. |
Union field
|
|
batchSoakDuration |
Soak time after each batch gets drained. Defaults to zero. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
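Tying the blue-green settings together, an UpgradeSettings fragment that drains 25% of the blue pool per batch, soaks 10 minutes between batches, and keeps the drained blue pool for an hour before cleanup might be sketched as below. The durations are illustrative, and `batchPercentage` is mutually exclusive with `batchNodeCount`.

```json
{
  "strategy": "BLUE_GREEN",
  "blueGreenSettings": {
    "standardRolloutPolicy": {
      "batchPercentage": 0.25,
      "batchSoakDuration": "600s"
    },
    "nodePoolSoakDuration": "3600s"
  }
}
```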
AutoscaledRolloutPolicy
| JSON representation |
|---|
{ "waitForDrainDuration": string } |
| Fields | |
|---|---|
waitForDrainDuration |
Optional. Time to wait after cordoning the blue pool before draining the nodes. Defaults to 3 days. The value can be set between 0 and 7 days, inclusive. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
PlacementPolicy
| JSON representation |
|---|
{
"type": enum ( |
| Fields | |
|---|---|
type |
The type of placement. |
tpuTopology |
Optional. TPU placement topology for pod slice node pool. https://cloud.google.com/tpu/docs/types-topologies#tpu_topologies |
policyName |
If set, refers to the name of a custom resource policy supplied by the user. The resource policy must be in the same project and region as the node pool. If not found, InvalidArgument error is returned. |
UpdateInfo
| JSON representation |
|---|
{
"blueGreenInfo": {
object ( |
| Fields | |
|---|---|
blueGreenInfo |
Information of a blue-green upgrade. |
BlueGreenInfo
| JSON representation |
|---|
{
"phase": enum ( |
| Fields | |
|---|---|
phase |
Current blue-green upgrade phase. |
blueInstanceGroupUrls[] |
The resource URLs of the managed instance groups associated with blue pool. |
greenInstanceGroupUrls[] |
The resource URLs of the managed instance groups associated with green pool. |
bluePoolDeletionStartTime |
Time to start deleting blue pool to complete blue-green upgrade, in RFC3339 text format. |
greenPoolVersion |
Version of green pool. |
QueuedProvisioning
| JSON representation |
|---|
{ "enabled": boolean } |
| Fields | |
|---|---|
enabled |
Denotes that this node pool is QRM-specific, meaning nodes can only be obtained through queuing via the Cluster Autoscaler ProvisioningRequest API. |
BestEffortProvisioning
| JSON representation |
|---|
{ "enabled": boolean, "minProvisionNodes": integer } |
| Fields | |
|---|---|
enabled |
When this is enabled, cluster/node pool creations will ignore non-fatal errors like stockout, provisioning as many nodes as possible right away and eventually bringing up the full target number of nodes. |
minProvisionNodes |
Minimum number of nodes that must be provisioned for the operation to be considered successful; the remaining nodes will be provisioned gradually once the stockout issue has been resolved. |
AutopilotConfig
| JSON representation |
|---|
{ "enabled": boolean } |
| Fields | |
|---|---|
enabled |
Denotes that nodes belonging to this node pool are Autopilot nodes. |
NodeDrainConfig
| JSON representation |
|---|
{ // Union field |
| Fields | |
|---|---|
Union field
|
|
respectPdbDuringNodePoolDeletion |
Whether to respect PDB during node pool deletion. |
Tool Annotations
Destructive Hint: ❌ | Idempotent Hint: ✅ | Read Only Hint: ✅ | Open World Hint: ❌