AutoscalingPolicyService
The API interface for managing autoscaling policies in the Dataproc API.
| Methods | |
|---|---|
| CreateAutoscalingPolicy | Creates a new autoscaling policy. |
| DeleteAutoscalingPolicy | Deletes an autoscaling policy. It is an error to delete an autoscaling policy that is in use by one or more clusters. |
| GetAutoscalingPolicy | Retrieves an autoscaling policy. |
| ListAutoscalingPolicies | Lists autoscaling policies in the project. |
| UpdateAutoscalingPolicy | Updates (replaces) an autoscaling policy. The update_mask check is disabled because all updates are full replacements. |
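For illustration, here is a minimal sketch of calling this service with the google-cloud-dataproc Python client. The project, region, policy ID, and sizing values are placeholder assumptions, not recommendations.

```python
from google.cloud import dataproc_v1

# Region-scoped resources require the regional endpoint (assumed region).
region = "us-central1"
client = dataproc_v1.AutoscalingPolicyServiceClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

policy = dataproc_v1.AutoscalingPolicy(
    id="example-policy",  # hypothetical policy id
    worker_config=dataproc_v1.InstanceGroupAutoscalingPolicyConfig(
        min_instances=2, max_instances=20
    ),
    basic_algorithm=dataproc_v1.BasicAutoscalingAlgorithm(
        cooldown_period={"seconds": 120},  # bounds: [2m, 1d]
        yarn_config=dataproc_v1.BasicYarnAutoscalingConfig(
            graceful_decommission_timeout={"seconds": 3600},
            scale_up_factor=0.5,
            scale_down_factor=0.5,
        ),
    ),
)

created = client.create_autoscaling_policy(
    parent=f"projects/my-project/regions/{region}", policy=policy
)
print(created.name)
```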
BatchController
The BatchController provides methods to manage batch workloads.
| Methods | |
|---|---|
| CreateBatch | Creates a batch workload that executes asynchronously. |
| DeleteBatch | Deletes the batch workload resource. If the batch is not in a terminal state, the delete fails and the response returns FAILED_PRECONDITION. |
| GetBatch | Gets the batch workload resource representation. |
| ListBatches | Lists batch workloads. |
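As a sketch, creating a PySpark batch with the Python client might look like the following; the project, bucket, and IDs are placeholders. The create_batch call returns a long-running operation whose result is the finished Batch.

```python
from google.cloud import dataproc_v1

client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)

batch = dataproc_v1.Batch(
    pyspark_batch=dataproc_v1.PySparkBatch(
        main_python_file_uri="gs://my-bucket/my_job.py"  # hypothetical URI
    )
)

operation = client.create_batch(
    parent="projects/my-project/locations/us-central1",
    batch=batch,
    batch_id="my-batch",  # 4-63 chars, /[a-z][0-9]-/
)
result = operation.result()  # blocks until the batch reaches a terminal state
print(result.state)
```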
ClusterController
The ClusterControllerService provides methods to manage clusters of Compute Engine instances.
| Methods | |
|---|---|
| CreateCluster | Creates a cluster in a project. The returned Operation.metadata is ClusterOperationMetadata. |
| DeleteCluster | Deletes a cluster in a project. The returned Operation.metadata is ClusterOperationMetadata. |
| DiagnoseCluster | Gets cluster diagnostic information. The returned Operation.metadata is ClusterOperationMetadata. After the operation completes, Operation.response contains DiagnoseClusterResults. |
| GetCluster | Gets the resource representation for a cluster in a project. |
| ListClusters | Lists all regions/{region}/clusters in a project alphabetically. |
| StartCluster | Starts a cluster in a project. |
| StopCluster | Stops a cluster in a project. |
| UpdateCluster | Updates a cluster in a project. The returned Operation.metadata is ClusterOperationMetadata. The cluster must be in a RUNNING state or an error is returned. |
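The following is a minimal sketch of creating a cluster with the Python client; the project, region, cluster name, and machine types are placeholder assumptions.

```python
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = dataproc_v1.Cluster(
    cluster_name="example-cluster",  # hypothetical name
    config=dataproc_v1.ClusterConfig(
        master_config=dataproc_v1.InstanceGroupConfig(
            num_instances=1, machine_type_uri="n2-standard-4"
        ),
        worker_config=dataproc_v1.InstanceGroupConfig(
            num_instances=2, machine_type_uri="n2-standard-4"
        ),
    ),
)

operation = client.create_cluster(
    request={"project_id": "my-project", "region": region, "cluster": cluster}
)
# The Operation.metadata is ClusterOperationMetadata; result() returns the Cluster.
print(operation.result().cluster_name)
```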
JobController
The JobController provides methods to manage jobs.
| Methods | |
|---|---|
| CancelJob | Starts a job cancellation request. To access the job resource after cancellation, call regions/{region}/jobs.list or regions/{region}/jobs.get. |
| DeleteJob | Deletes the job from the project. If the job is active, the delete fails, and the response returns FAILED_PRECONDITION. |
| GetJob | Gets the resource representation for a job in a project. |
| ListJobs | Lists regions/{region}/jobs in a project. |
| SubmitJob | Submits a job to a cluster. |
| SubmitJobAsOperation | Submits a job to a cluster and returns a long-running operation. |
| UpdateJob | Updates a job in a project. |
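As a sketch, submitting a Spark job with the Python client might look like this; the project, region, and cluster name are placeholders (the example jar path is the one shipped on Dataproc images).

```python
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = dataproc_v1.Job(
    placement=dataproc_v1.JobPlacement(cluster_name="example-cluster"),
    spark_job=dataproc_v1.SparkJob(
        main_class="org.apache.spark.examples.SparkPi",
        jar_file_uris=["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        args=["1000"],
    ),
)

# SubmitJobAsOperation returns an Operation that completes when the job finishes.
operation = client.submit_job_as_operation(
    request={"project_id": "my-project", "region": region, "job": job}
)
print(operation.result().status.state)
```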
NodeGroupController
The NodeGroupControllerService provides methods to manage node groups of Compute Engine managed instances.
| Methods | |
|---|---|
| GetNodeGroup | Gets the resource representation for a node group in a cluster. |
| ResizeNodeGroup | Resizes a node group in a cluster. The returned Operation.metadata is NodeGroupOperationMetadata. |
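A minimal sketch of resizing a node group with the Python client; the resource name and target size are hypothetical.

```python
from google.cloud import dataproc_v1

client = dataproc_v1.NodeGroupControllerClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)

# Resize a node group to 3 nodes (resource name is a placeholder).
operation = client.resize_node_group(
    request={
        "name": "projects/my-project/regions/us-central1/clusters/example-cluster/nodeGroups/example-group",
        "size": 3,
    }
)
node_group = operation.result()  # Operation.metadata is NodeGroupOperationMetadata
print(node_group.name)
```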
SessionController
The SessionController provides methods to manage interactive sessions.
| Methods | |
|---|---|
| CreateSession | Creates an interactive session asynchronously. |
| DeleteSession | Deletes the interactive session resource. If the session is not in a terminal state, it is terminated, and then deleted. |
| GetSession | Gets the resource representation for an interactive session. |
| ListSessions | Lists interactive sessions. |
| TerminateSession | Terminates the interactive session. |
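As a sketch, creating a Jupyter-backed interactive session with the Python client might look like this; the project, region, and session ID are placeholder assumptions.

```python
from google.cloud import dataproc_v1

client = dataproc_v1.SessionControllerClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)

session = dataproc_v1.Session(
    jupyter_session=dataproc_v1.JupyterConfig(
        kernel=dataproc_v1.JupyterConfig.Kernel.PYTHON
    )
)

operation = client.create_session(
    parent="projects/my-project/locations/us-central1",
    session=session,
    session_id="my-session",  # 4-63 chars, /[a-z][0-9]-/
)
print(operation.result().state)
```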
SessionTemplateController
The SessionTemplateController provides methods to manage session templates.
| Methods | |
|---|---|
| CreateSessionTemplate | Creates a session template synchronously. |
| DeleteSessionTemplate | Deletes a session template. |
| GetSessionTemplate | Gets the resource representation for a session template. |
| ListSessionTemplates | Lists session templates. |
| UpdateSessionTemplate | Updates the session template synchronously. |
WorkflowTemplateService
The API interface for managing Workflow Templates in the Dataproc API.
| Methods | |
|---|---|
| CreateWorkflowTemplate | Creates a new workflow template. |
| DeleteWorkflowTemplate | Deletes a workflow template. It does not cancel in-progress workflows. |
| GetWorkflowTemplate | Retrieves the latest workflow template. Can retrieve previously instantiated templates by specifying an optional version parameter. |
| InstantiateInlineWorkflowTemplate | Instantiates a template and begins execution. This method is equivalent to executing the sequence CreateWorkflowTemplate, InstantiateWorkflowTemplate, DeleteWorkflowTemplate. The returned Operation can be used to track execution of the workflow by polling operations.get. The running workflow can be aborted via operations.cancel, which causes any inflight jobs to be cancelled and workflow-owned clusters to be deleted. The Operation.metadata is WorkflowMetadata. On successful completion, Operation.response is Empty. |
| InstantiateWorkflowTemplate | Instantiates a template and begins execution. The returned Operation can be used to track execution of the workflow by polling operations.get. The running workflow can be aborted via operations.cancel, which causes any inflight jobs to be cancelled and workflow-owned clusters to be deleted. The Operation.metadata is WorkflowMetadata. On successful completion, Operation.response is Empty. |
| ListWorkflowTemplates | Lists workflows that match the specified filter in the request. |
| UpdateWorkflowTemplate | Updates (replaces) a workflow template. The updated template must contain a version that matches the current server version. |
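A minimal sketch of instantiating an inline workflow template with the Python client; the project, region, template contents, and job arguments are placeholder assumptions.

```python
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.WorkflowTemplateServiceClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

template = dataproc_v1.WorkflowTemplate(
    id="example-template",  # hypothetical template id
    placement=dataproc_v1.WorkflowTemplatePlacement(
        managed_cluster=dataproc_v1.ManagedCluster(
            cluster_name="ephemeral-cluster",
            config=dataproc_v1.ClusterConfig(),  # a real template would configure the cluster
        )
    ),
    jobs=[
        dataproc_v1.OrderedJob(
            step_id="teragen",
            hadoop_job=dataproc_v1.HadoopJob(
                main_jar_file_uri="file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
                args=["teragen", "1000", "hdfs:///gen/"],
            ),
        )
    ],
)

# The template is instantiated without being stored; the Operation tracks the workflow.
operation = client.instantiate_inline_workflow_template(
    parent=f"projects/my-project/regions/{region}", template=template
)
operation.result()  # Empty on success; Operation.metadata is WorkflowMetadata
```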
AcceleratorConfig
Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine.
| Fields | |
|---|---|
accelerator_type_uri |
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.
Auto Zone Exception: If you are using Auto Zone Placement, you must use the short name of the accelerator type resource. |
accelerator_count |
The number of the accelerator cards of this type exposed to this instance. |
AttachedDiskConfig
Specifies the config of attached disk options for single VM instance.
| Fields | |
|---|---|
disk_type |
Optional. Disk type. |
disk_size_gb |
Optional. Disk size in GB. |
provisioned_iops |
Optional. Indicates how many IOPS to provision for the attached disk. This sets the number of I/O operations per second that the disk can handle. See https://cloud.google.com/compute/docs/disks/hyperdisks#hyperdisk-features |
provisioned_throughput |
Optional. Indicates how much throughput to provision for the attached disk. This sets the throughput in MB per second that the disk can handle. See https://cloud.google.com/compute/docs/disks/hyperdisks#hyperdisk-features |
DiskType
Attached disk type. Currently only Hyperdisks are supported. See https://cloud.google.com/compute/docs/disks/hyperdisks. Hyperdisk Balanced High Availability is not supported because it applies to cross-zone usage, which is not supported by the service.
| Enums | |
|---|---|
DISK_TYPE_UNSPECIFIED |
Required unspecified disk type. |
HYPERDISK_BALANCED |
Hyperdisk Balanced disk type. |
HYPERDISK_EXTREME |
Hyperdisk Extreme disk type. |
HYPERDISK_ML |
Hyperdisk ML disk type. |
HYPERDISK_THROUGHPUT |
Hyperdisk Throughput disk type. |
AuthenticationConfig
Authentication configuration for a workload is used to set the default identity for the workload execution. The config specifies the type of identity (service account or user) that will be used by workloads to access resources on the project(s).
| Fields | |
|---|---|
user_workload_authentication_type |
Optional. Authentication type for the user workload running in containers. |
AuthenticationType
Authentication types for workload execution.
| Enums | |
|---|---|
AUTHENTICATION_TYPE_UNSPECIFIED |
If AuthenticationType is unspecified then END_USER_CREDENTIALS is used for 3.0 and newer runtimes, and SERVICE_ACCOUNT is used for older runtimes. |
SERVICE_ACCOUNT |
Use service account credentials for authenticating to other services. |
END_USER_CREDENTIALS |
Use OAuth credentials associated with the workload creator/user for authenticating to other services. |
AutoscalingConfig
Autoscaling Policy config associated with the cluster.
| Fields | |
|---|---|
policy_uri |
Optional. The autoscaling policy used by the cluster. Only resource names that include the project ID and location (region) are valid.
Note that the policy must be in the same project and region. |
AutoscalingPolicy
Describes an autoscaling policy for Dataproc cluster autoscaler.
| Fields | |
|---|---|
id |
Required. The policy id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters. |
name |
Output only. The "resource name" of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.
|
worker_config |
Required. Describes how the autoscaler will operate for primary workers. |
secondary_worker_config |
Optional. Describes how the autoscaler will operate for secondary workers. |
cluster_type |
Optional. The type of the clusters for which this autoscaling policy is to be configured. |
Union field algorithm. Autoscaling algorithm for policy. algorithm can be only one of the following: |
|
basic_algorithm |
|
ClusterType
The type of the clusters for which this autoscaling policy is to be configured.
| Enums | |
|---|---|
CLUSTER_TYPE_UNSPECIFIED |
Not set. |
STANDARD |
Standard Dataproc cluster with a minimum of two primary workers. |
ZERO_SCALE |
Clusters that can use only secondary workers and be scaled down to zero secondary worker nodes. |
AutotuningConfig
Autotuning configuration of the workload.
| Fields | |
|---|---|
scenarios[] |
Optional. Scenarios for which tunings are applied. |
Scenario
Scenario represents a specific goal that autotuning will attempt to achieve by modifying workloads.
| Enums | |
|---|---|
SCENARIO_UNSPECIFIED |
Default value. |
SCALING |
Scaling recommendations such as initialExecutors. |
BROADCAST_HASH_JOIN |
Adding hints for potential relation broadcasts. |
MEMORY |
Memory management for workloads. |
NONE |
No autotuning. |
AUTO |
Automatic selection of scenarios. |
AuxiliaryNodeGroup
Node group identification and configuration information.
| Fields | |
|---|---|
node_group |
Required. Node group configuration. |
node_group_id |
Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of 3 to 33 characters. |
AuxiliaryServicesConfig
Auxiliary services configuration for a Cluster.
| Fields | |
|---|---|
metastore_config |
Optional. The Hive Metastore configuration for this workload. |
spark_history_server_config |
Optional. The Spark History Server configuration for the workload. |
BasicAutoscalingAlgorithm
Basic algorithm for autoscaling.
| Fields | |
|---|---|
cooldown_period |
Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: [2m, 1d]. Default: 2m. |
Union field config. Autoscaling algorithm config. config can be only one of the following: |
|
yarn_config |
Optional. YARN autoscaling configuration. |
BasicYarnAutoscalingConfig
Basic autoscaling configurations for YARN.
| Fields | |
|---|---|
graceful_decommission_timeout |
Required. Timeout for YARN graceful decommissioning of Node Managers. Specifies the duration to wait for jobs to complete before forcefully removing workers (and potentially interrupting jobs). Only applicable to downscaling operations. Bounds: [0s, 1d]. |
scale_up_factor |
Required. Fraction of average YARN pending memory in the last cooldown period for which to add workers. A scale-up factor of 1.0 will result in scaling up so that there is no pending memory remaining after the update (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). See How autoscaling works for more information. Bounds: [0.0, 1.0]. |
scale_down_factor |
Required. Fraction of average YARN pending memory in the last cooldown period for which to remove workers. A scale-down factor of 1 will result in scaling down so that there is no available memory remaining after the update (more aggressive scaling). A scale-down factor of 0 disables removing workers, which can be beneficial for autoscaling a single job. See How autoscaling works for more information. Bounds: [0.0, 1.0]. |
scale_up_min_worker_fraction |
Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change. Bounds: [0.0, 1.0]. Default: 0.0. |
scale_down_min_worker_fraction |
Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: [0.0, 1.0]. Default: 0.0. |
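To make the factor semantics concrete, the following is a simplified sketch of the scale-up arithmetic described above. The numbers are made up and the real autoscaler considers more inputs; this only illustrates how scale_up_factor and scale_up_min_worker_fraction interact.

```python
# Simplified illustration of the scale-up recommendation described above.
avg_pending_memory_mb = 8192   # average YARN pending memory over the cooldown period
memory_per_worker_mb = 4096    # YARN memory contributed by one worker (assumed)
scale_up_factor = 0.5

# Workers needed to absorb the chosen fraction of pending memory.
delta_workers = (scale_up_factor * avg_pending_memory_mb) / memory_per_worker_mb
print(delta_workers)  # 1.0 -> recommend adding 1 worker

# scale_up_min_worker_fraction gates small recommendations.
cluster_size = 20
min_fraction = 0.1
should_scale = delta_workers >= min_fraction * cluster_size
print(should_scale)  # False: 1 < 2, so no scaling event occurs
```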
Batch
A representation of a batch workload in the service.
| Fields | |
|---|---|
name |
Output only. The resource name of the batch. |
uuid |
Output only. A batch UUID (Unique Universal Identifier). The service generates this value when it creates the batch. |
create_time |
Output only. The time when the batch was created. |
runtime_info |
Output only. Runtime information about batch execution. |
state |
Output only. The state of the batch. |
state_message |
Output only. Batch state details, such as a failure description if the state is FAILED. |
state_time |
Output only. The time when the batch entered a current state. |
creator |
Output only. The email address of the user who created the batch. |
labels |
Optional. The labels to associate with this batch. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a batch. |
runtime_config |
Optional. Runtime configuration for the batch execution. |
environment_config |
Optional. Environment configuration for the batch execution. |
operation |
Output only. The resource name of the operation associated with this batch. |
state_history[] |
Output only. Historical state information for the batch. |
Union field batch_config. The application/framework-specific portion of the batch configuration. batch_config can be only one of the following: |
|
pyspark_batch |
Optional. PySpark batch config. |
spark_batch |
Optional. Spark batch config. |
spark_r_batch |
Optional. SparkR batch config. |
spark_sql_batch |
Optional. SparkSql batch config. |
pyspark_notebook_batch |
Optional. PySpark notebook batch config. |
State
The batch state.
| Enums | |
|---|---|
STATE_UNSPECIFIED |
The batch state is unknown. |
PENDING |
The batch is created before running. |
RUNNING |
The batch is running. |
CANCELLING |
The batch is cancelling. |
CANCELLED |
The batch cancellation was successful. |
SUCCEEDED |
The batch completed successfully. |
FAILED |
The batch is no longer running due to an error. |
StateHistory
Historical state information.
| Fields | |
|---|---|
state |
Output only. The state of the batch at this point in history. |
state_message |
Output only. Details about the state at this point in history. |
state_start_time |
Output only. The time when the batch entered the historical state. |
BatchOperationMetadata
Metadata describing the Batch operation.
| Fields | |
|---|---|
batch |
Name of the batch for the operation. |
batch_uuid |
Batch UUID for the operation. |
create_time |
The time when the operation was created. |
done_time |
The time when the operation finished. |
operation_type |
The operation type. |
description |
Short description of the operation. |
labels |
Labels associated with the operation. |
warnings[] |
Warnings encountered during operation execution. |
BatchOperationType
Operation type for Batch resources
| Enums | |
|---|---|
BATCH_OPERATION_TYPE_UNSPECIFIED |
Batch operation type is unknown. |
BATCH |
Batch operation type. |
CancelJobRequest
A request to cancel a job.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the job belongs to. |
region |
Required. The Dataproc region in which to handle the request. |
job_id |
Required. The job ID. Authorization requires the following IAM permission on the specified resource job_id: dataproc.jobs.cancel
|
Cluster
Describes the identifying information, config, and status of a Dataproc cluster.
| Fields | |
|---|---|
project_id |
Required. The Google Cloud Platform project ID that the cluster belongs to. |
cluster_name |
Required. The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused. |
config |
Optional. The cluster config for a cluster of Compute Engine Instances. Note that the service may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified. |
virtual_cluster_config |
Optional. The virtual cluster config is used when creating a cluster that does not directly control the underlying compute resources, for example, when creating a GKE cluster. The service may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified. |
labels |
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a cluster. |
status |
Output only. Cluster status. |
status_history[] |
Output only. The previous cluster status. |
cluster_uuid |
Output only. A cluster UUID (Unique Universal Identifier). The service generates this value when it creates the cluster. |
metrics |
Output only. Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release. |
ClusterConfig
The cluster config.
| Fields | |
|---|---|
cluster_type |
Optional. The type of the cluster. |
engine |
Optional. The cluster engine. |
config_bucket |
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, the service will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket. |
temp_bucket |
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, the service will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket. |
gce_cluster_config |
Optional. The shared Compute Engine config settings for all instances in a cluster. |
master_config |
Optional. The Compute Engine config settings for the cluster's master instance. |
worker_config |
Optional. The Compute Engine config settings for the cluster's worker instances. |
secondary_worker_config |
Optional. The Compute Engine config settings for a cluster's secondary worker instances |
software_config |
Optional. The config settings for cluster software. |
initialization_actions[] |
Optional. Commands to execute on each node after config is completed. By default, executables are run on the master and all worker nodes. You can test a node's role metadata to run an executable only on the master or worker nodes. |
encryption_config |
Optional. Encryption settings for the cluster. |
autoscaling_config |
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset. |
security_config |
Optional. Security settings for the cluster. |
lifecycle_config |
Optional. Lifecycle setting for the cluster. |
endpoint_config |
Optional. Port/endpoint configuration for this cluster |
metastore_config |
Optional. Metastore configuration. |
dataproc_metric_config |
Optional. The config for metrics. |
auxiliary_node_groups[] |
Optional. The node group settings. |
ClusterType
The type of the cluster.
| Enums | |
|---|---|
CLUSTER_TYPE_UNSPECIFIED |
Not set. |
STANDARD |
Standard Dataproc cluster with a minimum of two primary workers. |
SINGLE_NODE |
https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/single-node-clusters |
ZERO_SCALE |
Clusters that can use only secondary workers and be scaled down to zero secondary worker nodes. |
Engine
The cluster engine.
| Enums | |
|---|---|
ENGINE_UNSPECIFIED |
The engine is not specified. Behaves the same as DEFAULT. |
DEFAULT |
The cluster is a default engine cluster. |
LIGHTNING |
The cluster is a Lightning Engine cluster. |
ClusterMetrics
Contains cluster daemon metrics, such as HDFS and YARN stats.
Beta Feature: This report is available for testing purposes only. It may be changed before final release.
| Fields | |
|---|---|
hdfs_metrics |
The HDFS metrics. |
yarn_metrics |
YARN metrics. |
ClusterOperation
The cluster operation triggered by a workflow.
| Fields | |
|---|---|
operation_id |
Output only. The id of the cluster operation. |
error |
Output only. Error, if operation failed. |
done |
Output only. Indicates the operation is done. |
ClusterOperationMetadata
Metadata describing the operation.
| Fields | |
|---|---|
cluster_name |
Output only. Name of the cluster for the operation. |
cluster_uuid |
Output only. Cluster UUID for the operation. |
status |
Output only. Current operation status. |
status_history[] |
Output only. The previous operation status. |
operation_type |
Output only. The operation type. |
description |
Output only. Short description of operation. |
labels |
Output only. Labels associated with the operation |
warnings[] |
Output only. Errors encountered during operation execution. |
child_operation_ids[] |
Output only. Child operation ids |
ClusterOperationStatus
The status of the operation.
| Fields | |
|---|---|
state |
Output only. A message containing the operation state. |
inner_state |
Output only. A message containing the detailed operation state. |
details |
Output only. A message containing any operation metadata details. |
state_start_time |
Output only. The time this state was entered. |
State
The operation state.
| Enums | |
|---|---|
UNKNOWN |
Unused. |
PENDING |
The operation has been created. |
RUNNING |
The operation is running. |
DONE |
The operation is done; either cancelled or completed. |
ClusterSelector
A selector that chooses target cluster for jobs based on metadata.
| Fields | |
|---|---|
zone |
Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used. |
cluster_labels |
Required. The cluster labels. Cluster must have all labels to match. |
ClusterStatus
The status of a cluster and its instances.
| Fields | |
|---|---|
state |
Output only. The cluster's state. |
detail |
Optional. Output only. Details of cluster's state. |
state_start_time |
Output only. Time when this state was entered (see JSON representation of Timestamp). |
substate |
Output only. Additional state information that includes status reported by the agent. |
State
The cluster state.
| Enums | |
|---|---|
UNKNOWN |
The cluster state is unknown. |
CREATING |
The cluster is being created and set up. It is not ready for use. |
RUNNING |
The cluster is currently running and healthy. It is ready for use. Note: The cluster state changes from "creating" to "running" status after the master node(s), first two primary worker nodes (and the last primary worker node if primary workers > 2) are running. |
ERROR |
The cluster encountered an error. It is not ready for use. |
ERROR_DUE_TO_UPDATE |
The cluster has encountered an error while being updated. Jobs can be submitted to the cluster, but the cluster cannot be updated. |
DELETING |
The cluster is being deleted. It cannot be used. |
UPDATING |
The cluster is being updated. It continues to accept and process jobs. |
STOPPING |
The cluster is being stopped. It cannot be used. |
STOPPED |
The cluster is currently stopped. It is not ready for use. |
STARTING |
The cluster is being started. It is not ready for use. |
SCHEDULED |
Cluster creation is currently waiting for resources to be available. Once all resources are available, it will transition to CREATING and then RUNNING. |
Substate
The cluster substate.
| Enums | |
|---|---|
UNSPECIFIED |
The cluster substate is unknown. |
UNHEALTHY |
The cluster is known to be in an unhealthy state (for example, critical daemons are not running or HDFS capacity is exhausted). Applies to RUNNING state. |
STALE_STATUS |
The agent-reported status is out of date (may occur if the service loses communication with the Agent). Applies to RUNNING state. |
CohortInfo
Information about the cohort that the workload belongs to.
| Fields | |
|---|---|
cohort |
Output only. Final cohort that was used to tune the workload. |
cohort_source |
Output only. Source of the cohort. |
CohortSource
Source of the cohort.
| Enums | |
|---|---|
COHORT_SOURCE_UNSPECIFIED |
Cohort source is unspecified. |
USER_PROVIDED |
Indicates that the cohort was explicitly provided. |
AIRFLOW |
Composed from the labels coming from Airflow/Composer. |
Component
Cluster components that can be activated.
| Enums | |
|---|---|
COMPONENT_UNSPECIFIED |
Unspecified component. Specifying this will cause Cluster creation to fail. |
ANACONDA |
The Anaconda component is no longer supported or applicable to supported Dataproc on Compute Engine image versions. It cannot be activated on clusters created with supported Dataproc on Compute Engine image versions. |
DELTA |
Delta Lake. |
DOCKER |
Docker |
DRUID |
The Druid query engine. (alpha) |
FLINK |
Flink |
HBASE |
HBase. (beta) |
HIVE_WEBHCAT |
The Hive Web HCatalog (the REST service for accessing HCatalog). |
HUDI |
Hudi. |
ICEBERG |
Iceberg. |
JUPYTER |
The Jupyter Notebook. |
PRESTO |
The Presto query engine. |
TRINO |
The Trino query engine. |
RANGER |
The Ranger service. |
SOLR |
The Solr service. |
ZEPPELIN |
The Zeppelin notebook. |
ZOOKEEPER |
The Zookeeper service. |
JUPYTER_KERNEL_GATEWAY |
The Jupyter Kernel Gateway. |
ConfidentialInstanceConfig
Confidential Instance Config for clusters using Confidential VMs
| Fields | |
|---|---|
enable_confidential_compute |
Optional. Deprecated: Use 'confidential_instance_type' instead. Defines whether the instance should have confidential compute enabled. |
CreateAutoscalingPolicyRequest
A request to create an autoscaling policy.
| Fields | |
|---|---|
parent |
Required. The "resource name" of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
Authorization requires the following IAM permission on the specified resource
|
policy |
Required. The autoscaling policy to create. |
CreateBatchRequest
A request to create a batch workload.
| Fields | |
|---|---|
parent |
Required. The parent resource where this batch will be created. Authorization requires the following IAM permission on the specified resource parent: dataproc.batches.create
|
batch |
Required. The batch to create. |
batch_id |
Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/. |
request_id |
Optional. A unique ID used to identify the request. If the service receives two CreateBatchRequests with the same request_id, the second request is ignored, and the Operation that corresponds to the first Batch created and stored in the backend is returned. Recommendation: Set this value to a UUID. The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
CreateClusterRequest
A request to create a cluster.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the cluster belongs to. Authorization requires the following IAM permission on the specified resource project_id: dataproc.clusters.create
|
region |
Required. The region in which to handle the request. |
cluster |
Required. The cluster to create. |
request_id |
Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests with the same ID, the second request will be ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
action_on_failed_primary_workers |
Optional. Failure action when primary worker creation fails. |
CreateSessionRequest
A request to create a session.
| Fields | |
|---|---|
parent |
Required. The parent resource where this session will be created. Authorization requires the following IAM permission on the specified resource parent: dataproc.sessions.create
|
session |
Required. The interactive session to create. |
session_id |
Required. The ID to use for the session, which becomes the final component of the session's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/. |
request_id |
Optional. A unique ID used to identify the request. If the service receives two CreateSessionRequests with the same ID, the second request is ignored, and the first Session created and stored in the backend is returned. Recommendation: Set this value to a UUID. The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
CreateSessionTemplateRequest
A request to create a session template.
| Fields | |
|---|---|
parent |
Required. The parent resource where this session template will be created. Authorization requires the following IAM permission on the specified resource parent: dataproc.sessionTemplates.create
|
session_template |
Required. The session template to create. |
CreateWorkflowTemplateRequest
A request to create a workflow template.
| Fields | |
|---|---|
parent |
Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
- For projects.regions.workflowTemplates.create, the resource name of the region has the following format: projects/{project_id}/regions/{region}
- For projects.locations.workflowTemplates.create, the resource name of the location has the following format: projects/{project_id}/locations/{location}
Authorization requires the following IAM permission on the specified resource parent: dataproc.workflowTemplates.create
|
template |
Required. The Dataproc workflow template to create. |
DataprocMetricConfig
Metric config.
| Fields | |
|---|---|
metrics[] |
Required. Metrics sources to enable. |
Metric
A custom metric.
| Fields | |
|---|---|
metric_source |
Required. A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics). |
metric_overrides[] |
Optional. Specify one or more custom metrics to collect for the metric source (for the SPARK metric source, any Spark metric can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs. Note: Only the specified overridden metrics are collected for the metric source; other default metrics for that source are not collected.
|
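As a sketch, enabling a metric source with an override inside a ClusterConfig might look like this with the Python client; the override string follows the METRIC_SOURCE:INSTANCE:GROUP:METRIC format described above.

```python
from google.cloud import dataproc_v1

# Collect only one overridden Spark metric instead of the default Spark set.
metric_config = dataproc_v1.DataprocMetricConfig(
    metrics=[
        dataproc_v1.DataprocMetricConfig.Metric(
            metric_source=dataproc_v1.DataprocMetricConfig.MetricSource.SPARK,
            metric_overrides=["spark:driver:DAGScheduler:job.allJobs"],
        )
    ]
)
cluster_config = dataproc_v1.ClusterConfig(dataproc_metric_config=metric_config)
```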
MetricSource
A source for the collection of custom metrics (see Custom metrics).
| Enums | |
|---|---|
METRIC_SOURCE_UNSPECIFIED |
Required unspecified metric source. |
MONITORING_AGENT_DEFAULTS |
Monitoring agent metrics. If this source is enabled, the service enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix. |
HDFS |
HDFS metric source. |
SPARK |
Spark metric source. |
YARN |
YARN metric source. |
SPARK_HISTORY_SERVER |
Spark History Server metric source. |
HIVESERVER2 |
Hiveserver2 metric source. |
HIVEMETASTORE |
Hive Metastore metric source. |
FLINK |
Flink metric source. |
DeleteAutoscalingPolicyRequest
A request to delete an autoscaling policy.
Autoscaling policies in use by one or more clusters will not be deleted.
| Fields | |
|---|---|
name |
Required. The "resource name" of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.
Authorization requires the following IAM permission on the specified resource
|
DeleteBatchRequest
A request to delete a batch workload.
| Fields | |
|---|---|
name |
Required. The fully qualified name of the batch to delete, in the format "projects/PROJECT_ID/locations/DATAPROC_REGION/batches/BATCH_ID". Authorization requires the following IAM permission on the specified resource name: dataproc.batches.delete
|
DeleteClusterRequest
A request to delete a cluster.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the cluster belongs to. |
region |
Required. The region in which to handle the request. |
cluster_name |
Required. The cluster name. Authorization requires the following IAM permission on the specified resource cluster_name: dataproc.clusters.delete
|
cluster_uuid |
Optional. Specifying the cluster_uuid means the RPC will fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist. |
request_id |
Optional. A unique ID used to identify the request. If the server receives two DeleteClusterRequests with the same ID, the second request will be ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
graceful_termination_timeout |
Optional. The graceful termination timeout for the deletion of the cluster. Indicates how long the request will wait for running jobs on the cluster to complete before the cluster is forcefully deleted. The default value of 0 indicates that graceful termination is not enabled. When graceful termination is enabled, the value can be between 60 seconds and 6 hours. (There is no separate flag that enables or disables graceful termination; it is determined by the value of this field.) |
DeleteJobRequest
A request to delete a job.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the job belongs to. |
region |
Required. The Dataproc region in which to handle the request. |
job_id |
Required. The job ID. Authorization requires the following IAM permission on the specified resource job_id: dataproc.jobs.delete
|
DeleteSessionRequest
A request to delete a session.
| Fields | |
|---|---|
name |
Required. The name of the session resource to delete. Authorization requires the following IAM permission on the specified resource name: dataproc.sessions.delete
|
request_id |
Optional. A unique ID used to identify the request. If the service receives two DeleteSessionRequests with the same ID, the second request is ignored. Recommendation: Set this value to a UUID. The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
DeleteSessionTemplateRequest
A request to delete a session template.
| Fields | |
|---|---|
name |
Required. The name of the session template resource to delete. Authorization requires the following IAM permission on the specified resource name: dataproc.sessionTemplates.delete
|
DeleteWorkflowTemplateRequest
A request to delete a workflow template.
Currently started workflows will remain running.
| Fields | |
|---|---|
name |
Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
- For projects.regions.workflowTemplates.delete, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
- For projects.locations.workflowTemplates.delete, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
Authorization requires the following IAM permission on the specified resource name: dataproc.workflowTemplates.delete
|
version |
Optional. The version of workflow template to delete. If specified, will only delete the template if the current server version matches specified version. |
DiagnoseClusterRequest
A request to collect cluster diagnostic information.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the cluster belongs to. |
region |
Required. The region in which to handle the request. |
cluster_name |
Required. The cluster name. Authorization requires the following IAM permission on the specified resource cluster_name: dataproc.clusters.diagnose
|
tarball_gcs_dir |
Optional. The output Cloud Storage directory for the diagnostic tarball. If not specified, a task-specific directory in the cluster's staging bucket will be used. Authorization requires the following IAM permission on the specified resource
|
tarball_access |
Optional. The access type to the diagnostic tarball. If not specified, falls back to the default access of the bucket. Authorization requires the following IAM permission on the specified resource
|
diagnosis_interval |
Optional. Time interval in which diagnosis should be carried out on the cluster. |
jobs[] |
Optional. Specifies a list of jobs on which diagnosis is to be performed. Format: projects/{project}/regions/{region}/jobs/{job} |
yarn_application_ids[] |
Optional. Specifies a list of YARN applications on which diagnosis is to be performed. |
TarballAccess
Defines who has access to the diagnostic tarball
| Enums | |
|---|---|
TARBALL_ACCESS_UNSPECIFIED |
Tarball Access unspecified. Falls back to default access of the bucket |
GOOGLE_CLOUD_SUPPORT |
Google Cloud Support group has read access to the diagnostic tarball |
GOOGLE_DATAPROC_DIAGNOSE |
The diagnose service account has read access to the diagnostic tarball |
DiagnoseClusterResults
The location of diagnostic output.
| Fields | |
|---|---|
output_uri |
Output only. The Cloud Storage URI of the diagnostic output. The output report is a plain text file with a summary of collected diagnostics. |
DiskConfig
Specifies the config of boot disk and attached disk options for a group of VM instances.
| Fields | |
|---|---|
boot_disk_type |
Optional. Type of the boot disk (default is pd-standard). Valid values include pd-balanced, pd-ssd, and pd-standard. |
boot_disk_size_gb |
Optional. Size in GB of the boot disk (default is 500GB). |
num_local_ssds |
Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected. |
local_ssd_interface |
Optional. Interface type of local SSDs (default is scsi). Valid values: scsi (Small Computer System Interface) and nvme (Non-Volatile Memory Express). |
attached_disk_configs[] |
Optional. A list of attached disk configs for a group of VM instances. |
boot_disk_provisioned_iops |
Optional. Indicates how many IOPS to provision for the disk. This sets the number of I/O operations per second that the disk can handle. This field is supported only if boot_disk_type is hyperdisk-balanced. |
boot_disk_provisioned_throughput |
Optional. Indicates how much throughput to provision for the disk. This sets the throughput in MB per second that the disk can handle. Values must be greater than or equal to 1. This field is supported only if boot_disk_type is hyperdisk-balanced. |
DriverSchedulingConfig
Driver scheduling configuration.
| Fields | |
|---|---|
memory_mb |
Required. The amount of memory in MB the driver is requesting. |
vcores |
Required. The number of vCPUs the driver is requesting. |
EncryptionConfig
Encryption settings for the cluster.
| Fields | |
|---|---|
gce_pd_kms_key_name |
Optional. The Cloud KMS key resource name to use for persistent disk encryption for all instances in the cluster. See Use CMEK with cluster data for more information. |
kms_key |
Optional. The Cloud KMS key resource name to use for cluster persistent disk and job argument encryption. See Use CMEK with cluster data for more information. When this key resource name is provided, the following job arguments of the following job types submitted to the cluster are encrypted using CMEK:
- FlinkJob args
- HadoopJob args
- SparkJob args
- SparkRJob args
- PySparkJob args
- SparkSqlJob scriptVariables and queryList.queries
- HiveJob scriptVariables and queryList.queries
- PigJob scriptVariables and queryList.queries
- PrestoJob scriptVariables and queryList.queries
|
EndpointConfig
Endpoint config for this cluster
| Fields | |
|---|---|
http_ports |
Output only. The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true. |
enable_http_port_access |
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false. |
EnvironmentConfig
Environment configuration for a workload.
| Fields | |
|---|---|
execution_config |
Optional. Execution configuration for a workload. |
peripherals_config |
Optional. Peripherals configuration that workload has access to. |
ExecutionConfig
Execution configuration for a workload.
| Fields | |
|---|---|
service_account |
Optional. Service account used to execute the workload. |
network_tags[] |
Optional. Tags used for network traffic control. |
kms_key |
Optional. The Cloud KMS key to use for encryption. |
idle_ttl |
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first. |
ttl |
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration. When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload is allowed to run until it exits naturally (or runs forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first. |
staging_bucket |
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket. |
authentication_config |
Optional. Authentication configuration used to set the default identity for the workload execution. The config specifies the type of identity (service account or user) that will be used by workloads to access resources on the project(s). |
resource_manager_tags |
Optional. Associates Resource Manager tags with the workload nodes. There is a max limit of 30 tags. Keys and values can be either in numeric format, such as tagKeys/{tag_key_id} and tagValues/{tag_value_id}, or in namespaced format, such as {organization_id}/{tag_key_short_name}={tag_value_short_name}. |
Union field network. Network configuration for workload execution. network can be only one of the following: |
|
network_uri |
Optional. Network URI to connect workload to. |
subnetwork_uri |
Optional. Subnetwork URI to connect workload to. |
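As a sketch, wiring an ExecutionConfig into a batch with the Python client might look like the following; the service account, subnetwork, bucket, and jar path are placeholder assumptions.

```python
from google.cloud import dataproc_v1

environment_config = dataproc_v1.EnvironmentConfig(
    execution_config=dataproc_v1.ExecutionConfig(
        service_account="workload-sa@my-project.iam.gserviceaccount.com",
        subnetwork_uri="my-subnet",          # union field: network_uri OR subnetwork_uri
        ttl={"seconds": 4 * 3600},           # hard cap on workload lifetime
        staging_bucket="my-staging-bucket",  # bucket name, not a gs:// URI
    )
)
batch = dataproc_v1.Batch(
    environment_config=environment_config,
    spark_batch=dataproc_v1.SparkBatch(
        main_class="org.apache.spark.examples.SparkPi",
        jar_file_uris=["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
    ),
)
```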
FailureAction
Actions in response to failure of a resource associated with a cluster.
| Enums | |
|---|---|
FAILURE_ACTION_UNSPECIFIED |
When FailureAction is unspecified, failure action defaults to NO_ACTION. |
NO_ACTION |
Take no action on failure to create a cluster resource. NO_ACTION is the default. |
DELETE |
Delete the failed cluster resource. |
FlinkJob
A Dataproc job for running Apache Flink applications on YARN.
| Fields | |
|---|---|
args[] |
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission. |
jar_file_uris[] |
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks. |
savepoint_uri |
Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job. |
properties |
Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code. |
logging_config |
Optional. The runtime log config for job execution. |
Union field driver. Required. The specification of the main method to call to drive the job. Specify either the jar file that contains the main class or the main class name. To pass both a main jar and a main class in the jar, add the jar to jarFileUris, and then specify the main class name in mainClass. driver can be only one of the following: |
|
main_jar_file_uri |
The HCFS URI of the jar file that contains the main class. |
main_class |
The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris. |
GceClusterConfig
Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.
| Fields | |
|---|---|
zone_uri |
Optional. The Compute Engine zone where the cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples:
- https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
- projects/[project_id]/zones/[zone]
- us-central1-f
|
auto_zone_exclude_zone_uris[] |
Optional. A list of Compute Engine zones where the cluster will not be located when Auto Zone placement is enabled. Only one of zone_uri and auto_zone_exclude_zone_uris can be set. A full URL, partial URI, or short name are valid. Examples:
- https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
- projects/[project_id]/zones/[zone]
- us-central1-f
|
network_uri |
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks for more information). A full URL, partial URI, or short name are valid. Examples:
- https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default
- projects/[project_id]/global/networks/default
- default
|
subnetwork_uri |
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples:
- https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0
- projects/[project_id]/regions/[region]/subnetworks/sub0
- sub0
|
private_ipv6_google_access |
Optional. The type of IPv6 access for a cluster. |
service_account |
Optional. The VM service account (also see VM Data Plane identity) used by cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account is used. |
service_account_scopes[] |
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included:
- https://www.googleapis.com/auth/cloud.useraccounts.readonly
- https://www.googleapis.com/auth/devstorage.read_write
- https://www.googleapis.com/auth/logging.write
If no scopes are specified, the following defaults are also provided:
- https://www.googleapis.com/auth/bigquery
- https://www.googleapis.com/auth/bigtable.admin.table
- https://www.googleapis.com/auth/bigtable.data
- https://www.googleapis.com/auth/devstorage.full_control |
tags[] |
The Compute Engine network tags to add to all instances (see Tagging instances). |
metadata |
Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata). |
reservation_affinity |
Optional. Reservation Affinity for consuming Zonal reservation. |
node_group_affinity |
Optional. Node Group Affinity for sole-tenant clusters. |
shielded_instance_config |
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs. |
confidential_instance_config |
Optional. Confidential Instance Config for clusters using Confidential VMs. |
resource_manager_tags |
Optional. Resource manager tags to add to all instances (see Use secure tags). |
internal_ip_only |
Optional. This setting applies to subnetwork-enabled networks. It is set to true by default in clusters created with image versions 2.2.x and later, and to false in earlier image versions. When set to true, all cluster VMs have internal IP addresses only; Private Google Access must be enabled on the subnetwork for the cluster VMs to reach Google APIs and services. When set to false, cluster VMs are not restricted to internal IP addresses and may be assigned external IP addresses.
|
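For illustration, a sketch of common GceClusterConfig settings with the Python client; the zone, subnetwork, service account, tags, and metadata values are placeholder assumptions.

```python
from google.cloud import dataproc_v1

gce_config = dataproc_v1.GceClusterConfig(
    zone_uri="us-central1-f",
    subnetwork_uri="my-subnet",   # cannot be combined with network_uri
    internal_ip_only=True,        # VMs get internal IP addresses only
    service_account="cluster-vm-sa@my-project.iam.gserviceaccount.com",
    tags=["dataproc-cluster"],    # Compute Engine network tags
    metadata={"custom-key": "custom-value"},
)
cluster_config = dataproc_v1.ClusterConfig(gce_cluster_config=gce_config)
```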
PrivateIpv6GoogleAccess
PrivateIpv6GoogleAccess controls whether and how cluster nodes can communicate with Google Services through gRPC over IPv6. These values are directly mapped to corresponding values in the Compute Engine Instance fields.
| Enums | |
|---|---|
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED |
If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK. |
INHERIT_FROM_SUBNETWORK |
Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior. |
OUTBOUND |
Enables outbound private IPv6 access to Google Services from the cluster. |
BIDIRECTIONAL |
Enables bidirectional private IPv6 access between Google Services and the cluster. |
GetAutoscalingPolicyRequest
A request to fetch an autoscaling policy.
| Fields | |
|---|---|
name |
Required. The "resource name" of the autoscaling policy, as described in https://cloud.google.com/apis/design/resource_names.
Authorization requires the following IAM permission on the specified resource
|
GetBatchRequest
A request to get the resource representation for a batch workload.
| Fields | |
|---|---|
name |
Required. The fully qualified name of the batch to retrieve, in the format "projects/PROJECT_ID/locations/DATAPROC_REGION/batches/BATCH_ID". Authorization requires the following IAM permission on the specified resource name: dataproc.batches.get
|
GetClusterRequest
Request to get the resource representation for a cluster in a project.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the cluster belongs to. |
region |
Required. The region in which to handle the request. |
cluster_name |
Required. The cluster name. Authorization requires the following IAM permission on the specified resource cluster_name: dataproc.clusters.get
|
GetJobRequest
A request to get the resource representation for a job in a project.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the job belongs to. |
region |
Required. The Dataproc region in which to handle the request. |
job_id |
Required. The job ID. Authorization requires the following IAM permission on the specified resource job_id: dataproc.jobs.get
|
GetNodeGroupRequest
A request to get a node group.
| Fields | |
|---|---|
name |
Required. The name of the node group to retrieve. Format: projects/{project}/regions/{region}/clusters/{cluster}/nodeGroups/{nodeGroup} |
GetSessionRequest
A request to get the resource representation for a session.
| Fields | |
|---|---|
name |
Required. The name of the session to retrieve. Authorization requires the following IAM permission on the specified resource name: dataproc.sessions.get
|
GetSessionTemplateRequest
A request to get the resource representation for a session template.
| Fields | |
|---|---|
name |
Required. The name of the session template to retrieve. Authorization requires the following IAM permission on the specified resource name: dataproc.sessionTemplates.get
|
GetWorkflowTemplateRequest
A request to fetch a workflow template.
| Fields | |
|---|---|
name |
Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
- For projects.regions.workflowTemplates.get, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
- For projects.locations.workflowTemplates.get, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
Authorization requires the following IAM permission on the specified resource name: dataproc.workflowTemplates.get
|
version |
Optional. The version of workflow template to retrieve. Only previously instantiated versions can be retrieved. If unspecified, retrieves the current version. |
GkeClusterConfig
The cluster's GKE config.
| Fields | |
|---|---|
namespaced_gke_deployment_target |
Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment. |
gke_cluster_target |
Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}' |
node_pool_target[] |
Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. |
NamespacedGkeDeploymentTarget
Deprecated. Used only for the deprecated beta. A full, namespace-isolated deployment target for an existing GKE cluster.
| Fields | |
|---|---|
target_gke_cluster |
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}' |
cluster_namespace |
Optional. A namespace within the GKE cluster to deploy into. |
GkeNodePoolConfig
The configuration of a GKE node pool used by a Dataproc-on-GKE cluster.
| Fields | |
|---|---|
config |
Optional. The node pool configuration. |
locations[] |
Optional. The list of Compute Engine zones where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone. |
autoscaling |
Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present. |
GkeNodeConfig
Parameters that describe cluster nodes.
| Fields | |
|---|---|
machine_type |
Optional. The name of a Compute Engine machine type. |
local_ssd_count |
Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs). |
preemptible |
Optional. Whether the nodes are created as legacy preemptible VM instances. Also see Spot VMs, preemptible VM instances without a maximum lifetime. |
accelerators[] |
Optional. A list of hardware accelerators to attach to each node. |
min_cpu_platform |
Optional. Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge". |
spot |
Optional. Whether the nodes are created as Spot VM instances. Spot VMs are the latest update to legacy preemptible VMs, and do not have a maximum lifetime. |
GkeNodePoolAcceleratorConfig
A GkeNodePoolAcceleratorConfig represents a Hardware Accelerator request for a node pool.
| Fields | |
|---|---|
accelerator_count |
The number of accelerator cards exposed to an instance. |
accelerator_type |
The accelerator type resource name (see GPUs on Compute Engine). |
gpu_partition_size |
Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide. |
GkeNodePoolAutoscalingConfig
GkeNodePoolAutoscaling contains information the cluster autoscaler needs to adjust the size of the node pool to the current cluster usage.
| Fields | |
|---|---|
min_node_count |
The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count. |
max_node_count |
The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster. |
GkeNodePoolTarget
GKE node pools that Dataproc workloads run on.
| Fields | |
|---|---|
node_pool |
Required. The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}' |
roles[] |
Required. The roles associated with the GKE node pool. |
node_pool_config |
Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field; it will not be returned by the API. |
Role
Role specifies the tasks that will run on the node pool. Roles can be specific to workloads. Exactly one GkeNodePoolTarget within the virtual cluster must have the DEFAULT role, which is used to run all workloads that are not associated with a node pool.
| Enums | |
|---|---|
ROLE_UNSPECIFIED |
Role is unspecified. |
DEFAULT |
At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role. |
CONTROLLER |
Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements. |
SPARK_DRIVER |
Run work associated with a Spark driver of a job. |
SPARK_EXECUTOR |
Run work associated with a Spark executor of a job. |
HadoopJob
A Dataproc job for running Apache Hadoop MapReduce jobs on Apache Hadoop YARN.
| Fields | |
|---|---|
args[] |
Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
jar_file_uris[] |
Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks. |
file_uris[] |
Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks. |
archive_uris[] |
Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip. |
properties |
Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code. |
logging_config |
Optional. The runtime log config for job execution. |
Union field driver. Required. Indicates the location of the driver's main class. Specify either the jar file that contains the main class or the main class name. To specify both, add the jar file to jar_file_uris, and then specify the main class name in this property. driver can be only one of the following: |
|
main_jar_file_uri |
The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar' |
main_class |
The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris. |
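To see how the driver union is used in practice, here is a minimal sketch of submitting a HadoopJob with the google-cloud-dataproc Python client library; the project, region, cluster, bucket, and jar path are hypothetical stand-ins, not values from this reference.

```python
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = dataproc_v1.Job(
    placement=dataproc_v1.JobPlacement(cluster_name="my-cluster"),
    hadoop_job=dataproc_v1.HadoopJob(
        # driver union: set main_jar_file_uri OR main_class. To use both a
        # jar and a class name, put the jar in jar_file_uris and name the
        # class in main_class, as described above.
        main_class="org.apache.hadoop.examples.WordCount",
        jar_file_uris=[
            "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar"
        ],
        args=["gs://my-bucket/input/", "gs://my-bucket/output/"],
    ),
)

submitted = client.submit_job(
    request={"project_id": "my-project", "region": region, "job": job}
)
print(submitted.reference.job_id)
```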
HiveJob
A Dataproc job for running Apache Hive queries on YARN.
| Fields | |
|---|---|
continue_on_failure |
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries. |
script_variables |
Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";). |
properties |
Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code. |
jar_file_uris[] |
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs. |
Union field queries. Required. The sequence of Hive queries to execute, specified as either an HCFS file URI or a list of queries. queries can be only one of the following: |
|
query_file_uri |
The HCFS URI of the script that contains Hive queries. |
query_list |
A list of queries. |
IdentityConfig
Identity related configuration, including service account based secure multi-tenancy user mappings.
| Fields | |
|---|---|
user_service_account_mapping |
Required. Map of user to service account. |
InstanceFlexibilityPolicy
Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
| Fields | |
|---|---|
provisioning_model_mix |
Optional. Defines how the Group selects the provisioning model to ensure required reliability. |
instance_selection_list[] |
Optional. List of instance selection options that the group will use when creating new VMs. |
instance_selection_results[] |
Output only. A list of instance selection results in the group. |
instance_machine_types |
Output only. A map of instance short name to machine type. The key is the short name of the Compute Engine instance, and the value is the full machine-type name (e.g., 'n1-standard-16'). See Machine types for more information on valid machine type strings. |
InstanceSelection
Defines machines types and a rank to which the machines types belong.
| Fields | |
|---|---|
machine_types[] |
Optional. Full machine-type names, e.g. "n1-standard-16". |
rank |
Optional. Preference of this instance selection. Lower number means higher preference. The service will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference. |
InstanceSelectionResult
Defines a mapping from machine types to the number of VMs that are created with each machine type.
| Fields | |
|---|---|
machine_type |
Output only. Full machine-type names, e.g. "n1-standard-16". |
vm_count |
Output only. Number of VM provisioned with the machine_type. |
ProvisioningModelMix
Defines how to create VMs with a mixture of provisioning models.
| Fields | |
|---|---|
standard_capacity_base |
Optional. The base capacity that will always use Standard VMs to avoid risk of more preemption than the minimum capacity you need. The service will create only standard VMs until it reaches standard_capacity_base, then it will start using standard_capacity_percent_above_base to mix Spot with Standard VMs. For example, if 15 instances are requested and standard_capacity_base is 5, the service will create 5 standard VMs and then start mixing spot and standard VMs for the remaining 10 instances. |
standard_capacity_percent_above_base |
Optional. The percentage of target capacity that should use Standard VMs. The remaining percentage will use Spot VMs. The percentage applies only to the capacity above standard_capacity_base. For example, if 15 instances are requested, standard_capacity_base is 5, and standard_capacity_percent_above_base is 30, the service will create 5 standard VMs and then start mixing spot and standard VMs for the remaining 10 instances; the mix will be 30% standard and 70% spot. |
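To make the arithmetic concrete, here is a small Python sketch of the example above; how the service rounds the standard share at the boundary is not specified here, so the use of round() is an assumption.

```python
# Worked example of the provisioning model mix described above:
# 15 requested instances, standard_capacity_base = 5,
# standard_capacity_percent_above_base = 30.
requested = 15
base = 5                 # standard_capacity_base: always Standard VMs
percent_above = 30       # standard_capacity_percent_above_base

above_base = requested - base                              # 10 instances
standard_above = round(above_base * percent_above / 100)   # 3 Standard VMs
spot = above_base - standard_above                         # 7 Spot VMs

print(f"{base + standard_above} Standard VMs, {spot} Spot VMs")
# -> 8 Standard VMs, 7 Spot VMs
```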
InstanceGroupAutoscalingPolicyConfig
Configuration for the size bounds of an instance group, including its proportional size to other groups.
| Fields | |
|---|---|
min_instances |
Optional. Minimum number of instances for this group. Primary workers - Bounds: [2, max_instances]. Default: 2. Secondary workers - Bounds: [0, max_instances]. Default: 0. |
max_instances |
Required. Maximum number of instances for this group. Required for primary workers. Note that by default, clusters will not use secondary workers. Required for secondary workers if the minimum secondary instances is set. Primary workers - Bounds: [min_instances, ). Secondary workers - Bounds: [min_instances, ). Default: 0. |
weight |
Optional. Weight for the instance group, which is used to determine the fraction of total workers in the cluster from this instance group. For example, if primary workers have weight 2, and secondary workers have weight 1, the cluster will have approximately 2 primary workers for each secondary worker. The cluster may not reach the specified balance if constrained by min/max bounds or other autoscaling settings. For example, if max_instances for secondary workers is 0, only primary workers will be added; the cluster can also be out of balance when created. If weight is not set on any instance group, the cluster will default to equal weight for all groups: the cluster will attempt to maintain an equal number of workers in each group within the configured size bounds for each group. If weight is set for one group only, the cluster will default to zero weight on the unset group. For example, if weight is set only on primary workers, the cluster will use primary workers only and no secondary workers. |
InstanceGroupConfig
The config settings for Compute Engine resources in an instance group, such as a master or worker group.
| Fields | |
|---|---|
num_instances |
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1. |
instance_names[] |
Output only. The list of instance names, derived from cluster_name, num_instances, and the instance group. |
image_uri |
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples:
Image family examples. The service will use the most recent image from the family:
If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default. |
machine_type_uri |
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples:
Auto Zone Exception: If you are using Auto Zone Placement, you must use the short name of the machine type resource, for example, n1-standard-2. |
disk_config |
Optional. Disk option config settings. |
is_preemptible |
Output only. Specifies that this instance group contains preemptible instances. |
preemptibility |
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE; this default cannot be changed. The default value for secondary instances is PREEMPTIBLE. |
managed_group_config |
Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups. |
accelerators[] |
Optional. The Compute Engine accelerator configuration for these instances. |
min_cpu_platform |
Optional. Specifies the minimum cpu platform for the Instance Group. See Minimum CPU Platform. |
min_num_instances |
Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: if 4 VMs are created and 1 instance fails, the failed VM is deleted, and the cluster is resized to 4 instances and placed in a RUNNING state; if 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state (the failed VMs are not deleted). |
instance_flexibility_policy |
Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models. |
startup_config |
Optional. Configuration to handle the startup of instances during cluster create and update process. |
Preemptibility
Controls the use of preemptible instances within the group.
| Enums | |
|---|---|
PREEMPTIBILITY_UNSPECIFIED |
Preemptibility is unspecified, the system will choose the appropriate setting for each instance group. |
NON_PREEMPTIBLE |
Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups. |
PREEMPTIBLE |
Instances are preemptible. This option is allowed only for secondary worker groups. |
SPOT |
Instances are Spot VMs. This option is allowed only for secondary worker groups. Spot VMs are the latest version of preemptible VMs, and provide additional features. |
InstantiateInlineWorkflowTemplateRequest
A request to instantiate an inline workflow template.
| Fields | |
|---|---|
parent |
Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
Authorization requires the following IAM permission on the specified resource: dataproc.workflowTemplates.instantiateInline |
template |
Required. The workflow template to instantiate. |
request_id |
Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries. It is recommended to always set this value to a UUID. The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
InstantiateWorkflowTemplateRequest
A request to instantiate a workflow template.
| Fields | |
|---|---|
name |
Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
Authorization requires the following IAM permission on the specified resource: dataproc.workflowTemplates.instantiate |
version |
Optional. The version of workflow template to instantiate. If specified, the workflow will be instantiated only if the current version of the workflow template has the supplied version. This option cannot be used to instantiate a previous version of workflow template. |
request_id |
Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries. It is recommended to always set this value to a UUID. The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
parameters |
Optional. Map from parameter names to values that should be used for those parameters. Values may not exceed 1000 characters. |
Job
A Dataproc job resource.
| Fields | |
|---|---|
reference |
Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id. |
placement |
Required. Job information, including how, when, and where to run the job. |
status |
Output only. The job status. Additional application-specific status information might be contained in the type_job and yarn_applications fields. |
status_history[] |
Output only. The previous job status. |
yarn_applications[] |
Output only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It might be changed before final release. |
driver_output_resource_uri |
Output only. A URI pointing to the location of the stdout of the job's driver program. |
driver_control_files_uri |
Output only. If present, the location of miscellaneous control files which can be used as part of job setup and handling. If not present, control files might be placed in the same location as driver_output_uri. |
labels |
Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job. |
scheduling |
Optional. Job scheduling configuration. |
job_uuid |
Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that might be reused over time. |
done |
Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state will indicate if it was successful, failed, or cancelled. |
driver_scheduling_config |
Optional. Driver scheduling configuration. |
Union field type_job. Required. The application/framework-specific portion of the job. type_job can be only one of the following: |
|
hadoop_job |
Optional. Job is a Hadoop job. |
spark_job |
Optional. Job is a Spark job. |
pyspark_job |
Optional. Job is a PySpark job. |
hive_job |
Optional. Job is a Hive job. |
pig_job |
Optional. Job is a Pig job. |
spark_r_job |
Optional. Job is a SparkR job. |
spark_sql_job |
Optional. Job is a SparkSql job. |
presto_job |
Optional. Job is a Presto job. |
flink_job |
Optional. Job is a Flink job. |
JobMetadata
Job Operation metadata.
| Fields | |
|---|---|
job_id |
Output only. The job id. |
status |
Output only. Most recent job status. |
operation_type |
Output only. Operation type. |
start_time |
Output only. Job submission time. |
JobPlacement
Dataproc job config.
| Fields | |
|---|---|
cluster_name |
Required. The name of the cluster where the job will be submitted. |
cluster_uuid |
Output only. A cluster UUID generated by the Dataproc service when the job is submitted. |
cluster_labels |
Optional. Cluster labels to identify a cluster where the job will be submitted. |
JobReference
Encapsulates the full scoping used to reference a job.
| Fields | |
|---|---|
project_id |
Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID. |
job_id |
Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server. |
JobScheduling
Job scheduling options.
| Fields | |
|---|---|
max_failures_per_hour |
Optional. Maximum number of times per hour a driver can be restarted as a result of driver exiting with non-zero code before job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates. |
max_failures_total |
Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates. |
JobStatus
Dataproc job status.
| Fields | |
|---|---|
state |
Output only. A state message specifying the overall job state. |
details |
Optional. Output only. Job state details, such as an error description if the state is ERROR. |
state_start_time |
Output only. The time when this state was entered. |
substate |
Output only. Additional state information, which includes status reported by the agent. |
State
The job state.
| Enums | |
|---|---|
STATE_UNSPECIFIED |
The job state is unknown. |
PENDING |
The job is pending; it has been submitted, but is not yet running. |
SETUP_DONE |
Job has been received by the service and completed initial setup; it will soon be submitted to the cluster. |
RUNNING |
The job is running on the cluster. |
CANCEL_PENDING |
A CancelJob request has been received, but is pending. |
CANCEL_STARTED |
Transient in-flight resources have been canceled, and the request to cancel the running job has been issued to the cluster. |
CANCELLED |
The job cancellation was successful. |
DONE |
The job has completed successfully. |
ERROR |
The job has completed, but encountered an error. |
ATTEMPT_FAILURE |
Job attempt has failed. The detail field contains failure details for this attempt. Applies to restartable jobs only. |
Substate
The job substate.
| Enums | |
|---|---|
UNSPECIFIED |
The job substate is unknown. |
SUBMITTED |
The Job is submitted to the agent. Applies to RUNNING state. |
QUEUED |
The Job has been received and is awaiting execution (it might be waiting for a condition to be met). See the "details" field for the reason for the delay. Applies to RUNNING state. |
STALE_STATUS |
The agent-reported status is out of date, which can be caused by a loss of communication between the agent and Dataproc. If the agent does not send a timely update, the job will fail. Applies to RUNNING state. |
JupyterConfig
Jupyter configuration for an interactive session.
| Fields | |
|---|---|
kernel |
Optional. Kernel |
display_name |
Optional. Display name, shown in the Jupyter kernelspec card. |
Kernel
Jupyter kernel types.
| Enums | |
|---|---|
KERNEL_UNSPECIFIED |
The kernel is unknown. |
PYTHON |
Python kernel. |
SCALA |
Scala kernel. |
KerberosConfig
Specifies Kerberos related configuration.
| Fields | |
|---|---|
enable_kerberos |
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster. |
root_principal_password_uri |
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password. |
kms_key_uri |
Optional. The URI of the KMS key used to encrypt sensitive files. |
keystore_uri |
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, the service will provide a self-signed certificate. |
truststore_uri |
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, the service will provide a self-signed certificate. |
keystore_password_uri |
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by the service. |
key_password_uri |
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by the service. |
truststore_password_uri |
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by the service. |
cross_realm_trust_realm |
Optional. The remote realm the on-cluster KDC will trust, should the user enable cross realm trust. |
cross_realm_trust_kdc |
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship. |
cross_realm_trust_admin_server |
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship. |
cross_realm_trust_shared_password_uri |
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship. |
kdc_db_key_uri |
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database. |
tgt_lifetime_hours |
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used. |
realm |
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm. |
KubernetesClusterConfig
The configuration for running the Dataproc cluster on Kubernetes.
| Fields | |
|---|---|
kubernetes_namespace |
Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used. |
kubernetes_software_config |
Optional. The software configuration for this Dataproc cluster running on Kubernetes. |
Union field config. config can be only one of the following: |
|
gke_cluster_config |
Required. The configuration for running the Dataproc cluster on GKE. |
KubernetesSoftwareConfig
The software configuration for this Dataproc cluster running on Kubernetes.
| Fields | |
|---|---|
component_version |
The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified. |
properties |
The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. For more information, see Cluster properties. |
LifecycleConfig
Specifies the cluster auto-delete schedule configuration.
| Fields | |
|---|---|
idle_delete_ttl |
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration). |
idle_stop_ttl |
Optional. The duration to keep the cluster started while idling (when no jobs are running). Passing this threshold will cause the cluster to be stopped. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration). |
idle_start_time |
Output only. The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp). |
Union field ttl. Either the exact time the cluster should be deleted at or the cluster maximum age. ttl can be only one of the following: |
|
auto_delete_time |
Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp). |
auto_delete_ttl |
Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration). |
Union field stop_ttl. Either the exact time the cluster should be stopped at or the cluster maximum age. stop_ttl can be only one of the following: |
|
auto_stop_time |
Optional. The time when cluster will be auto-stopped (see JSON representation of Timestamp). |
auto_stop_ttl |
Optional. The lifetime duration of the cluster. The cluster will be auto-stopped at the end of this period, calculated from the time of submission of the create or update cluster request. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration). |
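As an illustration of how these TTL fields fit into a cluster definition, here is a minimal sketch using the google-cloud-dataproc Python client library; the durations are arbitrary, and choosing auto_delete_ttl rather than auto_delete_time is just one side of the ttl union.

```python
from google.cloud import dataproc_v1
from google.protobuf import duration_pb2

lifecycle = dataproc_v1.LifecycleConfig(
    # Delete the cluster after 2 idle hours (no running jobs);
    # allowed range is 5 minutes to 14 days.
    idle_delete_ttl=duration_pb2.Duration(seconds=2 * 3600),
    # ttl union: set auto_delete_time OR auto_delete_ttl, not both.
    # Here the cluster lives at most 8 hours regardless of activity.
    auto_delete_ttl=duration_pb2.Duration(seconds=8 * 3600),
)

config = dataproc_v1.ClusterConfig(lifecycle_config=lifecycle)
```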
ListAutoscalingPoliciesRequest
A request to list autoscaling policies in a project.
| Fields | |
|---|---|
parent |
Required. The "resource name" of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
Authorization requires the following IAM permission on the specified resource: dataproc.autoscalingPolicies.list |
page_size |
Optional. The maximum number of results to return in each response. Must be less than or equal to 1000. Defaults to 100. |
page_token |
Optional. The page token, returned by a previous call, to request the next page of results. |
ListAutoscalingPoliciesResponse
A response to a request to list autoscaling policies in a project.
| Fields | |
|---|---|
policies[] |
Output only. Autoscaling policies list. |
next_page_token |
Output only. This token is included in the response if there are more results to fetch. |
ListBatchesRequest
A request to list batch workloads in a project.
| Fields | |
|---|---|
parent |
Required. The parent, which owns this collection of batches. Authorization requires the following IAM permission on the specified resource: dataproc.batches.list |
page_size |
Optional. The maximum number of batches to return in each response. The service may return fewer than this value. The default page size is 20; the maximum page size is 1000. |
page_token |
Optional. A page token received from a previous ListBatches call. Provide this token to retrieve the subsequent page. |
filter |
Optional. A filter for the batches to return in the response. A filter is a logical expression constraining the values of various fields in each batch resource. Filters are case sensitive, and may contain multiple clauses combined with logical operators (AND/OR). Supported fields are batch_id, batch_uuid, state, create_time, and labels. For example, state = RUNNING and create_time < "2023-01-01T00:00:00Z" filters for batches in a RUNNING state that were created before 2023-01-01. See https://google.aip.dev/assets/misc/ebnf-filtering.txt for a detailed description of the filter syntax and a list of supported comparisons. |
order_by |
Optional. Field(s) on which to sort the list of batches. Currently the only supported sort orders are unspecified (empty) and create_time desc to sort by most recently created batches first. See https://google.aip.dev/132#ordering for more details. |
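A short sketch of listing batches with a filter and sort order, using the Python client library; the project name and filter values are hypothetical.

```python
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# The returned pager follows next_page_token transparently.
for batch in client.list_batches(
    request={
        "parent": f"projects/my-project/locations/{region}",
        "filter": 'state = RUNNING AND create_time > "2024-01-01T00:00:00Z"',
        "order_by": "create_time desc",  # most recently created first
    }
):
    print(batch.name, batch.state.name)
```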
ListBatchesResponse
A list of batch workloads.
| Fields | |
|---|---|
batches[] |
Output only. The batches from the specified collection. |
next_page_token |
A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
unreachable[] |
Output only. List of Batches that could not be included in the response. Attempting to get one of these resources may indicate why it was not included in the list response. |
ListClustersRequest
A request to list the clusters in a project.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the cluster belongs to. Authorization requires the following IAM permission on the specified resource: dataproc.clusters.list |
region |
Required. The region in which to handle the request. |
filter |
Optional. A filter constraining the clusters to list. Filters are case-sensitive and have the following syntax: field = value [AND [field = value]] ... where field is one of status.state, clusterName, or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be one of the following: ACTIVE, INACTIVE, CREATING, RUNNING, ERROR, DELETING, UPDATING, STOPPING, or STOPPED. ACTIVE contains the CREATING, UPDATING, and RUNNING states; INACTIVE contains the DELETING, ERROR, STOPPING, and STOPPED states. clusterName is the name of the cluster provided at creation time. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator. Example filter: status.state = ACTIVE AND clusterName = mycluster AND labels.env = staging AND labels.starred = * |
page_size |
Optional. The maximum number of clusters to return in each response. The service may return fewer than this value. If unspecified, the default value is 200. The maximum value is 1000. |
page_token |
Optional. A page token received from a previous ListClusters call. Provide this token to retrieve the subsequent page. |
ListClustersResponse
The list of all clusters in a project.
| Fields | |
|---|---|
clusters[] |
Output only. The clusters in the project. |
next_page_token |
Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListClustersRequest. |
ListJobsRequest
A request to list jobs in a project.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the job belongs to. Authorization requires the following IAM permission on the specified resource: dataproc.jobs.list |
region |
Required. The Dataproc region in which to handle the request. |
page_size |
Optional. The number of results to return in each response. |
page_token |
Optional. The page token, returned by a previous call, to request the next page of results. |
cluster_name |
Optional. If set, the returned jobs list includes only jobs that were submitted to the named cluster. |
job_state_matcher |
Optional. Specifies enumerated categories of jobs to list (default = match ALL jobs). If filter is provided, job_state_matcher will be ignored. |
filter |
Optional. A filter constraining the jobs to list. Filters are case-sensitive and have the following syntax: [field = value] AND [field [= value]] ... where field is status.state or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be either ACTIVE or NON_ACTIVE. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator. Example filter: status.state = ACTIVE AND labels.env = staging AND labels.starred = * AND insertTime <= "2025-01-01T00:00:00Z" |
JobStateMatcher
A matcher that specifies categories of job states.
| Enums | |
|---|---|
ALL |
Match all jobs, regardless of state. |
ACTIVE |
Only match jobs in non-terminal states: PENDING, RUNNING, or CANCEL_PENDING. |
NON_ACTIVE |
Only match jobs in terminal states: CANCELLED, DONE, or ERROR. |
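A short sketch of using the ACTIVE matcher with the Python client library; the project and cluster names are hypothetical.

```python
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# ACTIVE matches only PENDING, RUNNING, and CANCEL_PENDING jobs.
for job in client.list_jobs(
    request={
        "project_id": "my-project",
        "region": region,
        "cluster_name": "my-cluster",
        "job_state_matcher": dataproc_v1.ListJobsRequest.JobStateMatcher.ACTIVE,
    }
):
    print(job.reference.job_id, job.status.state.name)
```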
ListJobsResponse
A list of jobs in a project.
| Fields | |
|---|---|
jobs[] |
Output only. Jobs list. |
next_page_token |
Optional. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListJobsRequest. |
unreachable[] |
Output only. List of jobs with kms_key-encrypted parameters that could not be decrypted. A response to a jobs.get request may indicate the reason for the decryption failure for a specific job. |
ListSessionTemplatesRequest
A request to list session templates in a project.
| Fields | |
|---|---|
parent |
Required. The parent that owns this collection of session templates. Authorization requires the following IAM permission on the specified resource: dataproc.sessionTemplates.list |
page_size |
Optional. The maximum number of sessions to return in each response. The service may return fewer than this value. |
page_token |
Optional. A page token received from a previous ListSessionTemplates call. Provide this token to retrieve the subsequent page. |
ListSessionTemplatesResponse
A list of session templates.
| Fields | |
|---|---|
session_templates[] |
Output only. Session template list. |
next_page_token |
A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
ListSessionsRequest
A request to list sessions in a project.
| Fields | |
|---|---|
parent |
Required. The parent, which owns this collection of sessions. Authorization requires the following IAM permission on the specified resource: dataproc.sessions.list |
page_size |
Optional. The maximum number of sessions to return in each response. The service may return fewer than this value. |
page_token |
Optional. A page token received from a previous ListSessions call. Provide this token to retrieve the subsequent page. |
filter |
Optional. A filter for the sessions to return in the response. A filter is a logical expression constraining the values of various fields in each session resource. Filters are case sensitive, and may contain multiple clauses combined with logical operators (AND, OR). Supported fields are session_id, session_uuid, state, create_time, and labels. Example: state = ACTIVE and create_time < "2023-01-01T00:00:00Z" filters for sessions in an ACTIVE state that were created before 2023-01-01. See https://google.aip.dev/assets/misc/ebnf-filtering.txt for a detailed description of the filter syntax and a list of supported comparators. |
ListSessionsResponse
A list of interactive sessions.
| Fields | |
|---|---|
sessions[] |
Output only. The sessions from the specified collection. |
next_page_token |
A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages. |
ListWorkflowTemplatesRequest
A request to list workflow templates in a project.
| Fields | |
|---|---|
parent |
Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
Authorization requires the following IAM permission on the specified resource: dataproc.workflowTemplates.list |
page_size |
Optional. The maximum number of results to return in each response. |
page_token |
Optional. The page token, returned by a previous call, to request the next page of results. |
ListWorkflowTemplatesResponse
A response to a request to list workflow templates in a project.
| Fields | |
|---|---|
templates[] |
Output only. WorkflowTemplates list. |
next_page_token |
Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent ListWorkflowTemplatesRequest. |
unreachable[] |
Output only. List of workflow templates that could not be included in the response. Attempting to get one of these resources may indicate why it was not included in the list response. |
LoggingConfig
The runtime logging config of the job.
| Fields | |
|---|---|
driver_log_levels |
The per-package log levels for the driver. This can include the "root" package name to configure the root logger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'. |
Level
The Log4j level for job execution. When running an Apache Hive job, Cloud Dataproc configures the Hive client to an equivalent verbosity level.
| Enums | |
|---|---|
LEVEL_UNSPECIFIED |
Level is unspecified. Use default level for log4j. |
ALL |
Use ALL level for log4j. |
TRACE |
Use TRACE level for log4j. |
DEBUG |
Use DEBUG level for log4j. |
INFO |
Use INFO level for log4j. |
WARN |
Use WARN level for log4j. |
ERROR |
Use ERROR level for log4j. |
FATAL |
Use FATAL level for log4j. |
OFF |
Turn off log4j. |
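As an illustration of how these levels are used, here is a minimal sketch of a LoggingConfig with the Python client library; the package names and the hypothetical driver class are examples, not values from this reference.

```python
from google.cloud import dataproc_v1

# Per-package driver log levels; "root" configures the root logger.
logging_config = dataproc_v1.LoggingConfig(
    driver_log_levels={
        "root": dataproc_v1.LoggingConfig.Level.INFO,
        "org.apache": dataproc_v1.LoggingConfig.Level.DEBUG,
        "com.google": dataproc_v1.LoggingConfig.Level.FATAL,
    }
)

# The same config slots into any job type that accepts logging_config.
spark_job = dataproc_v1.SparkJob(
    main_class="com.example.Main",  # hypothetical driver class
    logging_config=logging_config,
)
```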
ManagedCluster
Cluster that is managed by the workflow.
| Fields | |
|---|---|
cluster_name |
Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters. |
config |
Required. The cluster configuration. |
labels |
Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62} Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63} No more than 32 labels can be associated with a given cluster. |
ManagedGroupConfig
Specifies the resources used to actively manage an instance group.
| Fields | |
|---|---|
instance_template_name |
Output only. The name of the Instance Template used for the Managed Instance Group. |
instance_group_manager_name |
Output only. The name of the Instance Group Manager for this group. |
instance_group_manager_uri |
Output only. The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm. |
MetastoreConfig
Specifies a Metastore configuration.
| Fields | |
|---|---|
dataproc_metastore_service |
Required. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name] |
NodeGroup
Node Group. The NodeGroup resource is not related to the NodeGroupAffinity resource.
| Fields | |
|---|---|
name |
The Node group resource name. |
roles[] |
Required. Node group roles. |
node_group_config |
Optional. The node group instance group configuration. |
labels |
Optional. Node group labels. |
Role
Node pool roles.
| Enums | |
|---|---|
ROLE_UNSPECIFIED |
Required unspecified role. |
DRIVER |
Job drivers run on the node pool. |
NodeGroupAffinity
Node Group Affinity for clusters using sole-tenant node groups. The NodeGroupAffinity resource is not related to the NodeGroup resource.
| Fields | |
|---|---|
node_group_uri |
Required. The URI of a sole-tenant node group resource that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, projects/[project_id]/zones/[zone]/nodeGroups/node-group-1, node-group-1 |
NodeGroupOperationMetadata
Metadata describing the node group operation.
| Fields | |
|---|---|
node_group_id |
Output only. Node group ID for the operation. |
cluster_uuid |
Output only. Cluster UUID associated with the node group operation. |
status |
Output only. Current operation status. |
status_history[] |
Output only. The previous operation status. |
operation_type |
The operation type. |
description |
Output only. Short description of operation. |
labels |
Output only. Labels associated with the operation. |
warnings[] |
Output only. Errors encountered during operation execution. |
NodeGroupOperationType
Operation type for node group resources.
| Enums | |
|---|---|
NODE_GROUP_OPERATION_TYPE_UNSPECIFIED |
Node group operation type is unknown. |
CREATE |
Create node group operation type. |
UPDATE |
Update node group operation type. |
DELETE |
Delete node group operation type. |
RESIZE |
Resize node group operation type. |
START |
Start node group operation type. |
STOP |
Stop node group operation type. |
NodeInitializationAction
Specifies an executable to run on a fully configured node and a timeout period for executable completion.
| Fields | |
|---|---|
executable_file |
Required. Cloud Storage URI of executable file. |
execution_timeout |
Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period. |
OrderedJob
A job executed by the workflow.
| Fields | |
|---|---|
step_id |
Required. The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisite_step_ids field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters. |
labels |
Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62} Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63} No more than 32 labels can be associated with a given job. |
scheduling |
Optional. Job scheduling configuration. |
prerequisite_step_ids[] |
Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow. |
Union field job_type. Required. The job definition. job_type can be only one of the following: |
|
hadoop_job |
Optional. Job is a Hadoop job. |
spark_job |
Optional. Job is a Spark job. |
pyspark_job |
Optional. Job is a PySpark job. |
hive_job |
Optional. Job is a Hive job. |
pig_job |
Optional. Job is a Pig job. |
spark_r_job |
Optional. Job is a SparkR job. |
spark_sql_job |
Optional. Job is a SparkSql job. |
presto_job |
Optional. Job is a Presto job. |
flink_job |
Optional. Job is a Flink job. |
ParameterValidation
Configuration for parameter validation.
| Fields | |
|---|---|
Union field validation_type. Required. The type of validation to be performed. validation_type can be only one of the following: |
|
regex |
Validation based on regular expressions. |
values |
Validation based on a list of allowed values. |
PeripheralsConfig
Auxiliary services configuration for a workload.
| Fields | |
|---|---|
metastore_service |
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name] |
spark_history_server_config |
Optional. The Spark History Server configuration for the workload. |
PigJob
A Dataproc job for running Apache Pig queries on YARN.
| Fields | |
|---|---|
continue_on_failure |
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries. |
script_variables |
Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]). |
properties |
Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code. |
jar_file_uris[] |
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs. |
logging_config |
Optional. The runtime log config for job execution. |
Union field queries. Required. The sequence of Pig queries to execute, specified as an HCFS file URI or a list of queries. queries can be only one of the following: |
|
query_file_uri |
The HCFS URI of the script that contains the Pig queries. |
query_list |
A list of queries. |
PrestoJob
A Dataproc job for running Presto queries. IMPORTANT: The Dataproc Presto Optional Component must be enabled when the cluster is created to submit a Presto job to the cluster.
| Fields | |
|---|---|
continue_on_failure |
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries. |
output_format |
Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats. |
client_tags[] |
Optional. Presto client tags to attach to this query. |
properties |
Optional. A mapping of property names to values. Used to set Presto session properties. Equivalent to using the --session flag in the Presto CLI. |
logging_config |
Optional. The runtime log config for job execution. |
Union field queries. Required. The sequence of Presto queries to execute, specified as either an HCFS file URI or as a list of queries. queries can be only one of the following: |
|
query_file_uri |
The HCFS URI of the script that contains SQL queries. |
query_list |
A list of queries. |
PropertiesInfo
Properties of the workload organized by origin.
| Fields | |
|---|---|
autotuning_properties |
Output only. Properties set by autotuning engine. |
ValueInfo
Annotated property value.
| Fields | |
|---|---|
value |
Property value. |
annotation |
Annotation, comment, or explanation of why the property was set. |
overridden_value |
Optional. Value which was replaced by the corresponding component. |
PyPiRepositoryConfig
Configuration for a PyPi repository.
| Fields | |
|---|---|
pypi_repository |
Optional. The PyPi repository address. Note: This field is not available for batch workloads. |
PySparkBatch
A configuration for running an Apache PySpark batch workload.
| Fields | |
|---|---|
main_python_file_uri |
Required. The HCFS URI of the main Python file to use as the Spark driver. Must be a .py file. |
args[] |
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission. |
python_file_uris[] |
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip. |
jar_file_uris[] |
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks. |
file_uris[] |
Optional. HCFS URIs of files to be placed in the working directory of each executor. |
archive_uris[] |
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
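A minimal sketch of creating a batch with a PySparkBatch payload via the Python client library; the project, bucket paths, and batch ID are hypothetical.

```python
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

batch = dataproc_v1.Batch(
    pyspark_batch=dataproc_v1.PySparkBatch(
        main_python_file_uri="gs://my-bucket/jobs/etl.py",  # must be a .py file
        args=["--date", "2024-01-01"],
        python_file_uris=["gs://my-bucket/jobs/helpers.zip"],
    )
)

# create_batch is a long-running operation; result() blocks until the
# batch reaches a terminal state.
operation = client.create_batch(
    request={
        "parent": f"projects/my-project/locations/{region}",
        "batch": batch,
        "batch_id": "etl-2024-01-01",
    }
)
print(operation.result().state.name)
```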
PySparkJob
A Dataproc job for running Apache PySpark applications on YARN.
| Fields | |
|---|---|
main_python_file_uri |
Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file. |
args[] |
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
python_file_uris[] |
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip. |
jar_file_uris[] |
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks. |
file_uris[] |
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks. |
archive_uris[] |
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. Note: Spark applications must be deployed in cluster mode for correct environment propagation. |
properties |
Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
logging_config |
Optional. The runtime log config for job execution. |
PySparkNotebookBatch
A configuration for running a PySpark Notebook batch workload.
| Fields | |
|---|---|
notebook_file_uri |
Required. The HCFS URI of the notebook file to execute. |
params |
Optional. The parameters to pass to the notebook. |
python_file_uris[] |
Optional. HCFS URIs of Python files to pass to the PySpark framework. |
jar_file_uris[] |
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH. |
file_uris[] |
Optional. HCFS URIs of files to be placed in the working directory of each executor. |
archive_uris[] |
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
QueryList
A list of queries to run on a cluster.
| Fields | |
|---|---|
queries[] |
Required. The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. A sketch of using a QueryList to specify a HiveJob follows below. |
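The snippet referenced above, rendered as a minimal Python sketch with the google-cloud-dataproc client library (the upstream reference shows the equivalent JSON); the query strings are placeholders.

```python
from google.cloud import dataproc_v1

# Trailing semicolons are optional, and a single string may carry several
# queries separated by semicolons ("query3;query4" below).
hive_job = dataproc_v1.HiveJob(
    query_list=dataproc_v1.QueryList(
        queries=["query1", "query2", "query3;query4"]
    )
)
```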
RegexValidation
Validation based on regular expressions.
| Fields | |
|---|---|
regexes[] |
Required. RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient). |
RepositoryConfig
Configuration for dependency repositories.
| Fields | |
|---|---|
pypi_repository_config |
Optional. Configuration for PyPi repository. |
ReservationAffinity
Reservation Affinity for consuming Zonal reservation.
| Fields | |
|---|---|
consume_reservation_type |
Optional. Type of reservation to consume. |
key |
Optional. Corresponds to the label key of reservation resource. |
values[] |
Optional. Corresponds to the label values of reservation resource. |
Type
Indicates whether to consume capacity from a reservation or not.
| Enums | |
|---|---|
TYPE_UNSPECIFIED |
|
NO_RESERVATION |
Do not consume from any allocated capacity. |
ANY_RESERVATION |
Consume any reservation available. |
SPECIFIC_RESERVATION |
Must consume from a specific reservation. Must specify key value fields for specifying the reservations. |
ResizeNodeGroupRequest
A request to resize a node group.
| Fields | |
|---|---|
name |
Required. The name of the node group to resize. Format: projects/{project}/regions/{region}/clusters/{cluster}/nodeGroups/{nodeGroup} |
size |
Required. The number of running instances for the node group to maintain. The group adds or removes instances to maintain the number of instances specified by this parameter. |
request_id |
Optional. A unique ID used to identify the request. If the server receives two ResizeNodeGroupRequests with the same ID, the second request is ignored and the first google.longrunning.Operation created and stored in the backend is returned. Recommendation: Set this value to a UUID. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
graceful_decommission_timeout |
Optional. Timeout for graceful YARN decommissioning. Graceful decommissioning allows the removal of nodes from the Compute Engine node group without interrupting jobs in progress. This timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). Default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day. (see JSON representation of Duration). Only supported on Dataproc image versions 1.2 and higher. |
parent_operation_id |
Optional. Operation ID of the parent operation sending the resize request. |
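A minimal sketch of a resize with a graceful decommission timeout, using the Python client library; the resource name and target size are hypothetical.

```python
from google.cloud import dataproc_v1
from google.protobuf import duration_pb2

region = "us-central1"
client = dataproc_v1.NodeGroupControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

name = (
    "projects/my-project/regions/us-central1/"
    "clusters/my-cluster/nodeGroups/my-node-group"
)

# Long-running operation; the one-hour graceful decommission timeout lets
# in-progress YARN work finish before nodes are removed.
operation = client.resize_node_group(
    request={
        "name": name,
        "size": 5,
        "graceful_decommission_timeout": duration_pb2.Duration(seconds=3600),
    }
)
print(operation.result().name)
```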
RuntimeConfig
Runtime configuration for a workload.
| Fields | |
|---|---|
version |
Optional. Version of the batch runtime. |
container_image |
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used. |
properties |
Optional. A mapping of property names to values, which are used to configure workload execution. |
repository_config |
Optional. Dependency repository configuration. |
autotuning_config |
Optional. Autotuning configuration of the workload. |
cohort |
Optional. Cohort identifier. Identifies families of the workloads that have the same shape, for example, daily ETL jobs. |
RuntimeInfo
Runtime information about workload execution.
| Fields | |
|---|---|
endpoints |
Output only. Map of remote access endpoints (such as web interfaces and APIs) to their URIs. |
output_uri |
Output only. A URI pointing to the location of the stdout and stderr of the workload. |
diagnostic_output_uri |
Output only. A URI pointing to the location of the diagnostics tarball. |
approximate_usage |
Output only. Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes for announcements, changes, fixes and other Dataproc developments). |
current_usage |
Output only. Snapshot of current workload resource usage. |
properties_info |
Optional. Properties of the workload organized by origin. |
cohort_info |
Output only. Information about the cohort that the workload belongs to. |
SecurityConfig
Security related configuration, including encryption, Kerberos, etc.
| Fields | |
|---|---|
kerberos_config |
Optional. Kerberos related configuration. |
identity_config |
Optional. Identity related configuration, including service account based secure multi-tenancy user mappings. |
Session
A representation of a session.
| Fields | |
|---|---|
name |
Identifier. The resource name of the session. |
uuid |
Output only. A session UUID (Unique Universal Identifier). The service generates this value when it creates the session. |
create_time |
Output only. The time when the session was created. |
runtime_info |
Output only. Runtime information about session execution. |
state |
Output only. A state of the session. |
state_message |
Output only. Session state details, such as the failure description if the state is |
state_time |
Output only. The time when the session entered the current state. |
creator |
Output only. The email address of the user who created the session. |
labels |
Optional. The labels to associate with the session. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a session. |
runtime_config |
Optional. Runtime configuration for the session execution. |
environment_config |
Optional. Environment configuration for the session execution. |
user |
Optional. The email address of the user who owns the session. |
state_history[] |
Output only. Historical state information for the session. |
session_template |
Optional. The session template used by the session. Only resource names, including project ID and location, are valid. The template must be in the same project and Dataproc region as the session. |
Union field session_config. The session configuration. session_config can be only one of the following: |
|
jupyter_session |
Optional. Jupyter session config. |
spark_connect_session |
Optional. Spark connect session config. |
SessionStateHistory
Historical state information.
| Fields | |
|---|---|
state |
Output only. The state of the session at this point in the session history. |
state_message |
Output only. Details about the state at this point in the session history. |
state_start_time |
Output only. The time when the session entered the historical state. |
State
The session state.
| Enums | |
|---|---|
STATE_UNSPECIFIED |
The session state is unknown. |
CREATING |
The session is created prior to running. |
ACTIVE |
The session is running. |
TERMINATING |
The session is terminating. |
TERMINATED |
The session is terminated successfully. |
FAILED |
The session is no longer running due to an error. |
SessionOperationMetadata
Metadata describing the Session operation.
| Fields | |
|---|---|
session |
Name of the session for the operation. |
session_uuid |
Session UUID for the operation. |
create_time |
The time when the operation was created. |
done_time |
The time when the operation was finished. |
operation_type |
The operation type. |
description |
Short description of the operation. |
labels |
Labels associated with the operation. |
warnings[] |
Warnings encountered during operation execution. |
SessionOperationType
Operation type for Session resources.
| Enums | |
|---|---|
SESSION_OPERATION_TYPE_UNSPECIFIED |
Session operation type is unknown. |
CREATE |
Create Session operation type. |
TERMINATE |
Terminate Session operation type. |
DELETE |
Delete Session operation type. |
SessionTemplate
A representation of a session template.
| Fields | |
|---|---|
name |
Required. Identifier. The resource name of the session template. |
description |
Optional. Brief description of the template. |
create_time |
Output only. The time when the template was created. |
creator |
Output only. The email address of the user who created the template. |
labels |
Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035. No more than 32 labels can be associated with a session. |
runtime_config |
Optional. Runtime configuration for session execution. |
environment_config |
Optional. Environment configuration for session execution. |
update_time |
Output only. The time the template was last updated. |
uuid |
Output only. A session template UUID (Unique Universal Identifier). The service generates this value when it creates the session template. |
Union field session_config. The session configuration. session_config can be only one of the following: |
|
jupyter_session |
Optional. Jupyter session config. |
spark_connect_session |
Optional. Spark connect session config. |
ShieldedInstanceConfig
Shielded Instance Config for clusters using Compute Engine Shielded VMs.
| Fields | |
|---|---|
enable_secure_boot |
Optional. Defines whether instances have Secure Boot enabled. |
enable_vtpm |
Optional. Defines whether instances have the vTPM enabled. |
enable_integrity_monitoring |
Optional. Defines whether instances have integrity monitoring enabled. |
SoftwareConfig
Specifies the selection and config of software inside the cluster.
| Fields | |
|---|---|
image_version |
Optional. The version of software inside the cluster. It must be one of the supported Image Versions, such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version. If unspecified, it defaults to the latest Debian version. |
properties |
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties. |
optional_components[] |
Optional. The set of components to activate on the cluster. |
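To illustrate the prefix:property key format, here is a hedged sketch of a cluster create call that sets SoftwareConfig values through the google-cloud-dataproc Python client; the property values, image version, and resource names are illustrative assumptions, not recommendations:

```python
from google.cloud import dataproc_v1

project, region = "my-project", "us-central1"  # placeholders

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "cluster_name": "demo-cluster",  # placeholder
    "config": {
        "software_config": {
            "image_version": "2.2",  # major.minor; a subminor such as "2.2.29" also works
            # Property keys use the prefix:property form described above.
            "properties": {
                "core:hadoop.tmp.dir": "/tmp/hadoop",
                "spark:spark.executor.memory": "4g",
            },
            "optional_components": ["JUPYTER"],
        }
    },
}

operation = client.create_cluster(
    request={"project_id": project, "region": region, "cluster": cluster}
)
print(operation.result().cluster_uuid)  # blocks until the cluster is created
```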
SparkBatch
A configuration for running an Apache Spark batch workload.
| Fields | |
|---|---|
args[] |
Optional. The arguments to pass to the driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission. |
jar_file_uris[] |
Optional. HCFS URIs of jar files to add to the classpath of the Spark driver and tasks. |
file_uris[] |
Optional. HCFS URIs of files to be placed in the working directory of each executor. |
archive_uris[] |
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
Union field driver. The specification of the main method to call to drive the Spark workload. Specify either the jar file that contains the main class or the main class name. To pass both a main jar and a main class in that jar, add the jar to jar_file_uris, and then specify the main class name in main_class. driver can be only one of the following: |
|
main_jar_file_uri |
Optional. The HCFS URI of the jar file that contains the main class. |
main_class |
Optional. The name of the driver main class. The jar file that contains the class must be in the classpath or specified in jar_file_uris. |
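As a concrete illustration of the driver union, the sketch below submits a SparkBatch through the BatchControllerClient, using main_class plus jar_file_uris; the project, bucket, class name, and batch ID are placeholders:

```python
from google.cloud import dataproc_v1

project, region = "my-project", "us-central1"  # placeholders

client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

batch = {
    "spark_batch": {
        # Union field driver: use main_class with the jar in jar_file_uris,
        # or main_jar_file_uri alone -- never both.
        "main_class": "com.example.WordCount",
        "jar_file_uris": ["gs://my-bucket/jars/wordcount.jar"],
        "args": ["gs://my-bucket/input/"],  # no --conf here; use runtime properties
    }
}

operation = client.create_batch(
    request={
        "parent": f"projects/{project}/locations/{region}",
        "batch": batch,
        "batch_id": "wordcount-001",  # placeholder
    }
)
print(operation.result().state)  # blocks until the batch finishes
```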
SparkConnectConfig
This type has no fields.
Spark connect configuration for an interactive session.
SparkHistoryServerConfig
Spark History Server configuration for the workload.
| Fields | |
|---|---|
dataproc_cluster |
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name] |
SparkJob
A Dataproc job for running Apache Spark applications on YARN.
| Fields | |
|---|---|
args[] |
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision can occur that causes an incorrect job submission. |
jar_file_uris[] |
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks. |
file_uris[] |
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks. |
archive_uris[] |
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
properties |
Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
logging_config |
Optional. The runtime log config for job execution. |
Union field driver. Required. The specification of the main method to call to drive the job. Specify either the jar file that contains the main class or the main class name. To pass both a main jar and a main class in that jar, add the jar to jarFileUris, and then specify the main class name in mainClass. driver can be only one of the following: |
|
main_jar_file_uri |
The HCFS URI of the jar file that contains the main class. |
main_class |
The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris. |
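Because driver is a protobuf oneof, setting one member clears the other. A small sketch using the google-cloud-dataproc message classes (the bucket and class names are placeholders):

```python
from google.cloud import dataproc_v1

job = dataproc_v1.SparkJob(
    jar_file_uris=["gs://my-bucket/jars/app.jar"],  # placeholder bucket
    args=["--input", "gs://my-bucket/data/"],
    properties={"spark.executor.cores": "2"},
)

# driver is a oneof: setting main_class clears main_jar_file_uri, and vice versa.
job.main_class = "com.example.App"
assert job.main_jar_file_uri == ""

job.main_jar_file_uri = "gs://my-bucket/jars/app.jar"
assert job.main_class == ""
```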
SparkRBatch
A configuration for running an Apache SparkR batch workload.
| Fields | |
|---|---|
main_r_file_uri |
Required. The HCFS URI of the main R file to use as the driver. Must be a .R or .r file. |
args[] |
Optional. The arguments to pass to the Spark driver. Do not include arguments that can be set as batch properties, such as --conf, since a collision can occur that causes an incorrect batch submission. |
file_uris[] |
Optional. HCFS URIs of files to be placed in the working directory of each executor. |
archive_uris[] |
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
SparkRJob
A Dataproc job for running Apache SparkR applications on YARN.
| Fields | |
|---|---|
main_r_file_uri |
Required. The HCFS URI of the main R file to use as the driver. Must be a .R file. |
args[] |
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision can occur that causes an incorrect job submission. |
file_uris[] |
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks. |
archive_uris[] |
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
properties |
Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
logging_config |
Optional. The runtime log config for job execution. |
SparkSqlBatch
A configuration for running Apache Spark SQL queries as a batch workload.
| Fields | |
|---|---|
query_file_uri |
Required. The HCFS URI of the script that contains Spark SQL queries to execute. |
query_variables |
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";). |
jar_file_uris[] |
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH. |
SparkSqlJob
A Dataproc job for running Apache Spark SQL queries.
| Fields | |
|---|---|
script_variables |
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";). |
properties |
Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten. |
jar_file_uris[] |
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH. |
logging_config |
Optional. The runtime log config for job execution. |
Union field queries. Required. The sequence of Spark SQL queries to execute, specified as either an HCFS file URI or as a list of queries. queries can be only one of the following: |
|
query_file_uri |
The HCFS URI of the script that contains SQL queries. |
query_list |
A list of queries. |
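A short sketch of building a SparkSqlJob with the query_list member of the queries union and a script_variables substitution; the query, variable, and jar URI are placeholders:

```python
from google.cloud import dataproc_v1

# queries is a oneof: provide either query_file_uri or query_list, not both.
sql_job = dataproc_v1.SparkSqlJob(
    query_list={"queries": ["SELECT * FROM events WHERE day = '${day}'"]},
    # Substituted like the Spark SQL command: SET day="2024-01-01";
    script_variables={"day": "2024-01-01"},
    jar_file_uris=["gs://my-bucket/jars/udfs.jar"],  # placeholder
)
```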
StartClusterRequest
A request to start a cluster.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project the cluster belongs to. |
region |
Required. The region in which to handle the request. |
cluster_name |
Required. The cluster name. Authorization requires the following IAM permission on the specified resource: dataproc.clusters.start |
cluster_uuid |
Optional. Specifying the cluster_uuid means the RPC will fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist. |
request_id |
Optional. A unique ID used to identify the request. If the server receives two StartClusterRequests with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. Recommendation: Set this value to a UUID. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
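Putting the request fields together, a hedged sketch of starting a cluster with a deduplicating request_id via the Python client; the project, region, and cluster name are placeholders, and start_cluster is assumed available in your client library version:

```python
import uuid

from google.cloud import dataproc_v1

project, region = "my-project", "us-central1"  # placeholders

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

operation = client.start_cluster(
    request={
        "project_id": project,
        "region": region,
        "cluster_name": "demo-cluster",  # placeholder
        # Retries with the same request_id are deduplicated server-side.
        "request_id": str(uuid.uuid4()),
    }
)
cluster = operation.result()  # blocks until the cluster is RUNNING
print(cluster.status.state)
```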
StartupConfig
Configuration to handle the startup of instances during cluster create and update process.
| Fields | |
|---|---|
required_registration_fraction |
Optional. The config setting that makes cluster creation or update succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available, counting instance creation, agent registration, and service registration (if enabled). |
StopClusterRequest
A request to stop a cluster.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project the cluster belongs to. |
region |
Required. The region in which to handle the request. |
cluster_name |
Required. The cluster name. Authorization requires the following IAM permission on the specified resource: dataproc.clusters.stop |
cluster_uuid |
Optional. Specifying the cluster_uuid means the RPC will fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist. |
request_id |
Optional. A unique ID used to identify the request. If the server receives two StopClusterRequests with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. Recommendation: Set this value to a UUID. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
SubmitJobRequest
A request to submit a job.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the job belongs to. Authorization requires one or more of the following IAM permissions on the specified resource: dataproc.jobs.create |
region |
Required. The Dataproc region in which to handle the request. |
job |
Required. The job resource. |
request_id |
Optional. A unique ID used to identify the request. If the server receives two SubmitJobRequests with the same ID, the second request is ignored, and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
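A minimal sketch of a job submission with an idempotent request_id, assuming the google-cloud-dataproc Python client; the cluster, jar, and class names are placeholders:

```python
import uuid

from google.cloud import dataproc_v1

project, region = "my-project", "us-central1"  # placeholders

client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": "demo-cluster"},  # placeholder
    "spark_job": {
        "main_class": "com.example.App",
        "jar_file_uris": ["gs://my-bucket/jars/app.jar"],
    },
}

submitted = client.submit_job(
    request={
        "project_id": project,
        "region": region,
        "job": job,
        "request_id": str(uuid.uuid4()),  # safe to retry with the same ID
    }
)
print(submitted.reference.job_id)
```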
TemplateParameter
A configurable parameter that replaces one or more fields in the template. Parameterizable fields:
- Labels
- File uris
- Job properties
- Job arguments
- Script variables
- Main class (in HadoopJob and SparkJob)
- Zone (in ClusterSelector)
| Fields | |
|---|---|
name |
Required. Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters. |
fields[] |
Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference map values by key (for example, labels['key']), jobs in the jobs list by step-id (for example, jobs['step-id'].sparkJob.mainJarFileUri), and items in repeated fields by zero-based index (for example, jobs['step-id'].sparkJob.args[0]). It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels and jobs['step-id'].sparkJob.args. |
description |
Optional. Brief description of the parameter. Must not exceed 1024 characters. |
validation |
Optional. Validation rules to be applied to this parameter's value. |
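For example, a TemplateParameter that substitutes the first argument of a job with step ID 'ingest' and regex-validates the value might look like the following sketch; the step ID, parameter name, and pattern are hypothetical:

```python
# A parameter that substitutes into one job argument.
# The field path follows the syntax described above; 'ingest' refers to
# an OrderedJob step ID assumed to exist in the template's jobs list.
parameter = {
    "name": "INPUT_PATH",
    "description": "GCS path scanned by the ingest job",
    "fields": ["jobs['ingest'].sparkJob.args[0]"],
    "validation": {"regex": {"regexes": [r"gs://.+"]}},
}
```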
TerminateSessionRequest
A request to terminate an interactive session.
| Fields | |
|---|---|
name |
Required. The name of the session resource to terminate. Authorization requires the following IAM permission on the specified resource: dataproc.sessions.terminate |
request_id |
Optional. A unique ID used to identify the request. If the service receives two TerminateSessionRequests with the same ID, the second request is ignored. Recommendation: Set this value to a UUID. The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
UpdateAutoscalingPolicyRequest
A request to update an autoscaling policy.
| Fields | |
|---|---|
policy |
Required. The updated autoscaling policy. Authorization requires the following IAM permission on the specified resource: dataproc.autoscalingPolicies.update |
UpdateClusterRequest
A request to update a cluster.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project the cluster belongs to. |
region |
Required. The region in which to handle the request. |
cluster_name |
Required. The cluster name. |
cluster |
Required. The changes to the cluster. Authorization requires the following IAM permission on the specified resource: dataproc.clusters.update |
graceful_decommission_timeout |
Optional. Timeout for graceful YARN decommissioning. Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress. Timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). Default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day. (see JSON representation of Duration). Supported in image versions 1.2 and higher. |
update_mask |
Required. Specifies the path, relative to Cluster, of the field to update. For example, to change the number of workers in a cluster to 5, the update_mask parameter would be specified as config.worker_config.num_instances, and the PATCH request body would specify the new value. Similarly, to change the number of preemptible workers in a cluster to 5, the update_mask parameter would be specified as config.secondary_worker_config.num_instances, and the PATCH request body would be set to the new value. Note: Currently, only the following fields can be updated: labels, config.worker_config.num_instances, config.secondary_worker_config.num_instances, and config.autoscaling_config.policy_uri. |
request_id |
Optional. A unique ID used to identify the request. If the server receives two UpdateClusterRequests with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters. |
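A hedged sketch of an UpdateClusterRequest that resizes the primary worker group, pairing update_mask with the matching field in the cluster body; all resource names are placeholders:

```python
from google.cloud import dataproc_v1

project, region = "my-project", "us-central1"  # placeholders

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# Resize the primary worker group to 5: update_mask names the one field
# being changed, and the cluster body carries its new value.
operation = client.update_cluster(
    request={
        "project_id": project,
        "region": region,
        "cluster_name": "demo-cluster",  # placeholder
        "cluster": {"config": {"worker_config": {"num_instances": 5}}},
        "update_mask": {"paths": ["config.worker_config.num_instances"]},
        "graceful_decommission_timeout": {"seconds": 600},  # wait up to 10 minutes
    }
)
operation.result()  # blocks until the resize completes
```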
UpdateJobRequest
A request to update a job.
| Fields | |
|---|---|
project_id |
Required. The ID of the Google Cloud Platform project that the job belongs to. Authorization requires the following IAM permission on the specified resource: dataproc.jobs.update |
region |
Required. The Dataproc region in which to handle the request. |
job_id |
Required. The job ID. |
job |
Required. The changes to the job. |
update_mask |
Required. Specifies the path, relative to Job, of the field to update. For example, to update the labels of a Job, the update_mask parameter would be specified as labels, and the PATCH request body would specify the new value. Note: Currently, labels is the only field that can be updated. |
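Since labels is currently the only updatable job field, an UpdateJobRequest sketch is short; the IDs below are placeholders:

```python
from google.cloud import dataproc_v1

project, region = "my-project", "us-central1"  # placeholders

client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# labels is currently the only field update_mask may name.
updated = client.update_job(
    request={
        "project_id": project,
        "region": region,
        "job_id": "job-1234",  # placeholder
        "job": {"labels": {"owner": "data-eng"}},
        "update_mask": {"paths": ["labels"]},
    }
)
print(updated.labels)
```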
UpdateSessionTemplateRequest
A request to update a session template.
| Fields | |
|---|---|
session_template |
Required. The updated session template. Authorization requires the following IAM permission on the specified resource: dataproc.sessionTemplates.update |
UpdateWorkflowTemplateRequest
A request to update a workflow template.
| Fields | |
|---|---|
template |
Required. The updated workflow template. The template.version field must match the current version. Authorization requires the following IAM permission on the specified resource: dataproc.workflowTemplates.update |
UsageMetrics
Usage metrics represent approximate total resources consumed by a workload.
| Fields | |
|---|---|
milli_dcu_seconds |
Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing). |
shuffle_storage_gb_seconds |
Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing). |
milli_accelerator_seconds |
Optional. [DEPRECATED] Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing). |
accelerator_type |
Optional. [DEPRECATED] Accelerator type being used, if any. |
update_time |
Optional. The timestamp of the usage metrics. |
UsageSnapshot
The usage snapshot represents the resources consumed by a workload at a specified time.
| Fields | |
|---|---|
milli_dcu |
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing). |
shuffle_storage_gb |
Optional. Shuffle Storage in gigabytes (GB). (see Dataproc Serverless pricing) |
milli_dcu_premium |
Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing). |
shuffle_storage_gb_premium |
Optional. Shuffle Storage in gigabytes (GB) charged at premium tier. (see Dataproc Serverless pricing) |
milli_accelerator |
Optional. Milli (one-thousandth) accelerator. (see Dataproc Serverless pricing) |
accelerator_type |
Optional. Accelerator type being used, if any. |
snapshot_time |
Optional. The timestamp of the usage snapshot. |
ValueValidation
Validation based on a list of allowed values.
| Fields | |
|---|---|
values[] |
Required. List of allowed values for the parameter. |
VirtualClusterConfig
The cluster config for a cluster that does not directly control the underlying compute resources, such as a GKE cluster.
| Fields | |
|---|---|
staging_bucket |
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, the service will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket. |
auxiliary_services_config |
Optional. Configuration of auxiliary services used by this cluster. |
Union field infrastructure_config. The configuration of the infrastructure that the virtual cluster runs on. infrastructure_config can be only one of the following: |
|
kubernetes_cluster_config |
Required. The configuration for running the cluster on Kubernetes. |
WorkflowGraph
The workflow graph.
| Fields | |
|---|---|
nodes[] |
Output only. The workflow nodes. |
WorkflowMetadata
Metadata describing a workflow execution.
| Fields | |
|---|---|
template |
Output only. The resource name of the workflow template as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}. For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id} |
version |
Output only. The version of the template at the time of workflow instantiation. |
create_cluster |
Output only. The create cluster operation metadata. |
graph |
Output only. The workflow graph. |
delete_cluster |
Output only. The delete cluster operation metadata. |
state |
Output only. The workflow state. |
cluster_name |
Output only. The name of the target cluster. |
parameters |
Map from parameter names to values that were used for those parameters. |
start_time |
Output only. Workflow start time. |
end_time |
Output only. Workflow end time. |
cluster_uuid |
Output only. The UUID of the target cluster. |
dag_timeout |
Output only. The timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration). |
dag_start_time |
Output only. DAG start time, only set for workflows with dag_timeout when the DAG begins. |
dag_end_time |
Output only. DAG end time, only set for workflows with dag_timeout when the DAG ends. |
State
The operation state.
| Enums | |
|---|---|
UNKNOWN |
Unused. |
PENDING |
The operation has been created. |
RUNNING |
The operation is running. |
DONE |
The operation is done; either cancelled or completed. |
WorkflowNode
The workflow node.
| Fields | |
|---|---|
step_id |
Output only. The name of the node. |
prerequisite_step_ids[] |
Output only. Node's prerequisite nodes. |
job_id |
Output only. The job id; populated after the node enters RUNNING state. |
state |
Output only. The node state. |
error |
Output only. The error detail. |
NodeState
The workflow node state.
| Enums | |
|---|---|
NODE_STATE_UNSPECIFIED |
State is unspecified. |
BLOCKED |
The node is awaiting its prerequisite nodes to finish. |
RUNNABLE |
The node is runnable but not running. |
RUNNING |
The node is running. |
COMPLETED |
The node completed successfully. |
FAILED |
The node failed. A node can be marked FAILED because its ancestor or peer failed. |
WorkflowTemplate
A Dataproc workflow template resource.
| Fields | |
|---|---|
id |
Required. The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters. |
name |
Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
|
version |
Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update flow reads the template (which returns the current server version), modifies it, and sends it back in the update request. |
create_time |
Output only. The time the template was created. |
update_time |
Output only. The time template was last updated. |
labels |
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a template. |
placement |
Required. WorkflowTemplate scheduling information. |
jobs[] |
Required. The Directed Acyclic Graph of Jobs to submit. |
parameters[] |
Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated. |
dag_timeout |
Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted. |
encryption_config |
Optional. Encryption settings for encrypting workflow template job arguments. |
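Tying the pieces together, a hedged end-to-end sketch: create a template with a managed cluster, one Spark job, and a dag_timeout, then instantiate it. All IDs, URIs, and the image version are placeholder assumptions:

```python
from google.cloud import dataproc_v1

project, region = "my-project", "us-central1"  # placeholders
parent = f"projects/{project}/regions/{region}"

client = dataproc_v1.WorkflowTemplateServiceClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

template = {
    "id": "nightly-etl",  # placeholder template id
    # placement union: managed_cluster here; cluster_selector is the alternative.
    "placement": {
        "managed_cluster": {
            "cluster_name": "etl-cluster",
            "config": {"software_config": {"image_version": "2.2"}},
        }
    },
    "jobs": [
        {
            "step_id": "ingest",
            "spark_job": {
                "main_class": "com.example.Ingest",
                "jar_file_uris": ["gs://my-bucket/jars/etl.jar"],
            },
        }
    ],
    "dag_timeout": {"seconds": 3600},  # must fall in the 600s..86400s window
}

created = client.create_workflow_template(
    request={"parent": parent, "template": template}
)
operation = client.instantiate_workflow_template(request={"name": created.name})
operation.result()  # blocks until the workflow DAG finishes
```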
EncryptionConfig
Encryption settings for encrypting workflow template job arguments.
| Fields | |
|---|---|
kms_key |
Optional. The Cloud KMS key name to use for encrypting workflow template job arguments. When this key is provided, the following workflow template job arguments, if present, are CMEK encrypted:
|
WorkflowTemplatePlacement
Specifies workflow execution target.
Either managed_cluster or cluster_selector is required.
| Fields | |
|---|---|
Union field placement. Required. Specifies where workflow executes; either on a managed cluster or an existing cluster chosen by labels. placement can be only one of the following: |
|
managed_cluster |
A cluster that is managed by the workflow. |
cluster_selector |
Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted. |
YarnApplication
A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto.
Beta Feature: This report is available for testing purposes only. It may be changed before final release.
| Fields | |
|---|---|
name |
Required. The application name. |
state |
Required. The application state. |
progress |
Required. The numerical progress of the application, from 1 to 100. |
tracking_url |
Optional. The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access. |
vcore_seconds |
Optional. The cumulative CPU time consumed by the application for a job, measured in vcore-seconds. |
memory_mb_seconds |
Optional. The cumulative memory usage of the application for a job, measured in mb-seconds. |
State
The application state, corresponding to YarnProtos.YarnApplicationStateProto.
| Enums | |
|---|---|
STATE_UNSPECIFIED |
Status is unspecified. |
NEW |
Status is NEW. |
NEW_SAVING |
Status is NEW_SAVING. |
SUBMITTED |
Status is SUBMITTED. |
ACCEPTED |
Status is ACCEPTED. |
RUNNING |
Status is RUNNING. |
FINISHED |
Status is FINISHED. |
FAILED |
Status is FAILED. |
KILLED |
Status is KILLED. |