Single node clusters

Single node clusters are Managed Service for Apache Spark clusters with only one node. This single node acts as both the master and the worker for your Managed Service for Apache Spark cluster. Although single node clusters have only one node, most Managed Service for Apache Spark concepts and features still apply, with the exceptions listed below.

There are a number of situations where single node Managed Service for Apache Spark clusters can be useful, including:

  • Trying out new versions of Spark and Hadoop or other open source components
  • Building proof-of-concept (PoC) demonstrations
  • Lightweight data science
  • Small-scale non-critical data processing
  • Education related to the Spark and Hadoop ecosystem

Single node cluster semantics

The following semantics apply to single node Managed Service for Apache Spark clusters:

  • Single node clusters are configured the same as multi node Managed Service for Apache Spark clusters, and include services such as HDFS and YARN.
  • For initialization actions, the single node reports its role as a master node.
  • Single node clusters show 0 workers since the single node acts as both master and worker.
  • Single node clusters are given hostnames that follow the pattern clustername-m. You can use this hostname to SSH into or connect to a web UI on the node.
  • Single node clusters cannot be upgraded to multi node clusters. Once created, single node clusters are restricted to one node. Similarly, multi node clusters cannot be scaled down to single node clusters.
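For example, the `clustername-m` hostname convention above means you can reach the single node the same way you would reach the master of a multi node cluster. A minimal sketch, where the cluster name and zone are placeholder values (not from this page):

```shell
# The single node's hostname is the cluster name with an "-m" suffix.
# "my-cluster" and the zone below are example values.
CLUSTER=my-cluster
MASTER_HOST="${CLUSTER}-m"

# SSH into the node (commented out; requires a real cluster and credentials):
# gcloud compute ssh "$MASTER_HOST" --zone=us-central1-a

echo "$MASTER_HOST"
```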

Limitations

  • Single node clusters are not recommended for large-scale parallel data processing. If your workload exceeds the resources of a single node, use a multi node Managed Service for Apache Spark cluster instead.

  • Single node clusters do not support high availability, since there is only one node in the cluster.

  • Single node clusters cannot use preemptible VMs.

Create a single node cluster

gcloud command

You can create a single node Managed Service for Apache Spark cluster using the gcloud command-line tool. To create a single node cluster, pass the --single-node flag to the gcloud dataproc clusters create command.

gcloud dataproc clusters create cluster-name \
    --region=region \
    --single-node \
    ... other args

REST API

You can create a single node cluster through the Managed Service for Apache Spark REST API using a clusters.create request. When making this request, you must:

  1. Set the property "dataproc:dataproc.allow.zero.workers":"true" in the SoftwareConfig of the cluster request.
  2. Omit the workerConfig and secondaryWorkerConfig fields (see ClusterConfig).
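The two steps above can be sketched as a request body sent with curl; this is an illustrative configuration fragment, and project-id, region, and cluster-name are placeholders you would replace with your own values:

```shell
# Sketch of a clusters.create request for a single node cluster.
# Note: workerConfig and secondaryWorkerConfig are deliberately omitted.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
    "clusterName": "cluster-name",
    "config": {
      "softwareConfig": {
        "properties": {
          "dataproc:dataproc.allow.zero.workers": "true"
        }
      }
    }
  }' \
  "https://dataproc.googleapis.com/v1/projects/project-id/regions/region/clusters"
```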

Console

You can create a single node cluster by selecting "Single Node (1 master, 0 workers)" in the Cluster type section of the Set up cluster panel on the Managed Service for Apache Spark Create a cluster page.