Create an Apache Iceberg table with the Lakehouse REST Catalog

This document shows you how to create an Apache Iceberg table with metadata in the Lakehouse REST Catalog using the Managed Service for Apache Spark Jobs service, the Spark SQL CLI, or the Zeppelin web interface running on a Managed Service for Apache Spark cluster.

Before you begin

If you haven't already done so, create a Google Cloud project, a Cloud Storage bucket, and a Managed Service for Apache Spark cluster.

  1. Set up your project

    1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
    2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

      Roles required to select or create a project

      • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
      • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.

      Go to project selector

    3. Verify that billing is enabled for your Google Cloud project.

    4. Enable the Dataproc, BigLake, BigQuery, and Cloud Storage APIs.

      Roles required to enable APIs

      To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

      Enable the APIs

    5. Install the Google Cloud CLI.

    6. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    7. To initialize the gcloud CLI, run the following command:

      gcloud init

  2. Create a Cloud Storage bucket in your project.

    1. In the Google Cloud console, go to the Cloud Storage Buckets page.

      Go to Buckets

    2. Click Create.
    3. On the Create a bucket page, enter your bucket information. To go to the next step, click Continue.
      1. In the Get started section, do the following:
        • Enter a globally unique name that meets the bucket naming requirements.
        • To add a bucket label, expand the Labels section, click Add label, and specify a key and a value for your label.
      2. In the Choose where to store your data section, do the following:
        1. Select a Location type.
        2. Choose a location where your bucket's data is permanently stored from the Location type drop-down menu.
        3. To set up cross-bucket replication, select Add cross-bucket replication via Storage Transfer Service and follow these steps:

          Set up cross-bucket replication

          1. In the Bucket menu, select a bucket.
          2. In the Replication settings section, click Configure to configure settings for the replication job.

            The Configure cross-bucket replication pane appears.

            • To filter objects to replicate by object name prefix, enter a prefix that you want to include or exclude objects from, then click Add a prefix.
            • To set a storage class for the replicated objects, select a storage class from the Storage class menu. If you skip this step, the replicated objects will use the destination bucket's storage class by default.
            • Click Done.
      3. In the Choose how to store your data section, do the following:
        1. Select a default storage class for the bucket or Autoclass for automatic storage class management of your bucket's data.
        2. To enable hierarchical namespace, in the Optimize storage for data-intensive workloads section, select Enable hierarchical namespace on this bucket.
      4. In the Choose how to control access to objects section, select whether or not your bucket enforces public access prevention, and select an access control method for your bucket's objects.
      5. In the Choose how to protect object data section, do the following:
        • Select any of the options under Data protection that you want to set for your bucket.
          • To enable soft delete, click the Soft delete policy (For data recovery) checkbox, and specify the number of days you want to retain objects after deletion.
          • To set Object Versioning, click the Object versioning (For version control) checkbox, and specify the maximum number of versions per object and the number of days after which the noncurrent versions expire.
          • To enable the retention policy on objects and buckets, click the Retention (For compliance) checkbox, and then do the following:
            • To enable Object Retention Lock, click the Enable object retention checkbox.
            • To enable Bucket Lock, click the Set bucket retention policy checkbox, and choose a unit of time and a length of time for your retention period.
        • To choose how your object data will be encrypted, expand the Data encryption section, and select a Data encryption method.
    4. Click Create.

  3. Create a Managed Service for Apache Spark cluster with the Iceberg optional component enabled. To save resources and costs, you can create a single-node Managed Service for Apache Spark cluster to run the examples presented in this document.

    • The subnet in the region where the cluster is created must have Private Google Access (PGA) enabled.

    • If you want to run the Zeppelin web interface example in this guide, you must use or create a Managed Service for Apache Spark cluster with the Zeppelin optional component enabled.

  4. Grant roles to a custom service account (if needed): By default, Managed Service for Apache Spark cluster VMs use the Compute Engine default service account to interact with Managed Service for Apache Spark. If you want to specify a custom service account when you create your cluster, it must be granted the Managed Service for Apache Spark Worker (roles/dataproc.worker), BigLake Editor (roles/biglake.editor), and BigQuery Data Editor (roles/bigquery.dataEditor) roles, or a custom role with the needed permissions.
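
The three role grants above can be applied with `gcloud projects add-iam-policy-binding`. The following sketch builds the commands; the project ID and service account email are placeholders you replace with your own values.

```python
# Build the gcloud commands that grant a custom service account the
# three roles listed above. Run the printed commands in your shell.
PROJECT_ID = "my-project"  # placeholder: your project ID
SA = "spark-sa@my-project.iam.gserviceaccount.com"  # placeholder: your service account

ROLES = [
    "roles/dataproc.worker",
    "roles/biglake.editor",
    "roles/bigquery.dataEditor",
]

commands = [
    f"gcloud projects add-iam-policy-binding {PROJECT_ID} "
    f"--member=serviceAccount:{SA} --role={role}"
    for role in ROLES
]

for cmd in commands:
    print(cmd)
```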

Create an Iceberg table

This section shows you how to create an Iceberg table with metadata in the Lakehouse REST Catalog by running Spark SQL code through the Managed Service for Apache Spark Jobs service, the Spark SQL CLI, or the Zeppelin web interface on a Managed Service for Apache Spark cluster.

Managed Service for Apache Spark job

You can submit a job to the Managed Service for Apache Spark service by submitting it to a Managed Service for Apache Spark cluster using the Google Cloud console or the Google Cloud CLI, or by making an HTTP REST request or a programmatic gRPC call (through the Managed Service for Apache Spark Cloud Client Libraries) to the Managed Service for Apache Spark Jobs API.

The examples in this section show you how to submit a Spark SQL job to the Managed Service for Apache Spark service to create an Iceberg table with metadata in the Lakehouse REST Catalog using the gcloud CLI, the Google Cloud console, or the Managed Service for Apache Spark REST API.

Prepare job files

Perform the following steps to create a Spark SQL job file. The file contains Spark SQL commands to create and update an Iceberg table.

  1. In a local terminal window or in Cloud Shell, use a text editor, such as vi or nano, to copy the following commands into an iceberg-table.sql file, then save the file in the current directory.

    USE CATALOG_NAME;
    CREATE NAMESPACE IF NOT EXISTS example_namespace;
    USE example_namespace;
    DROP TABLE IF EXISTS example_table;
    CREATE TABLE example_table (id int, data string) USING ICEBERG LOCATION 'gs://BUCKET/WAREHOUSE_FOLDER';
    INSERT INTO example_table VALUES (1, 'first row');
    ALTER TABLE example_table ADD COLUMNS (newDoubleCol double);
    DESCRIBE TABLE example_table;
    

    Replace the following:

    • CATALOG_NAME: Iceberg catalog name.
    • BUCKET and WAREHOUSE_FOLDER: Cloud Storage bucket and folder used for the Iceberg warehouse.
  2. Use the gcloud CLI to copy the local iceberg-table.sql file to your bucket in Cloud Storage.

    gcloud storage cp iceberg-table.sql gs://BUCKET/
    
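If you prefer to script the placeholder substitution in step 1, a small helper works; this is a convenience sketch, and the catalog, bucket, and folder names below are example values, not defaults.

```python
# Fill in the iceberg-table.sql template with your own values,
# then write the result to a local file ready for upload.
TEMPLATE = """\
USE CATALOG_NAME;
CREATE NAMESPACE IF NOT EXISTS example_namespace;
USE example_namespace;
DROP TABLE IF EXISTS example_table;
CREATE TABLE example_table (id int, data string) USING ICEBERG LOCATION 'gs://BUCKET/WAREHOUSE_FOLDER';
INSERT INTO example_table VALUES (1, 'first row');
ALTER TABLE example_table ADD COLUMNS (newDoubleCol double);
DESCRIBE TABLE example_table;
"""

values = {  # example values -- replace with your own
    "CATALOG_NAME": "my_catalog",
    "BUCKET": "my-bucket",
    "WAREHOUSE_FOLDER": "warehouse",
}

sql = TEMPLATE
for placeholder, value in values.items():
    sql = sql.replace(placeholder, value)

with open("iceberg-table.sql", "w") as f:
    f.write(sql)
```

After writing the file, copy it to your bucket as shown in step 2.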

Submit the Spark SQL job

Select a tab to follow the instructions to submit the Spark SQL job to the Managed Service for Apache Spark service using the gcloud CLI, Google Cloud console, or Managed Service for Apache Spark REST API.

gcloud

  1. Run the following gcloud dataproc jobs submit spark-sql command in a local terminal window or in Cloud Shell to submit the Spark SQL job that creates the Iceberg table.

    gcloud dataproc jobs submit spark-sql \
        --project=PROJECT_ID \
        --cluster=CLUSTER_NAME \
        --region=REGION \
        --properties="spark.sql.catalog.CATALOG_NAME=org.apache.iceberg.spark.SparkCatalog,spark.sql.catalog.CATALOG_NAME.type=rest,spark.sql.catalog.CATALOG_NAME.uri=https://biglake.googleapis.com/iceberg/v1/restcatalog,spark.sql.catalog.CATALOG_NAME.warehouse=gs://BUCKET/WAREHOUSE_FOLDER,spark.sql.catalog.CATALOG_NAME.io-impl=org.apache.iceberg.gcp.gcs.GCSFileIO,spark.sql.catalog.CATALOG_NAME.header.x-goog-user-project=PROJECT_ID,spark.sql.catalog.CATALOG_NAME.rest.auth.type=org.apache.iceberg.gcp.auth.GoogleAuthManager,spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions" \
        -f="gs://BUCKET/iceberg-table.sql"
    

    Notes:

    • PROJECT_ID: Your Google Cloud project ID. Project IDs are listed in the Project info section on the Google Cloud console Dashboard.
    • CLUSTER_NAME: The name of your Managed Service for Apache Spark cluster.
    • REGION: The Compute Engine region where your cluster is located.
    • CATALOG_NAME: Iceberg catalog name.
    • BUCKET and WAREHOUSE_FOLDER: Cloud Storage bucket and folder used for the Iceberg warehouse.
    • --properties: Catalog properties.
    • -f: The iceberg-table.sql job file you copied to your bucket in Cloud Storage.
  2. View the table description in the terminal output when the job finishes.

    Time taken: 2.194 seconds
    id                      int
    data                    string
    newDoubleCol            double
    Time taken: 1.479 seconds, Fetched 3 row(s)
    Job JOB_ID finished successfully.
    
  3. To view table metadata in BigQuery:

    1. In the Google Cloud console, go to the BigQuery page.

      Go to BigQuery Studio

    2. View Iceberg table metadata. The table identifier uses the project.catalog.namespace.table syntax.
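
The single-line `--properties` value in the gcloud command above is easy to mistype. The following sketch assembles it from a dict; the catalog, project, and bucket values are placeholders you replace with your own.

```python
# Assemble the --properties flag value for
# gcloud dataproc jobs submit spark-sql.
catalog = "my_catalog"                   # placeholder
project_id = "my-project"                # placeholder
warehouse = "gs://my-bucket/warehouse"   # placeholder

props = {
    f"spark.sql.catalog.{catalog}": "org.apache.iceberg.spark.SparkCatalog",
    f"spark.sql.catalog.{catalog}.type": "rest",
    f"spark.sql.catalog.{catalog}.uri": "https://biglake.googleapis.com/iceberg/v1/restcatalog",
    f"spark.sql.catalog.{catalog}.warehouse": warehouse,
    f"spark.sql.catalog.{catalog}.io-impl": "org.apache.iceberg.gcp.gcs.GCSFileIO",
    f"spark.sql.catalog.{catalog}.header.x-goog-user-project": project_id,
    f"spark.sql.catalog.{catalog}.rest.auth.type": "org.apache.iceberg.gcp.auth.GoogleAuthManager",
    "spark.sql.extensions": "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
}

# Join key=value pairs with commas, matching the flag syntax.
properties_flag = ",".join(f"{k}={v}" for k, v in props.items())
print(f'--properties="{properties_flag}"')
```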

Console

Perform the following steps in the Google Cloud console to submit the Spark SQL job that creates an Iceberg table with metadata in the Lakehouse REST Catalog.

  1. In the Google Cloud console, go to the Managed Service for Apache Spark Submit a job page.

    Go to Submit a job

    Complete the following fields:

    • Job ID: Accept the suggested ID or insert your own ID.
    • Region: Select the region where your cluster is located.
    • Cluster: Select your cluster.
    • Job type: Select SparkSql.
    • Query source type: Select Query file.
    • Query file: Insert gs://BUCKET/iceberg-table.sql
    • Properties: Click Add Property multiple times to create a list of properties, then copy the following Key and Value pairs.
      1. Key: spark.sql.catalog.CATALOG_NAME
         Value: org.apache.iceberg.spark.SparkCatalog
      2. Key: spark.sql.catalog.CATALOG_NAME.type
         Value: rest
      3. Key: spark.sql.catalog.CATALOG_NAME.uri
         Value: https://biglake.googleapis.com/iceberg/v1/restcatalog
      4. Key: spark.sql.catalog.CATALOG_NAME.warehouse
         Value: gs://BUCKET/WAREHOUSE_FOLDER
      5. Key: spark.sql.catalog.CATALOG_NAME.io-impl
         Value: org.apache.iceberg.gcp.gcs.GCSFileIO
      6. Key: spark.sql.catalog.CATALOG_NAME.header.x-goog-user-project
         Value: PROJECT_ID
      7. Key: spark.sql.catalog.CATALOG_NAME.rest.auth.type
         Value: org.apache.iceberg.gcp.auth.GoogleAuthManager
      8. Key: spark.sql.extensions
         Value: org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions

    Notes:

    • CATALOG_NAME: Iceberg catalog name.
    • PROJECT_ID: Your Google Cloud project ID.
    • BUCKET and WAREHOUSE_FOLDER: Cloud Storage bucket and folder used for the Iceberg warehouse.
  2. Click Submit.

  3. To monitor job progress and view job output, go to the Managed Service for Apache Spark Jobs page in the Google Cloud console, then click the Job ID to open the Job details page.

  4. To view table metadata in BigQuery:

    1. In the Google Cloud console, go to the BigQuery page.

      Go to BigQuery Studio

    2. View Iceberg table metadata.

REST

You can use the Managed Service for Apache Spark jobs.submit API to submit the Spark SQL job that creates an Iceberg table with metadata in the Lakehouse REST Catalog.

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your Google Cloud project ID. Project IDs are listed in the Project info section on the Google Cloud console Dashboard.
  • CLUSTER_NAME: The name of your Managed Service for Apache Spark cluster.
  • REGION: The Compute Engine region where your cluster is located.
  • CATALOG_NAME: Iceberg catalog name.
  • BUCKET and WAREHOUSE_FOLDER: Cloud Storage bucket and folder used for the Iceberg warehouse.
  • LOCATION: A supported BigQuery location. The default location is "US".
  • BIGLAKE_ICEBERG_CATALOG_JAR: the Cloud Storage URI of the Iceberg custom catalog plugin to use. Depending on your Iceberg version number, select one of the following:
    • Iceberg 1.9.1: gs://spark-lib/bigquery/iceberg-bigquery-catalog-1.9.1-1.0.1.jar
    • Iceberg 1.6.1: gs://spark-lib/bigquery/iceberg-bigquery-catalog-1.6.1-1.0.2.jar
  • jarFileUris: The listed jars are necessary to create table metadata in BigQuery Metastore.
  • properties: Catalog properties.
  • queryFileUri: The iceberg-table.sql job file you copied to your bucket in Cloud Storage.

HTTP method and URL:

POST https://dataproc.googleapis.com/v1/projects/PROJECT_ID/regions/REGION/jobs:submit

Request JSON body:

{
  "projectId": "PROJECT_ID",
  "job": {
    "placement": {
      "clusterName": "CLUSTER_NAME"
    },
    "statusHistory": [],
    "reference": {
      "jobId": "",
      "projectId": "PROJECT_ID"
    },
    "sparkSqlJob": {
      "properties": {
        "spark.sql.catalog."CATALOG_NAME": "org.apache.iceberg.spark.SparkCatalog",
        "spark.sql.catalog."CATALOG_NAME".catalog-impl": "org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog",
        "spark.sql.catalog."CATALOG_NAME".gcp_project": "PROJECT_ID",
        "spark.sql.catalog."CATALOG_NAME".gcp_location": "LOCATION",
        "spark.sql.catalog."CATALOG_NAME".warehouse": "gs://BUCKET/WAREHOUSE_FOLDER"
      },
      "jarFileUris": [
        "gs://BUCKET/iceberg-spark-runtime-3.5_2.12-1.6.1.jar,BIGLAKE_ICEBERG_CATALOG_JAR"
      ],
      "scriptVariables": {},
      "queryFileUri": "gs://BUCKET/iceberg-table.sql"
    }
  }
}
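
If you script the REST call, you can build and validate the JSON body before POSTing it to the jobs:submit endpoint. This is a hedged sketch; all project, cluster, catalog, and bucket values below are illustrative placeholders.

```python
import json

project_id = "my-project"  # placeholder
cluster = "my-cluster"     # placeholder
catalog = "my_catalog"     # placeholder

# Mirror the request body shown above, with placeholders filled in.
body = {
    "projectId": project_id,
    "job": {
        "placement": {"clusterName": cluster},
        "reference": {"jobId": "", "projectId": project_id},
        "sparkSqlJob": {
            "properties": {
                f"spark.sql.catalog.{catalog}": "org.apache.iceberg.spark.SparkCatalog",
                f"spark.sql.catalog.{catalog}.catalog-impl": "org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog",
                f"spark.sql.catalog.{catalog}.gcp_project": project_id,
                f"spark.sql.catalog.{catalog}.gcp_location": "US",
                f"spark.sql.catalog.{catalog}.warehouse": "gs://my-bucket/warehouse",
            },
            "jarFileUris": [
                "gs://my-bucket/iceberg-spark-runtime-3.5_2.12-1.6.1.jar",
                "gs://spark-lib/bigquery/iceberg-bigquery-catalog-1.6.1-1.0.2.jar",
            ],
            "queryFileUri": "gs://my-bucket/iceberg-table.sql",
        },
    },
}

# Serialize, then round-trip to confirm the body is valid JSON.
payload = json.dumps(body)
assert json.loads(payload)["job"]["sparkSqlJob"]["queryFileUri"].endswith(".sql")
```

POST the serialized payload to the URL shown above with an OAuth 2.0 access token in the Authorization header.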

You should receive a JSON response similar to the following:

{
  "reference": {
    "projectId": "PROJECT_ID",
    "jobId": "..."
  },
  "placement": {
    "clusterName": "CLUSTER_NAME",
    "clusterUuid": "..."
  },
  "status": {
    "state": "PENDING",
    "stateStartTime": "..."
  },
  "submittedBy": "USER",
  "sparkSqlJob": {
    "queryFileUri": "gs://BUCKET/iceberg-table.sql",
    "properties": {
      "spark.sql.catalog.USER_catalog": "org.apache.iceberg.spark.SparkCatalog",
      "spark.sql.catalog.USER_catalog.catalog-impl": "org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog",
      "spark.sql.catalog.USER_catalog.gcp_project": "PROJECT_ID",
      "spark.sql.catalog.USER_catalog.gcp_location": "LOCATION",
      "spark.sql.catalog.USER_catalog.warehouse": "gs://BUCKET/WAREHOUSE_FOLDER"
    },
    "jarFileUris": [
      "gs://BUCKET/iceberg-spark-runtime-3.5_2.12-1.6.1.jar",
      "BIGLAKE_ICEBERG_CATALOG_JAR"
    ]
  },
  "driverControlFilesUri": "gs://dataproc-...",
  "driverOutputResourceUri": "gs://dataproc-.../driveroutput",
  "jobUuid": "...",
  "region": "REGION"
}

To monitor job progress and view job output, go to the Managed Service for Apache Spark Jobs page in the Google Cloud console, then click the Job ID to open the Job details page.

To view table metadata in BigQuery:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery Studio

  2. View Iceberg table metadata.

Spark SQL CLI

The following steps show you how to create an Iceberg table with table metadata stored in the Lakehouse REST Catalog using the Spark SQL CLI running on the master node of a Managed Service for Apache Spark cluster.

  1. Use SSH to connect to the master node of your Managed Service for Apache Spark cluster.

  2. In the SSH session terminal, use the vi or nano text editor to copy the following commands into an iceberg-table.sql file.

    SET CATALOG_NAME = `CATALOG_NAME`;
    SET BUCKET = `BUCKET`;
    SET WAREHOUSE_FOLDER = `WAREHOUSE_FOLDER`;
    USE `${CATALOG_NAME}`;
    CREATE NAMESPACE IF NOT EXISTS `${CATALOG_NAME}`.example_namespace;
    DROP TABLE IF EXISTS `${CATALOG_NAME}`.example_namespace.example_table;
    CREATE TABLE `${CATALOG_NAME}`.example_namespace.example_table (id int, data string) USING ICEBERG LOCATION 'gs://${BUCKET}/${WAREHOUSE_FOLDER}';
    INSERT INTO `${CATALOG_NAME}`.example_namespace.example_table VALUES (1, 'first row');
    ALTER TABLE `${CATALOG_NAME}`.example_namespace.example_table ADD COLUMNS (newDoubleCol double);
    DESCRIBE TABLE `${CATALOG_NAME}`.example_namespace.example_table;
    

    Replace the following:

    • CATALOG_NAME: Iceberg catalog name.
    • BUCKET and WAREHOUSE_FOLDER: Cloud Storage bucket and folder used for the Iceberg warehouse.
  3. In the SSH session terminal, run the following spark-sql command to create the Iceberg table.

    spark-sql \
    --conf spark.sql.catalog.CATALOG_NAME=org.apache.iceberg.spark.SparkCatalog \
    --conf spark.sql.catalog.CATALOG_NAME.type=rest \
    --conf spark.sql.catalog.CATALOG_NAME.uri=https://biglake.googleapis.com/iceberg/v1/restcatalog \
    --conf spark.sql.catalog.CATALOG_NAME.warehouse=gs://BUCKET/WAREHOUSE_FOLDER \
    --conf spark.sql.catalog.CATALOG_NAME.io-impl=org.apache.iceberg.gcp.gcs.GCSFileIO \
    --conf spark.sql.catalog.CATALOG_NAME.header.x-goog-user-project=PROJECT_ID \
    --conf spark.sql.catalog.CATALOG_NAME.rest.auth.type=org.apache.iceberg.gcp.auth.GoogleAuthManager \
    --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
    -f iceberg-table.sql
    

    Replace the following:

    • PROJECT_ID: Your Google Cloud project ID.
    • CATALOG_NAME: Iceberg catalog name.
    • BUCKET and WAREHOUSE_FOLDER: Cloud Storage bucket and folder used for the Iceberg warehouse.
  4. View table metadata in BigQuery:

    1. In the Google Cloud console, go to the BigQuery page.

      Go to BigQuery Studio

    2. View Iceberg table metadata.
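
The repeated `--conf` flags in the spark-sql command above follow a single pattern, so they can be generated from a dict. In this sketch, the catalog, project, and bucket values are placeholders you replace with your own.

```python
# Generate the argument list for the spark-sql command shown above,
# suitable for printing or for passing to subprocess.run.
catalog = "my_catalog"                   # placeholder
project_id = "my-project"                # placeholder
warehouse = "gs://my-bucket/warehouse"   # placeholder

confs = {
    f"spark.sql.catalog.{catalog}": "org.apache.iceberg.spark.SparkCatalog",
    f"spark.sql.catalog.{catalog}.type": "rest",
    f"spark.sql.catalog.{catalog}.uri": "https://biglake.googleapis.com/iceberg/v1/restcatalog",
    f"spark.sql.catalog.{catalog}.warehouse": warehouse,
    f"spark.sql.catalog.{catalog}.io-impl": "org.apache.iceberg.gcp.gcs.GCSFileIO",
    f"spark.sql.catalog.{catalog}.header.x-goog-user-project": project_id,
    f"spark.sql.catalog.{catalog}.rest.auth.type": "org.apache.iceberg.gcp.auth.GoogleAuthManager",
    "spark.sql.extensions": "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
}

args = ["spark-sql"]
for key, value in confs.items():
    args += ["--conf", f"{key}={value}"]
args += ["-f", "iceberg-table.sql"]

print(" \\\n    ".join(args))
```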

Zeppelin web interface

The following steps show you how to create an Iceberg table with table metadata stored in the Lakehouse REST Catalog using the Zeppelin web interface running on the master node of a Managed Service for Apache Spark cluster.

  1. In the Google Cloud console, go to the Managed Service for Apache Spark Clusters page.

    Go to the Managed Service for Apache Spark Clusters page

  2. Select your cluster name to open the Cluster details page.

  3. Click the Web Interfaces tab to display a list of Component Gateway links to the web interfaces of default and optional components installed on the cluster.

  4. Click the Zeppelin link to open the Zeppelin web interface.

  5. From the Zeppelin notebook menu, click Create new note.

  6. In the Create new note dialog, enter a name for the notebook, and accept the default spark interpreter. Click Create to open the notebook.

  7. Copy the following PySpark code into your Zeppelin notebook after filling in the variables.

    %pyspark
    from pyspark.sql import SparkSession

    project_id = "PROJECT_ID"
    catalog = "CATALOG_NAME"
    namespace = "NAMESPACE"
    warehouse_dir = "gs://BUCKET/WAREHOUSE_DIRECTORY"

    spark = SparkSession.builder \
        .appName("Lakehouse REST Catalog Iceberg") \
        .config(f"spark.sql.catalog.{catalog}", "org.apache.iceberg.spark.SparkCatalog") \
        .config(f"spark.sql.catalog.{catalog}.type", "rest") \
        .config(f"spark.sql.catalog.{catalog}.uri", "https://biglake.googleapis.com/iceberg/v1/restcatalog") \
        .config(f"spark.sql.catalog.{catalog}.warehouse", warehouse_dir) \
        .config(f"spark.sql.catalog.{catalog}.io-impl", "org.apache.iceberg.gcp.gcs.GCSFileIO") \
        .config(f"spark.sql.catalog.{catalog}.header.x-goog-user-project", project_id) \
        .config(f"spark.sql.catalog.{catalog}.rest.auth.type", "org.apache.iceberg.gcp.auth.GoogleAuthManager") \
        .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions") \
        .getOrCreate()

    spark.sql(f"USE `{catalog}`;")
    spark.sql(f"CREATE NAMESPACE IF NOT EXISTS `{namespace}`;")
    spark.sql(f"USE `{namespace}`;")

    # Create table and display schema.
    spark.sql("DROP TABLE IF EXISTS example_iceberg_table")
    spark.sql("CREATE TABLE example_iceberg_table (id int, data string) USING ICEBERG")
    spark.sql("DESCRIBE example_iceberg_table;")

    # Insert table data.
    spark.sql("INSERT INTO example_iceberg_table VALUES (1, 'first row');")

    # Alter table, then display schema.
    spark.sql("ALTER TABLE example_iceberg_table ADD COLUMNS (newDoubleCol double);")

    # Select and display the contents of the table.
    spark.sql("SELECT * FROM example_iceberg_table").show()

    Replace the following:

    • PROJECT_ID: Your Google Cloud project ID.
    • CATALOG_NAME and NAMESPACE: The Iceberg catalog name and namespace combine to identify the Iceberg table (catalog.namespace.table_name).
    • BUCKET and WAREHOUSE_DIRECTORY: Cloud Storage bucket and folder used as Iceberg warehouse directory.
  8. Click the run icon or press Shift-Enter to run the code.

  9. View table metadata in BigQuery:

    1. In the Google Cloud console, go to the BigQuery page.

      Go to BigQuery Studio

    2. View Iceberg table metadata.