BigLake tables for Apache Iceberg (hereafter BigLake Iceberg tables) are Iceberg tables that you create from open source engines and store in Cloud Storage. Like all tables that use BigLake metastore, they can be read by open source engines and BigQuery. However, the open source engine that created the table is the only engine that can write to it.
Before you begin
- Verify that billing is enabled for your Google Cloud project.
- Enable the BigLake API.

Roles required to enable APIs

To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
Required roles
To get the permissions that you need to manage BigLake Iceberg tables, ask your administrator to grant you the following IAM roles:
- Create and delete tables:
  - BigLake Admin (roles/biglake.admin) on the project
  - Storage Admin (roles/storage.admin) on the Cloud Storage bucket
- Read table data in credential vending mode:
  - BigLake Viewer (roles/biglake.viewer) on the project
- Write table data in credential vending mode:
  - BigLake Editor (roles/biglake.editor) on the project
- Read table data in non-credential vending mode:
  - BigLake Viewer (roles/biglake.viewer) on the project
  - Storage Object Viewer (roles/storage.objectViewer) on the Cloud Storage bucket
- Write table data in non-credential vending mode:
  - BigLake Editor (roles/biglake.editor) on the project
  - Storage Object User (roles/storage.objectUser) on the Cloud Storage bucket
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
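Before running the Spark examples in the following sections, your Spark session needs a catalog that points at BigLake metastore. The following is a minimal PySpark sketch using the standard Iceberg REST catalog properties; the endpoint, catalog name, bucket, project, and token shown here are assumptions and placeholders, so check the Iceberg REST catalog documentation for the values that apply to your setup:

from pyspark.sql import SparkSession

catalog = "CATALOG_NAME"  # hypothetical catalog name

spark = (
    SparkSession.builder
    .appName("biglake-iceberg")
    # The Iceberg Spark runtime JAR must be on the classpath.
    .config(f"spark.sql.catalog.{catalog}", "org.apache.iceberg.spark.SparkCatalog")
    .config(f"spark.sql.catalog.{catalog}.type", "rest")
    # Assumed BigLake Iceberg REST catalog endpoint; verify it for your project.
    .config(f"spark.sql.catalog.{catalog}.uri", "https://biglake.googleapis.com/iceberg/v1beta/restcatalog")
    .config(f"spark.sql.catalog.{catalog}.warehouse", "gs://BUCKET_NAME")
    .config(f"spark.sql.catalog.{catalog}.header.x-goog-user-project", "PROJECT_NAME")
    .config(f"spark.sql.catalog.{catalog}.token", "ACCESS_TOKEN")  # OAuth 2.0 access token
    .getOrCreate()
)

spark.sql(f"USE {catalog}")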
Create a BigLake Iceberg table
Spark
spark.sql("CREATE TABLE NAMESPACE_NAME.TABLE_NAME (id int, data string) USING ICEBERG;") spark.sql("DESCRIBE NAMESPACE_NAME.TABLE_NAME").show()
Replace the following:
- NAMESPACE_NAME: the name of your namespace
- TABLE_NAME: a name for your table
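The CREATE TABLE statement assumes that NAMESPACE_NAME already exists. If it doesn't, you can create it first; CREATE NAMESPACE is standard Spark SQL for Iceberg catalogs:

spark.sql("CREATE NAMESPACE IF NOT EXISTS NAMESPACE_NAME;")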
Trino
CREATE TABLE SCHEMA_NAME.TABLE_NAME (id int, data varchar);
DESCRIBE SCHEMA_NAME.TABLE_NAME;
Replace the following:
- SCHEMA_NAME: the name of your schema
- TABLE_NAME: a name for your table
List tables
Spark
spark.sql("SHOW TABLES").show()
Trino
SHOW TABLES;
Insert data into the table
The following example inserts sample data into the table:
Spark
spark.sql("INSERT INTO TABLE_NAME VALUES (1, \"first row\"), (2, \"second row\"), (3, \"third row\");")
Trino
INSERT INTO TABLE_NAME VALUES (1, 'first row'), (2, 'second row'), (3, 'third row');
Query a table
The following example selects all data from the table:
Spark
spark.sql("SELECT * FROM TABLE_NAME;").show()
Trino
SELECT * FROM TABLE_NAME;
BigQuery
SELECT * FROM `PROJECT_NAME.WAREHOUSE_NAME.NAMESPACE_OR_SCHEMA_NAME.TABLE_NAME`;
Replace the following:
- PROJECT_NAME: the project that is billed for using BigLake metastore.
- WAREHOUSE_NAME: the name of the Cloud Storage bucket. For example, if your URI is gs://iceberg_bucket, use iceberg_bucket.
- NAMESPACE_OR_SCHEMA_NAME: the table namespace if using Spark, or the table schema name if using Trino.
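If you query the table from application code rather than the BigQuery console, the following is a minimal sketch using the BigQuery Python client library (google-cloud-bigquery); the placeholders are the same as in the query above:

from google.cloud import bigquery

# Uses application default credentials; PROJECT_NAME is the billed project.
client = bigquery.Client(project="PROJECT_NAME")
sql = """
    SELECT *
    FROM `PROJECT_NAME.WAREHOUSE_NAME.NAMESPACE_OR_SCHEMA_NAME.TABLE_NAME`
"""
for row in client.query(sql).result():
    print(dict(row))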
Alter a table
The following example adds a column to the table:
Spark
spark.sql("ALTER TABLE TABLE_NAME ADD COLUMNS ( desc string);") spark.sql("DESCRIBE NAMESPACE_NAME.TABLE_NAME").show()
Trino
ALTER TABLE TABLE_NAME ADD COLUMN desc varchar;
DESCRIBE SCHEMA_NAME.TABLE_NAME;
Delete a table
Spark
spark.sql("DROP TABLE TABLE_NAME;")
Trino
DROP TABLE TABLE_NAME;
Pricing
For pricing details, see BigLake pricing.
What's next
- Learn about the Iceberg REST catalog.