This document explains how to reuse data quality rules in Knowledge Catalog (formerly Dataplex Universal Catalog) to define and manage standardized business rules.
Rule reuse lets you share complex or standardized business rule definitions across multiple rules and data quality scans by using rule templates. This document also explains how to configure, create, and manage reusable rule templates, and how to attach data quality rules to catalog entries as metadata aspects.
Use cases
You can use data quality rule reuse in the following cases:
- Standardize and share rule definitions: use custom rule templates to store complex or standardized business rule definitions. This reduces the time and effort needed to distribute common definitions by using templated SQL expressions. For example, a central data governance team can define a valid email address or valid social security number template that is reused across the organization, which ensures consistency and reduces the operational overhead of managing duplicate rules.
- Implement governance-driven quality: declare data rules as metadata by using Knowledge Catalog aspects on BigQuery table entries and business glossary terms. This makes your rules searchable and reusable. For example, when you link a column to a glossary term, the column can automatically inherit the validation rules defined for that term, which enables automated governance through semantic metadata inheritance.
- Search for and discover reusable rules: find existing rules in your organization by using semantic search. This lets data analysts and engineers discover validated, standardized rule sets (such as "Financial reference constants") and bootstrap data quality for new projects without writing SQL from scratch.
- Eliminate cold-start problems: use system rule templates for frequently used checks, such as null checks or range expectations. These built-in templates let you quickly set up data quality monitoring for common use cases without writing custom SQL.
- Separate concerns: let central governance teams create validated rule templates while engineering teams focus on applying those rules to their data assets without writing or maintaining complex SQL. This clear division of responsibilities improves organizational agility and ensures that data quality standards are applied consistently across the enterprise.
Before you begin
Enable the Dataplex API.
Required roles for enabling APIs
To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
Before you use data quality rule reuse, make sure that you meet the following requirements.
Set up the Dataplex API environment
To use the REST API examples in this document, set up an alias for gcurl and set the ${DATAPLEX_API} environment variable.
Define a gcurl alias. This creates a shortcut that includes your authentication token and sets the JSON content type for API requests:
alias gcurl='curl -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json"'
Set the DATAPLEX_API variable:
DATAPLEX_API="dataplex.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION"
Replace the following:
- PROJECT_ID: your project ID.
- LOCATION: the location of your scan or resources (for example, us-central1).
Set up a service account
A service account is required to run data quality scans with reusable rules. Create a service account that has the following Identity and Access Management roles and permissions:
- You need the iam.serviceAccounts.actAs permission on the project that hosts the service account (typically through the roles/iam.serviceAccountUser role).
- Grant the Dataplex service agent (service-PROJECT_ID@gcp-sa-dataplex.iam.gserviceaccount.com) for the scan project the iam.serviceAccounts.getAccessToken permission on the service account (for example, through the roles/iam.serviceAccountTokenCreator role).
- The service account must have the following permissions:
  - bigquery.tables.getData on the table to scan (for example, through roles/bigquery.dataViewer).
  - bigquery.jobs.insert in the scan project (for example, through roles/bigquery.jobUser).
  - roles/bigquery.dataEditor on the export dataset (if you use export).
Required roles and permissions
Make sure that you have the following IAM roles for your specific tasks:
- Manage data scans: the required data scan roles to manage data scan resources.
- Manage rule templates: to create or update rule templates, you need permissions to manage entries in the rule template's entry group or project. Specifically, roles/dataplex.catalogEditor or roles/dataplex.entryOwner grant these permissions.
- Reference rule templates from rules: you need the dataplex.entries.get and dataplex.entries.getData permissions on the entry group or project of the rule template that a rule references.
- Attach data quality rules to BigQuery tables: to attach data quality rules as Knowledge Catalog metadata, you need one of the following:
  - bigquery.tables.update or roles/bigquery.dataEditor on the table, and dataplex.entryGroups.useDataRulesAspect on the @bigquery entry group in the table's location.
  - roles/dataplex.catalogEditor on the @bigquery entry group.
- Attach data quality rules to business glossary terms: to attach data quality rules as Knowledge Catalog metadata, you need one of the following:
  - dataplex.glossaryTerms.update on the term, and dataplex.entryGroups.useDataRulesAspect on the @dataplex entry group.
  - roles/dataplex.catalogEditor on the @dataplex entry group.
- Create data quality scans with entry-based rules: you need one of the following:
  - bigquery.tables.get and bigquery.tables.getData on the table.
  - dataplex.entries.get and dataplex.entries.getData on the @bigquery entry group in the table's location.
SQL query syntax for rule templates
When you write the SQL logic for a rule template, you must provide a statement that returns invalid rows. If the query returns rows, the rule fails. For more information, see SqlAssertion.
Follow these guidelines when writing SQL for a rule template:
- Omit the trailing semicolon from the SQL statement.
- Use ${param(name)} to reference input parameters, for example, ${param(min_value)}.
- Use $${...} to escape a literal ${...} and prevent it from being replaced as a parameter.
- Parameter variables are case-sensitive.
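These substitution rules can be illustrated with a small sketch. The following Python function is a hypothetical, client-side illustration of the documented behavior (case-sensitive `${param(name)}` replacement and `$${...}` escaping); it is not the service's actual template engine.

```python
import re

def render_template(sql: str, params: dict) -> str:
    """Hypothetical sketch of the documented substitution rules; this is
    not the service's actual template engine."""
    # Stash $${...} escapes first so they are not treated as parameters.
    stash = []
    def _stash(m):
        stash.append(m.group(1))
        return f"\x00{len(stash) - 1}\x00"
    sql = re.sub(r"\$\$(\{[^}]*\})", _stash, sql)

    # Replace ${param(name)} references; parameter names are case-sensitive.
    def _substitute(m):
        name = m.group(1)
        if name not in params:
            raise KeyError(f"missing parameter: {name}")
        return str(params[name])
    sql = re.sub(r"\$\{param\(([^)]+)\)\}", _substitute, sql)

    # Restore escaped sequences as literal ${...}.
    return re.sub("\x00(\\d+)\x00", lambda m: "$" + stash[int(m.group(1))], sql)
```

Because parameter names are case-sensitive, a reference such as `${param(Min)}` does not match a parameter named `min`.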
System-supported parameters
You can use the following system-supported parameters in your rule template SQL:
- ${project()}: The project ID of the resource being scanned.
- ${dataset()}: The BigQuery dataset ID of the resource being scanned, formatted as PROJECT_ID.DATASET_ID.
- ${table()}: The BigQuery table ID of the resource being scanned, formatted as PROJECT_ID.DATASET_ID.TABLE_ID.
- ${column()}: The column the rule is evaluated on. An error occurs during rule evaluation if the rule is attached at the table level but references ${column()}.
- ${data()}: A reference to the data source table and all of its precondition filters, such as row filters, sampling percentages, and incremental filters defined in the scan specification. For more information, see Data reference parameter.
Example 1: Validate column values are between two values
The following example validates that all values in a column are between a minimum and maximum value:
SELECT *
FROM ${data()}
WHERE
NOT ((${column()}>=${param(min_value)} AND ${column()}<=${param(max_value)}) IS TRUE)
Note the following:
- Using NOT(condition) IS TRUE returns invalid rows, including rows with NULL values in the column.
- Using ${data()} limits the scope of rows evaluated to the source table and its filters, such as row filters, sampling percentages, and incremental filters.
- Using ${column()} lets you reference the column that the rule using this template is evaluated on.
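To see why `NOT (condition) IS TRUE` flags NULL values, the following Python sketch mimics SQL three-valued logic for the range check above. The function name and the modeling of NULL as `None` are illustrative assumptions, not part of the service.

```python
def is_invalid_row(value, min_value, max_value):
    """Illustrative sketch of NOT (condition) IS TRUE under SQL
    three-valued logic; NULL is modeled as Python None."""
    if value is None:
        condition = None                 # comparisons with NULL yield NULL
    else:
        condition = min_value <= value <= max_value
    # A row is invalid whenever the condition is not strictly TRUE,
    # which includes the NULL (unknown) case.
    return condition is not True
```

A plain `NOT (condition)` would evaluate to NULL for NULL inputs and the row would not be returned; wrapping the check in `IS TRUE` is what makes NULLs count as invalid.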
Example 2: Foreign key validation
The following example verifies that each value in a column exists in a primary key column of another table:
SELECT t.*
FROM ${data()} AS t
LEFT JOIN `${param(reference_table)}` AS s
ON t.${column()} = s.`${param(reference_column)}`
WHERE s.`${param(reference_column)}` IS NULL
Input parameters for this template are as follows:
- reference_table: The name of the reference table containing the primary keys. Use the format PROJECT_ID.DATASET_ID.TABLE_ID.
- reference_column: The name of the primary key column in the reference table.
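The anti-join in this template can be sketched in Python. This illustrative helper (the name is hypothetical) returns the rows whose key value is absent from the reference column, mirroring the `LEFT JOIN ... WHERE ... IS NULL` pattern.

```python
def foreign_key_violations(rows, key, reference_values):
    """Illustrative anti-join sketch: returns the rows whose key column
    value is missing from the reference (primary key) column, like
    LEFT JOIN ... WHERE reference_column IS NULL."""
    reference = set(reference_values)
    return [row for row in rows if row[key] not in reference]
```

Any returned row is an invalid row, so the rule fails whenever at least one value has no match in the reference table.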
System rule templates
Knowledge Catalog provides system rule templates that can be used in any
region. Knowledge Catalog manages these templates in the
dataplex-templates project under the rule-library entry group. An example of
a full resource name is
projects/dataplex-templates/locations/global/entryGroups/rule-library/entries/non_null_expectation.
To view the list of all the available system rule templates, see System rule templates list.
To find the available list of system rule templates, select one of the following options:
Console
In the Google Cloud console, go to the Data profiling & quality page.
Click Rule libraries > System.
To see the list of available system rule templates, click rule-library.
When creating a new rule, you can select the system rule templates in the Choose rule types menu.
REST
To find the available list of system rule templates, use the entries.list method:
gcurl "https://dataplex.googleapis.com/v1/projects/dataplex-templates/locations/global/entryGroups/rule-library/entries"
Known differences between system rule templates and built-in rules
The following table describes the differences between system rule templates and built-in rules:
| Feature | System rule templates | Built-in rules |
|---|---|---|
| Source | Reusable templates in the catalog | Built into the API |
| Referencing | Can be referenced by catalog entries and scans | Can only be used in scans |
The following list describes additional differences in how metrics are calculated for system rule templates:
- Assertion Row Count metric: This metric is populated for all template reference rules, not just SQL assertion rules.
- Statistic Range Expectation rule template: Rule metrics from evaluating rules that reference this template don't contain the nullCount metric. Because it is an aggregate rule, the ignore null capability isn't supported, and rule success is determined by the aggregate statistic being within the defined range.
- Uniqueness Expectation rule template: This template calculates passedCount differently than the built-in UniquenessExpectation rule. The rule template returns all rows for which duplicate values or null rows exist, which can result in fewer passing rows if duplicates are present. For example, if a column contains the values (a, a, b, b, c, d, e):
  - Built-in uniqueness rule: Returns 5 passing rows: (a, b, c, d, e).
  - Uniqueness rule template: Returns 4 failing rows: (a, a, b, b). The number of passing rows is 3 (7 total rows minus 4 failed rows): (c, d, e).
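The two counting methods can be reproduced with a short sketch. The helper below is illustrative only; it computes both counts for a list of column values using the logic described above.

```python
from collections import Counter

def uniqueness_counts(values):
    """Illustrative sketch of the two passedCount calculations:
    built-in rule counts one passing row per distinct value, while the
    template counts only rows whose value is not duplicated at all."""
    counts = Counter(values)
    built_in_passed = len(counts)                           # distinct values
    failed_rows = sum(c for c in counts.values() if c > 1)  # every duplicated row
    template_passed = len(values) - failed_rows
    return built_in_passed, template_passed
```

For the (a, a, b, b, c, d, e) example this yields 5 passing rows for the built-in rule and 3 for the template.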
Metadata aspects
This section describes the fields and values for the data-rules and
data-quality-rule-template aspect types.
data-rules aspect fields
To define data rules, use the dataplex-types.global.data-rules aspect. The following table describes the fields for this aspect.
| Field | Type | Description |
|---|---|---|
| rules | Array | Required. A list of data quality rules. |
| rules[].name | String | Required. A name for the rule. |
| rules[].dimension | String | Optional. The data quality dimension for the rule. |
| rules[].description | String | Optional. The description of the rule. |
| rules[].suspended | Boolean | Optional. Whether the rule is active or suspended. Default is false. |
| rules[].threshold | Double | Optional. The passing threshold for the rule, from 0.0 to 1.0. Default is 1.0. |
| rules[].type | Enum | Required. The type of the rule. The only supported value is TEMPLATE_REFERENCE. |
| rules[].ignore_null | Boolean | Optional. If true, rows with null values in the column are ignored when determining the success criteria. |
| rules[].attributes | Map | Optional. Custom key-value pairs associated with the rule. |
| rules[].templateReference | Object | Required. A reference to the rule template. |
| rules[].templateReference.name | String | Required. The resource name of the rule template. |
| rules[].templateReference.values | Map | Optional. The parameter names and values for the rule template. |
| rules[].templateReference.values[].parameterValue.value | String | Required. The value for the parameter. |
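As a sketch of how the threshold field is typically applied, the following function assumes a rule passes when the fraction of passing rows is at least the threshold (the documented default of 1.0 then requires every evaluated row to pass). The zero-row behavior is an assumption for illustration.

```python
def rule_passes(passed_count, evaluated_count, threshold=1.0):
    """Sketch of a passing-threshold check. Assumption: a rule passes
    when the fraction of passing rows is at least the threshold; with
    the default threshold of 1.0, every evaluated row must pass."""
    if evaluated_count == 0:
        return True  # assumption for illustration: nothing to evaluate
    return passed_count / evaluated_count >= threshold
```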
The following example shows a data-rules aspect in a payload.json file:
{
"aspects": {
"dataplex-types.global.data-rules": {
"data": {
"rules": [
{
"name": "valid-email",
"dimension": "VALIDITY",
"type": "TEMPLATE_REFERENCE",
"templateReference": {
"name": "projects/my-project/locations/us-central1/entryGroups/my-rules/entries/email-check",
"values": {
"column_name": {
"value": "email"
}
}
}
}
]
}
}
}
}
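Before sending such a payload, you might validate it client-side against the field table above. The following sketch is an informal check (not an official validator) of the required fields of the data-rules aspect.

```python
# Required per-rule fields from the data-rules field table above.
REQUIRED_RULE_FIELDS = {"name", "type", "templateReference"}

def validate_data_rules_aspect(payload):
    """Informal client-side check of a data-rules aspect payload;
    not an official validator."""
    problems = []
    rules = (payload.get("aspects", {})
                    .get("dataplex-types.global.data-rules", {})
                    .get("data", {})
                    .get("rules", []))
    if not rules:
        problems.append("rules is required and must be non-empty")
    for i, rule in enumerate(rules):
        missing = REQUIRED_RULE_FIELDS - rule.keys()
        if missing:
            problems.append(f"rules[{i}] missing: {sorted(missing)}")
        elif rule.get("type") != "TEMPLATE_REFERENCE":
            problems.append(f"rules[{i}].type must be TEMPLATE_REFERENCE")
        elif "name" not in rule["templateReference"]:
            problems.append(f"rules[{i}].templateReference.name is required")
    return problems
```

Running this against the payload.json example above returns an empty list of problems.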
data-quality-rule-template aspect fields
Use the data-quality-rule-template aspect to define a custom data quality
rule template. The following table describes the fields for the
dataplex-types.global.data-quality-rule-template aspect.
| Field | Type | Description |
|---|---|---|
| dimension | String | Required. The dimension for the rule template. |
| sqlCollection | Array | Required. A list of SQL queries for the rule template. |
| sqlCollection[].sql.query | String | Required. The SQL query that returns invalid rows. |
| inputParameters | Map | Optional. A map of input parameters for the rule template. |
| inputParameters[].parameterDescription.description | String | Optional. The description of the input parameter. |
| inputParameters[].parameterDescription.defaultValue | String | Optional. The default value for the parameter if no value is provided. |
| capabilities | Array | Optional. A list of template capabilities, such as THRESHOLD or IGNORE_NULL. |
The following example displays the structure of a data-quality-rule-template
aspect:
{
"entryType": "projects/dataplex-types/locations/global/entryTypes/data-quality-rule-template",
"aspects": {
"dataplex-types.global.data-quality-rule-template": {
"data": {
"dimension": "COMPLETENESS",
"sqlCollection": [
{
"query": "SELECT * FROM ${data()} WHERE ${column()} > ${param(p1)}"
}
],
"inputParameters": {
"p1": {
"description": "The parameter description"
}
},
"capabilities": [
"THRESHOLD",
"IGNORE_NULL"
]
}
}
}
}
Manage data quality rule templates
This section describes how to create, edit, and delete rule templates.
Create a rule library
To create a rule library, you must create a Knowledge Catalog entry group.
Console
In the Google Cloud console, go to the Data profiling & quality page.
Go to Rule libraries > Custom, and click Create.
In the Create rule library window, fill in the following fields:
- Optional: Enter a display name.
- In Rule library ID, enter an ID. For more information, see the resource naming conventions.
- Optional: Enter a description.
- In the Location menu, select a location. It can't be changed later.
- Optional: Add labels. Labels are key-value pairs that let you group related objects together or with other Google Cloud resources.
- Click Save.
REST
To create a rule library by using the API, you must create an entry group with
the required label goog-dataplex-entry-group-type: rule_library:
gcurl -X POST "https://${DATAPLEX_API}/entryGroups?entryGroup_id=RULE_LIBRARY_ID" \
--data @- << EOF
{
  "labels": {
    "goog-dataplex-entry-group-type": "rule_library"
  },
  "description": "DESCRIPTION"
}
EOF
Replace the following:
- RULE_LIBRARY_ID: a unique ID for your rule library.
- DESCRIPTION: an optional description for the rule library.
Terraform
To create a rule library, use the google_dataplex_entry_group resource:
resource "google_dataplex_entry_group" "rule_library" {
  project        = "PROJECT_ID"
  location       = "LOCATION"
  entry_group_id = "RULE_LIBRARY_ID"
  description    = "DESCRIPTION"
  labels = {
    "goog-dataplex-entry-group-type" = "rule_library"
  }
}
Replace the following:
- PROJECT_ID: your project ID.
- LOCATION: the location for your rule library (for example, us-central1).
- RULE_LIBRARY_ID: a unique ID for your rule library.
- DESCRIPTION: an optional description for the rule library.
Create a rule template
To create a custom rule template, select one of the following:
Console
In the Google Cloud console, go to the Data profiling & quality page.
Go to Rule libraries > Custom.
Click the rule library where you want to add a template, and then click Create.
In the Create rule template window, fill in the following fields:
- Optional: Enter a name for the template.
- In Template ID, enter an ID. For more information, see the resource naming conventions.
- Optional: Enter a description.
- In the Dimension menu, select a dimension. For more information, see Dimensions.
In the SQL query field, enter the following example query that validates each column value is between two values:
SELECT * FROM ${data()} WHERE NOT(${column()}>=${param(min_value)} AND ${column()}<=${param(max_value)}) IS TRUE
Optional: To enable the rule referencing this template to specify a threshold for success criteria, select Support threshold.
Optional: To allow rules referencing this template to ignore null values in the column for determining success criteria, select Support ignore null.
In Input Parameters, click Add input parameter, and then for each parameter used in the SQL query, enter an input name, description, and default value. In the preceding example, the names would be min_value and max_value.
Click Save.
REST
To create a custom rule template, create an entry of type
data-quality-rule-template:
gcurl -X POST "https://${DATAPLEX_API}/entryGroups/ENTRY_GROUP_ID/entries?entry_id=TEMPLATE_ID" \
--data @- << EOF
{
  "entryType": "projects/dataplex-types/locations/global/entryTypes/data-quality-rule-template",
  "entrySource": {
    "displayName": "DISPLAY_NAME",
    "description": "DESCRIPTION"
  },
  "aspects": {
    "dataplex-types.global.data-quality-rule-template": {
      "data": {
        "dimension": "VALIDITY",
        "sqlCollection": [
          {
            "query": "SELECT t.* FROM ${data()} AS t LEFT JOIN `${param(reference_table)}` AS s ON t.${column()} = s.`${param(reference_column)}` WHERE s.`${param(reference_column)}` IS NULL"
          }
        ],
        "inputParameters": {
          "PARAMETER_NAME": {
            "description": "PARAMETER_DESCRIPTION"
          }
        },
        "capabilities": ["THRESHOLD"]
      }
    }
  }
}
EOF
Replace the following:
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: a unique ID for your rule template.
- DISPLAY_NAME: a display name for the rule template.
- DESCRIPTION: a description of the rule template.
- PARAMETER_NAME: the name of an input parameter used in the SQL query.
- PARAMETER_DESCRIPTION: a description of the input parameter.
Terraform
To create a custom rule template, use the google_dataplex_entry resource:
resource "google_dataplex_entry" "rule_template" {
  project        = "PROJECT_ID"
  location       = "LOCATION"
  entry_id       = "TEMPLATE_ID"
  entry_group_id = "ENTRY_GROUP_ID"
  entry_type     = "projects/dataplex-types/locations/global/entryTypes/data-quality-rule-template"
  entry_source {
    display_name = "DISPLAY_NAME"
    description  = "DESCRIPTION"
  }
  aspects {
    aspect_key = "dataplex-types.global.data-quality-rule-template"
    aspect {
      data = jsonencode({
        dimension = "VALIDITY"
        sqlCollection = [
          {
            query = "SELECT t.* FROM $${data()} AS t LEFT JOIN `$${param(reference_table)}` AS s ON t.$${column()} = s.`$${param(reference_column)}` WHERE s.`$${param(reference_column)}` IS NULL"
          }
        ]
        inputParameters = {
          "PARAMETER_NAME" = {
            description = "PARAMETER_DESCRIPTION"
          }
        }
        capabilities = ["THRESHOLD"]
      })
    }
  }
}
Replace the following:
- PROJECT_ID: your project ID.
- LOCATION: the location for your rule template (for example, us-central1).
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: a unique ID for your rule template.
- DISPLAY_NAME: a display name for the rule template.
- DESCRIPTION: a description of the rule template.
- PARAMETER_NAME: the name of an input parameter used in the SQL query.
- PARAMETER_DESCRIPTION: a description of the input parameter.
Update a rule template
To update an existing rule template, select one of the following options:
Console
In the Google Cloud console, go to the Data profiling & quality page.
Go to Rule libraries > Custom.
Click the rule library that contains the template you want to update.
In the Rule templates list, click the template that you want to update.
On the rule template details page, click Edit.
Update the fields, and then click Save.
REST
To update a custom rule template, patch the entry or specific aspect:
gcurl -X PATCH "https://${DATAPLEX_API}/entryGroups/ENTRY_GROUP_ID/entries/TEMPLATE_ID?updateMask=aspects" \
--data @- << EOF
{
  "aspects": {
    "dataplex-types.global.data-quality-rule-template": {
      "data": {
        "dimension": "VALIDITY",
        "sqlCollection": [
          {
            "query": "SELECT * FROM ${data()} WHERE ${column()} IS NOT NULL"
          }
        ]
      }
    }
  }
}
EOF
Replace the following:
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: the ID of the rule template that you want to update.
Terraform
To update a custom rule template, use the google_dataplex_entry resource:
resource "google_dataplex_entry" "rule_template" {
  project        = "PROJECT_ID"
  location       = "LOCATION"
  entry_id       = "TEMPLATE_ID"
  entry_group_id = "ENTRY_GROUP_ID"
  entry_type     = "projects/dataplex-types/locations/global/entryTypes/data-quality-rule-template"
  aspects {
    aspect_key = "dataplex-types.global.data-quality-rule-template"
    aspect {
      data = jsonencode({
        dimension = "VALIDITY"
        sqlCollection = [
          {
            query = "SELECT * FROM $${data()} WHERE $${column()} IS NOT NULL"
          }
        ]
      })
    }
  }
}
Replace the following:
- PROJECT_ID: your project ID.
- LOCATION: the location for your rule template (for example, us-central1).
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: the ID of the rule template that you want to update.
Delete a rule template
To delete an existing rule template, select one of the following options:
Console
In the Google Cloud console, go to the Data profiling & quality page.
Go to Rule libraries > Custom.
Click the rule library that contains the template you want to delete.
In the Rule templates list, click the template that you want to delete.
Click Delete, and then click Delete again to confirm.
REST
To delete a custom rule template, delete the entry:
gcurl -X DELETE \
"https://${DATAPLEX_API}/entryGroups/ENTRY_GROUP_ID/entries/TEMPLATE_ID"
Replace the following:
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: the ID of the rule template that you want to delete.
Create a data quality scan using template rules
Use your custom templates to define rules for a data quality scan.
Console
In the Google Cloud console, go to the Data profiling & quality page.
Follow the steps to create a data quality scan, but update the following:
- In the Define scan window, in the Credential type menu, select Service account, and then enter a service account. A service account is mandatory for using rule templates.
- In the Data quality rules window, define the rules to configure for this data quality scan:
- Click Add rules > Template rules.
- You can either select Attach rule to entire table, or in Choose columns, browse and select the columns to apply rules for.
- In Choose rule templates, select the rule templates to use. Only the rule templates in the same location as the scan or in a global location can be used. Alternatively, you can also select system rule templates from the list.
- Click OK.
- Click Edit rule, and then add rule specific parameters.
- Click Save.
- Select the rules that you want to add, and then click Select. The rules are now added to your current rules list.
- Optional: Repeat the previous steps to add additional rules to the data quality scan.
- Click Continue.
- Proceed with the remaining scan configuration.
- Click Create to only create the scan, or click Run scan to create and immediately run the scan.
REST
To create a data quality scan that references a rule template, specify the templateReference. Custom rule templates use project-specific paths, while system rule templates use a global path: projects/dataplex-templates/locations/global/entryGroups/rule-library/entries/SYSTEM_TEMPLATE_ID.
The following example creates a scan that uses a custom rule template and includes a filter to selectively run rules:
gcurl -X POST "https://${DATAPLEX_API}/dataScans?data_scan_id=DATASCAN_ID" \
--data @- << EOF
{
  "data": {
    "resource": "//bigquery.googleapis.com/projects/BIGQUERY_PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID"
  },
  "executionIdentity": {
    "serviceAccount": {
      "email": "SERVICE_ACCOUNT_EMAIL"
    }
  },
  "executionSpec": {
    "trigger": {
      "onDemand": {}
    }
  },
  "type": "DATA_QUALITY",
  "dataQualitySpec": {
    "rules": [
      {
        "templateReference": {
          "name": "projects/PROJECT_ID/locations/LOCATION/entryGroups/ENTRY_GROUP_ID/entries/TEMPLATE_ID",
          "values": {
            "PARAMETER_NAME": {
              "value": "PARAMETER_VALUE"
            }
          }
        },
        "column": "COLUMN_NAME",
        "name": "RULE_NAME"
      }
    ],
    "filter": "FILTER_CONDITION"
  }
}
EOF
Replace the following:
- PROJECT_ID: your project ID.
- LOCATION: the location for your rule template and scan (for example, us-central1).
- DATASCAN_ID: the ID of the data quality scan.
- BIGQUERY_PROJECT_ID: the project ID of the BigQuery table.
- DATASET_ID: the BigQuery dataset ID.
- TABLE_ID: the BigQuery table ID.
- SERVICE_ACCOUNT_EMAIL: the email ID of the service account to run the scan.
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: the ID of the custom rule template.
- SYSTEM_TEMPLATE_ID: the ID of the system rule template (for example, non_null_expectation).
- PARAMETER_NAME: the name of an input parameter for the rule template.
- PARAMETER_VALUE: the value for the input parameter.
- COLUMN_NAME: the column to apply the rule to.
- RULE_NAME: a name for the rule instance.
- FILTER_CONDITION: an optional AIP-160 filter string to selectively run rules (for example, name = \"RULE_NAME\").
Terraform
To create a data quality scan that references a rule template, use the google_dataplex_datascan resource:
resource "google_dataplex_datascan" "scan" {
  data_scan_id = "DATASCAN_ID"
  location     = "LOCATION"
  project      = "PROJECT_ID"
  data {
    resource = "//bigquery.googleapis.com/projects/BIGQUERY_PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID"
  }
  execution_spec {
    service_account = "SERVICE_ACCOUNT_EMAIL"
    trigger {
      on_demand {}
    }
  }
  data_quality_spec {
    rules {
      column    = "COLUMN_NAME"
      name      = "RULE_NAME"
      dimension = "VALIDITY"
      template_reference {
        name = "projects/PROJECT_ID/locations/LOCATION/entryGroups/ENTRY_GROUP_ID/entries/TEMPLATE_ID"
        values = {
          "PARAMETER_NAME" = {
            value = "PARAMETER_VALUE"
          }
        }
      }
    }
    filter = "FILTER_CONDITION"
  }
}
Replace the following:
- PROJECT_ID: your project ID.
- LOCATION: the location for your rule template and scan (for example, us-central1).
- DATASCAN_ID: the ID of the data quality scan.
- BIGQUERY_PROJECT_ID: the project ID of the BigQuery table.
- DATASET_ID: the BigQuery dataset ID.
- TABLE_ID: the BigQuery table ID.
- SERVICE_ACCOUNT_EMAIL: the email ID of the service account to run the scan.
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: the ID of the custom rule template.
- PARAMETER_NAME: the name of an input parameter for the rule template.
- PARAMETER_VALUE: the value for the input parameter.
- COLUMN_NAME: the column to apply the rule to.
- RULE_NAME: a name for the rule instance.
- FILTER_CONDITION: an optional AIP-160 filter string to selectively run rules.
Run and monitor data quality scans
After you create a data quality scan, you must run it to validate your data. For more information, see Run a data quality scan.
You can then monitor the scan jobs and view the results. For more information, see View the data quality scan results.
Attach data quality rules to catalog entries
You can declare data quality rules as aspects in Knowledge Catalog to make them searchable and reusable across scans.
BigQuery table
To define rules directly on a BigQuery table entry, select one of the following:
Console
In the Google Cloud console, go to the Knowledge Catalog Search page.
Search for and select the table that you want to attach rules to.
Click Data quality > Rules management > Create rules.
In the Create rules window, do the following:
- In the Choose create option menu, select Create new rule.
- In Choose columns, click Browse. Select the columns to apply rules for.
- In the Choose rule types menu, select the rule templates to use. Only the rule templates in the same location as the scan can be used.
- Click Edit rule, and then add rule specific parameters.
Click Save.
The Rules management page displays all entry rules.
REST
To attach rules to a specific column using the API, patch the @bigquery entry with a data-rules aspect targeted to that column:
gcurl -X PATCH "https://${DATAPLEX_API}/entryGroups/@bigquery/entries/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/tables/TABLE_ID?updateMask=aspects&aspect_keys=projects/dataplex-types/locations/global/aspectTypes/data-rules@Schema.COLUMN_NAME" \
--data @- << EOF
{
  "aspects": {
    "dataplex-types.global.data-rules@Schema.COLUMN_NAME": {
      "aspectType": "projects/dataplex-types/locations/global/aspectTypes/data-rules",
      "data": {
        "rules": [
          {
            "templateReference": "projects/PROJECT_ID/locations/LOCATION/entryGroups/ENTRY_GROUP_ID/entries/TEMPLATE_ID",
            "column": "COLUMN_NAME",
            "values": {
              "PARAMETER_NAME": {
                "value": "PARAMETER_VALUE"
              }
            }
          }
        ]
      }
    }
  }
}
EOF
Replace the following:
- PROJECT_ID: your project ID.
- LOCATION: the location for your rule template and aspect.
- DATASET_ID: the BigQuery dataset ID.
- TABLE_ID: the BigQuery table ID.
- COLUMN_NAME: the column to apply the rule to.
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: the ID of the rule template.
- PARAMETER_NAME: the name of an input parameter for the rule template.
- PARAMETER_VALUE: the value for the input parameter.
Terraform
To attach rules to a specific column, use the google_dataplex_entry resource:
resource "google_dataplex_entry" "bq_table_metadata" {
  project        = "PROJECT_ID"
  location       = "LOCATION"
  entry_id       = "bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID"
  entry_group_id = "@bigquery"
  aspects {
    aspect_key = "dataplex-types.global.data-rules@Schema.COLUMN_NAME"
    aspect {
      data = jsonencode({
        rules = [
          {
            name              = "RULE_NAME"
            dimension         = "VALIDITY"
            templateReference = "projects/PROJECT_ID/locations/LOCATION/entryGroups/ENTRY_GROUP_ID/entries/TEMPLATE_ID"
            values = {
              "PARAMETER_NAME" = {
                value = "PARAMETER_VALUE"
              }
            }
          }
        ]
      })
    }
  }
}
Replace the following:
- PROJECT_ID: your project ID.
- LOCATION: the location for your rule template and aspect.
- DATASET_ID: the BigQuery dataset ID.
- TABLE_ID: the BigQuery table ID.
- COLUMN_NAME: the column to apply the rule to.
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: the ID of the rule template.
- PARAMETER_NAME: the name of an input parameter for the rule template.
- PARAMETER_VALUE: the value for the input parameter.
- RULE_NAME: a unique name for the rule.
Business glossary terms
You can attach rules to business glossary terms. Rules attached to terms are automatically inherited by linked BigQuery tables.
Console
In the Google Cloud console, go to the Knowledge Catalog Glossaries page.
Search for and select the business glossary term.
In the Data quality rules section, click Add.
In the Create rules window, do the following:
- In the Choose create option menu, select Create new rule.
- In the Choose rule types menu, select the rule templates to use. Only the rule templates in the same location as the scan can be used.
- Click Edit rule, and then add rule-specific parameters.
- Click Save.
Attach the term to a BigQuery table or columns. For more information, see Manage links between terms and data assets.
REST
To attach rules to a term using the API, patch the
`@dataplex` entry for the glossary term:

```
curl -X PATCH "https://${DATAPLEX_API}/entryGroups/@dataplex/entries/projects/PROJECT_ID/locations/LOCATION/glossaries/GLOSSARY_ID/terms/TERM_ID?updateMask=aspects&aspect_keys=projects/dataplex-types/locations/global/aspectTypes/data-rules" \
  --data @- << EOF
{
  "aspects": {
    "dataplex-types.global.data-rules": {
      "aspectType": "projects/dataplex-types/locations/global/aspectTypes/data-rules",
      "data": {
        "rules": [
          {
            "templateReference": "projects/PROJECT_ID/locations/LOCATION/entryGroups/ENTRY_GROUP_ID/entries/TEMPLATE_ID",
            "column": "COLUMN_NAME",
            "values": {
              "PARAMETER_NAME": { "value": "PARAMETER_VALUE" }
            }
          }
        ]
      }
    }
  }
}
EOF
```

Replace the following:

- PROJECT_ID: your project ID.
- LOCATION: the location for your rule template and aspect.
- GLOSSARY_ID: the ID of the business glossary.
- TERM_ID: the ID of the glossary term.
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: the ID of the rule template.
- COLUMN_NAME: the column to apply the rule to.
- PARAMETER_NAME: the name of an input parameter for the rule template.
- PARAMETER_VALUE: the value for the input parameter.

Terraform
To attach rules to a business glossary term, use the
`google_dataplex_entry` resource:

```
resource "google_dataplex_entry" "glossary_term_rules" {
  project        = "PROJECT_ID"
  location       = "LOCATION"
  entry_id       = "projects/PROJECT_ID/locations/LOCATION/glossaries/GLOSSARY_ID/terms/TERM_ID"
  entry_group_id = "@dataplex"

  aspects {
    aspect_key = "dataplex-types.global.data-rules"
    aspect {
      data = jsonencode({
        rules = [
          {
            name              = "RULE_NAME"
            dimension         = "VALIDITY"
            templateReference = "projects/PROJECT_ID/locations/LOCATION/entryGroups/ENTRY_GROUP_ID/entries/TEMPLATE_ID"
            values = {
              "PARAMETER_NAME" = { value = "PARAMETER_VALUE" }
            }
          }
        ]
      })
    }
  }
}
```

Replace the following:

- PROJECT_ID: your project ID.
- LOCATION: the location for your rule template and aspect.
- GLOSSARY_ID: the ID of the business glossary.
- TERM_ID: the ID of the glossary term.
- ENTRY_GROUP_ID: the ID of the entry group that stores your rule template.
- TEMPLATE_ID: the ID of the rule template.
- PARAMETER_NAME: the name of an input parameter for the rule template.
- PARAMETER_VALUE: the value for the input parameter.
- RULE_NAME: a unique name for the rule.

Import rules from another table
You can import data quality rules from an existing BigQuery table entry to your current table.
Console
In the Google Cloud console, go to the Knowledge Catalog Search page.
Select the table you want to manage rules for.
Click Data quality > Rules management.
Click Create rules.
In the Create rules window, do the following:
- In the Choose create option menu, select Import rules from another table.
- In Table, click Browse. Search for and select the source table containing the rules that you want to copy.
- Select the rules. You can also edit the rules.
Click Save.
The Rules management tab displays the new rules.
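The same import can be approximated with the API: fetch the source entry, copy the rules out of its `data-rules` aspects, and attach them to the target entry (the REST steps in this section). The following Python sketch shows the extraction step, operating on the entry JSON returned by the API; the sample entry below is illustrative, with placeholder resource names:

```python
# Illustrative sketch: collect rules from every data-rules aspect on an
# entry, including column-scoped aspects keyed like
# 'dataplex-types.global.data-rules@Schema.<column>'.
DATA_RULES_KEY = "dataplex-types.global.data-rules"

def extract_rules(entry: dict) -> list:
    """Return the combined rules list from all data-rules aspects."""
    rules = []
    for key, aspect in entry.get("aspects", {}).items():
        if key == DATA_RULES_KEY or key.startswith(DATA_RULES_KEY + "@"):
            rules.extend(aspect.get("data", {}).get("rules", []))
    return rules

# Illustrative entry, shaped like a ?view=FULL response.
source_entry = {
    "aspects": {
        "dataplex-types.global.data-rules@Schema.email": {
            "data": {
                "rules": [
                    {
                        "templateReference": "projects/my-project/locations/us-central1/entryGroups/my-group/entries/valid-email",
                        "column": "email",
                    }
                ]
            }
        }
    }
}
print(len(extract_rules(source_entry)))  # → 1
```

The extracted list can then be written to the target entry with a PATCH request like the ones shown earlier in this document.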
REST
To import rules, you must fetch the
`data-rules` aspect from the source entry and apply it to the target entry.

1. Get the `data-rules` aspect from the source entry:

   ```
   curl "https://dataplex.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/entryGroups/ENTRY_GROUP_ID/entries/SOURCE_ENTRY_ID?view=FULL"
   ```

2. Extract the `rules` list from the `dataplex-types.global.data-rules` aspect.
3. Attach the rules to a target entry.

Replace the following:

- PROJECT_ID: your project ID.
- LOCATION: the location of the source entry.
- ENTRY_GROUP_ID: the ID of the entry group for the source entry.
- SOURCE_ENTRY_ID: the ID of the source entry.

View data quality rules for a BigQuery table
You can view all rules applicable to a table, including rules attached directly and rules inherited from linked glossary terms.
Console
In the Google Cloud console, go to the Knowledge Catalog Search page.
Search for and select the table.
Click Data quality > Rules management to view all rules.
Create a data quality scan using rules from catalog
You can selectively run rules declared on catalog entries in a scan.
Console
In the Google Cloud console, go to the Data profiling & quality page.
Follow the steps to create a data quality scan, but update the following:
- In the Define scan window, do the following:
- In the Credential type menu, select Service account, and then enter a service account. A service account is required when you use rule templates.
- For Rule type, select Create with entry based rule.
- In the Data quality rules section, rules applicable to the table entry are displayed, including rules inherited from linked glossary terms. To filter the rules, do the following:
- In the Filter items field, enter a filter to select which rules to run.
- Click Apply. Filtered rules are displayed.
- Proceed with the remaining scan configuration.
- Click Create to only create the scan, or click Run scan to create and immediately run the scan.
Subsequent runs evaluate rules attached to the entry or inherited from glossary terms as observed at the time of execution.
REST
To run rules from catalog entries, set
`enableCatalogBasedRules` to `true`. You can also specify a filter. To create the scan, use the following code:

```
curl -X POST "https://${DATAPLEX_API}/dataScans?data_scan_id=DATASCAN_ID" \
  --data @- << EOF
{
  "type": "DATA_QUALITY",
  "data": {
    "resource": "//bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID"
  },
  "executionIdentity": {
    "serviceAccount": {
      "email": "SERVICE_ACCOUNT_EMAIL"
    }
  },
  "executionSpec": {
    "trigger": {
      "onDemand": {}
    }
  },
  "dataQualitySpec": {
    "enableCatalogBasedRules": true,
    "filter": "FILTER_CONDITION"
  }
}
EOF
```

Replace the following:

- PROJECT_ID: your project ID.
- LOCATION: the location for your data scan.
- DATASCAN_ID: the ID of the data quality scan.
- DATASET_ID: the BigQuery dataset ID.
- TABLE_ID: the BigQuery table ID.
- SERVICE_ACCOUNT_EMAIL: the email ID of the service account to run the scan.
- FILTER_CONDITION: an AIP-160 filter string to selectively run rules (for example, attributes.environment = \"prod\").

Terraform
To run rules from catalog entries, use the
`google_dataplex_datascan` resource:

```
resource "google_dataplex_datascan" "scan" {
  data_scan_id = "DATASCAN_ID"
  location     = "LOCATION"
  project      = "PROJECT_ID"

  data {
    resource = "//bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID"
  }

  execution_spec {
    service_account = "SERVICE_ACCOUNT_EMAIL"
    trigger {
      on_demand {}
    }
  }

  data_quality_spec {
    enable_catalog_based_rules = true
    filter                     = "FILTER_CONDITION"
  }
}
```

Replace the following:

- PROJECT_ID: your project ID.
- LOCATION: the location for your data scan.
- DATASCAN_ID: the ID of the data quality scan.
- DATASET_ID: the BigQuery dataset ID.
- TABLE_ID: the BigQuery table ID.
- SERVICE_ACCOUNT_EMAIL: the email ID of the service account to run the scan.
- FILTER_CONDITION: an AIP-160 filter string to selectively run rules.

Pricing
Using Knowledge Catalog rule reusability involves the following pricing elements:
- BigQuery charges: BigQuery charges for the job that runs in the scan project. For more information, see BigQuery pricing.
- Knowledge Catalog data quality scan: There's no charge for processing since BigQuery charges for the job.
- Metadata storage: `data-rules` aspect and `data-quality-rule-template` aspect storage is charged as metadata storage. For more information, see Knowledge Catalog pricing.

What's next
- Learn more about the auto data quality overview.
- Learn how to use auto data quality scans.
- View a complete list of system rule templates.
- Learn about metadata management.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026/04/17 (UTC).