Create custom constraints
Google Cloud Organization Policy gives you centralized, programmatic control over your organization's resources. As the organization policy administrator, you can define an organization policy, which is a set of restrictions called constraints that apply to Google Cloud resources and descendants of those resources in the Google Cloud resource hierarchy. You can enforce organization policies at the organization, folder, or project level.
Organization Policy provides predefined constraints for various Google Cloud services. However, if you want more granular, customizable control over the specific fields that are restricted in your organization policies, you can also create custom constraints and use those custom constraints in an organization policy.
Benefits
You can use a custom organization policy to allow or deny specific operations on Serverless for Apache Spark batches, sessions, and session templates. For example, if a request to create a batch workload fails to satisfy custom constraint validation as set by your organization policy, the request will fail, and an error will be returned to the caller.
Policy inheritance
By default, organization policies are inherited by the descendants of the resources on which you enforce the policy. For example, if you enforce a policy on a folder, Google Cloud enforces the policy on all projects in the folder. To learn more about this behavior and how to change it, refer to Hierarchy evaluation rules.
Pricing
The Organization Policy Service, including predefined and custom constraints, is offered at no charge.
Before you begin
- Set up your project:
  - Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  - In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
    - Select a project: Selecting a project doesn't require a specific IAM role. You can select any project that you've been granted a role on.
    - Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
  - Verify that billing is enabled for your Google Cloud project.
  - Enable the Serverless for Apache Spark API. To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
  - Install the Google Cloud CLI. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
  - To initialize the gcloud CLI, run the following command:
    gcloud init
- Ensure that you know your organization ID.
Required roles
To get the permissions that you need to manage organization policies, ask your administrator to grant you the Organization Policy Administrator (roles/orgpolicy.policyAdmin) IAM role on the organization resource.
For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to manage organization policies. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to manage organization policies:
- orgpolicy.constraints.list
- orgpolicy.policies.create
- orgpolicy.policies.delete
- orgpolicy.policies.list
- orgpolicy.policies.update
- orgpolicy.policy.get
- orgpolicy.policy.set
You might also be able to get these permissions with custom roles or other predefined roles.
Create a custom constraint
A custom constraint is defined in a YAML file by the resources, methods, conditions, and actions it applies to. Serverless for Apache Spark supports custom constraints that are applied to the CREATE method of batch and session resources, and to the CREATE and UPDATE methods of session template resources.
For more information about how to create a custom constraint, see Creating and managing custom organization policies.
Create a custom constraint for a batch resource
To create a YAML file for a Serverless for Apache Spark custom constraint for a batch resource, use the following format:
name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: CONDITION
actionType: ACTION
displayName: DISPLAY_NAME
description: DESCRIPTION
Replace the following:
- ORGANIZATION_ID: your organization ID, such as 123456789.
- CONSTRAINT_NAME: the name you want for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.batchMustHaveSpecifiedCategoryLabel. The maximum length of this field is 70 characters, not counting the prefix, for example, organizations/123456789/customConstraints/custom..
- CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Dataproc Serverless constraints on resources and operations. Sample condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']).
- ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.
- DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce batch 'category' label requirement". This field has a maximum length of 200 characters.
- DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value".
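For example, substituting the sample values listed above (the organization ID 123456789 is a placeholder) produces a complete constraint file:

```yaml
# Example batch custom constraint assembled from the sample values above.
# The organization ID 123456789 is a placeholder; substitute your own.
name: organizations/123456789/customConstraints/custom.batchMustHaveSpecifiedCategoryLabel
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service'])
actionType: ALLOW
displayName: Enforce batch 'category' label requirement
description: Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value
```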
Create a custom constraint for a session resource
To create a YAML file for a Serverless for Apache Spark custom constraint for a session resource, use the following format:
name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: CONDITION
actionType: ACTION
displayName: DISPLAY_NAME
description: DESCRIPTION
Replace the following:
- ORGANIZATION_ID: your organization ID, such as 123456789.
- CONSTRAINT_NAME: the name you want for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.SessionNameMustStartWithTeamName. The maximum length of this field is 70 characters, not counting the prefix, for example, organizations/123456789/customConstraints/custom..
- CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Dataproc Serverless constraints on resources and operations. Sample condition: resource.name.startsWith("dataproc").
- ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.
- DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce session to have a ttl < 2 hours". This field has a maximum length of 200 characters.
- DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow session creation if it sets an allowable TTL".
Create a custom constraint for a session template resource
To create a YAML file for a Serverless for Apache Spark custom constraint for a session template resource, use the following format:
name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: CONDITION
actionType: ACTION
displayName: DISPLAY_NAME
description: DESCRIPTION
Replace the following:
- ORGANIZATION_ID: your organization ID, such as 123456789.
- CONSTRAINT_NAME: the name you want for your new custom constraint. A custom constraint must start with custom., and can only include uppercase letters, lowercase letters, or numbers, for example, custom.SessionTemplateNameMustStartWithTeamName. The maximum length of this field is 70 characters, not counting the prefix, for example, organizations/123456789/customConstraints/custom..
- CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Constraints on resources and operations. Sample condition: resource.name.startsWith("dataproc").
- ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.
- DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce session template to have a ttl < 2 hours". This field has a maximum length of 200 characters.
- DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow session template creation if it sets an allowable TTL".
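As an illustration, the following sketch combines these fields into a session template constraint that is checked on both create and update. The condition and display name are hypothetical examples, not values taken from a real policy:

```yaml
# Hypothetical example; the condition and naming convention are illustrative only.
name: organizations/123456789/customConstraints/custom.SessionTemplateNameMustStartWithTeamName
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: resource.name.contains("/sessionTemplates/team-")
actionType: ALLOW
displayName: Enforce session template name prefix
description: Only allow session template creation or update if its template ID starts with "team-".
```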
Set up a custom constraint
Console
To create a custom constraint, do the following:
- In the Google Cloud console, go to the Organization policies page.
- From the project picker, select the project that you want to set the organization policy for.
- Click Custom constraint.
- In the Display name box, enter a human-readable name for the constraint. This name is used in error messages and can be used for identification and debugging. Don't use PII or sensitive data in display names, because this name could be exposed in error messages. This field can contain up to 200 characters.
- In the Constraint ID box, enter the name that you want for your new custom constraint. A custom constraint can only contain letters (including uppercase and lowercase) or numbers, for example, custom.disableGkeAutoUpgrade. This field can contain up to 70 characters, not counting the prefix (custom.), for example, organizations/123456789/customConstraints/custom. Don't include PII or sensitive data in your constraint ID, because it could be exposed in error messages.
- In the Description box, enter a human-readable description of the constraint. This description is used as an error message when the policy is violated. Include details about why the policy violation occurred and how to resolve it. Don't include PII or sensitive data in your description, because it could be exposed in error messages. This field can contain up to 2000 characters.
- In the Resource type box, select the name of the Google Cloud REST resource containing the object and field that you want to restrict, for example, container.googleapis.com/NodePool. Most resource types support up to 20 custom constraints. If you attempt to create more custom constraints, the operation fails.
- Under Enforcement method, select whether to enforce the constraint on a REST CREATE method or on both CREATE and UPDATE methods. Not all Google Cloud services support both methods. To see supported methods for each service, find the service in Services that support custom constraints. If you enforce the constraint with the UPDATE method on a resource that violates the constraint, changes to that resource are blocked by the organization policy unless the change resolves the violation.
- To define a condition, click Edit condition.
- In the Add condition panel, create a CEL condition that refers to a supported service resource, for example, resource.management.autoUpgrade == false. This field can contain up to 1000 characters. For details about CEL usage, see Common Expression Language. For more information about the service resources you can use in your custom constraints, see Custom constraint supported services.
- Click Save.
- Under Action, select whether to allow or deny the evaluated method if the condition is met. The deny action means that the operation to create or update the resource is blocked if the condition evaluates to true. The allow action means that the operation is permitted only if the condition evaluates to true; every other case except those explicitly listed in the condition is blocked.
- Click Create constraint.
When you have entered a value into each field, the equivalent YAML configuration for this custom constraint appears on the right.
gcloud
- To create a custom constraint, create a YAML file using the following format:

  name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
  resourceTypes:
  - RESOURCE_NAME
  methodTypes:
  - CREATE
  condition: "CONDITION"
  actionType: ACTION
  displayName: DISPLAY_NAME
  description: DESCRIPTION

  Replace the following:
  - ORGANIZATION_ID: your organization ID, such as 123456789.
  - CONSTRAINT_NAME: the name that you want for your new custom constraint. A custom constraint can only contain letters (including uppercase and lowercase) or numbers, for example, custom.batchMustHaveSpecifiedCategoryLabel. This field can contain up to 70 characters.
  - RESOURCE_NAME: the fully qualified name of the Google Cloud resource containing the object and field that you want to restrict. For example, dataproc.googleapis.com/Batch.
  - CONDITION: a CEL condition that is written against a representation of a supported service resource. This field can contain up to 1000 characters. For example, ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']). For more information about the resources available to write conditions against, see Supported resources.
  - ACTION: the action to take if the condition is met. This can be either ALLOW or DENY. The allow action means that if the condition evaluates to true, the operation to create or update the resource is permitted; every other case except those explicitly listed in the condition is blocked.
  - DISPLAY_NAME: a human-friendly name for the constraint. This field can contain up to 200 characters.
  - DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field can contain up to 2000 characters.
- After you have created the YAML file for a new custom constraint, you must set it up to make it available for organization policies in your organization. To set up a custom constraint, use the gcloud org-policies set-custom-constraint command:

  gcloud org-policies set-custom-constraint CONSTRAINT_PATH

  Replace CONSTRAINT_PATH with the full path to your custom constraint file. For example, /home/user/customconstraint.yaml. After this operation is complete, your custom constraints are available as organization policies in your list of Google Cloud organization policies.
- To verify that the custom constraint exists, use the gcloud org-policies list-custom-constraints command:

  gcloud org-policies list-custom-constraints --organization=ORGANIZATION_ID

  Replace ORGANIZATION_ID with the ID of your organization resource. For more information, see Viewing organization policies.
Enforce a custom constraint
You can enforce a constraint by creating an organization policy that references it, and then applying that organization policy to a Google Cloud resource.
Console
- In the Google Cloud console, go to the Organization policies page.
- From the project picker, select the project that you want to set the organization policy for.
- From the list on the Organization policies page, select your constraint to view the Policy details page for that constraint.
- To configure the organization policy for this resource, click Manage policy.
- On the Edit policy page, select Override parent's policy.
- Click Add a rule.
- In the Enforcement section, select whether this organization policy is enforced or not.
- Optional: To make the organization policy conditional on a tag, click Add condition. Note that if you add a conditional rule to an organization policy, you must add at least one unconditional rule or the policy cannot be saved. For more information, see Scope organization policies with tags.
- Click Test changes to simulate the effect of the organization policy. For more information, see Test organization policy changes with Policy Simulator.
- To enforce the organization policy in dry-run mode, click Set dry run policy. For more information, see Test organization policies.
- After you verify that the organization policy in dry-run mode works as intended, set the live policy by clicking Set policy.
gcloud
- To create an organization policy with boolean rules, create a policy YAML file that references the constraint:

  name: projects/PROJECT_ID/policies/CONSTRAINT_NAME
  spec:
    rules:
    - enforce: true
  dryRunSpec:
    rules:
    - enforce: true

  Replace the following:
  - PROJECT_ID: the project that you want to enforce your constraint on.
  - CONSTRAINT_NAME: the name you defined for your custom constraint. For example, custom.batchMustHaveSpecifiedCategoryLabel.
- To enforce the organization policy in dry-run mode, run the following command with the dryRunSpec flag:

  gcloud org-policies set-policy POLICY_PATH --update-mask=dryRunSpec

  Replace POLICY_PATH with the full path to your organization policy YAML file. The policy requires up to 15 minutes to take effect.
- After you verify that the organization policy in dry-run mode works as intended, set the live policy with the org-policies set-policy command and the spec flag:

  gcloud org-policies set-policy POLICY_PATH --update-mask=spec

  Replace POLICY_PATH with the full path to your organization policy YAML file. The policy requires up to 15 minutes to take effect.
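For instance, a policy file that enforces the sample batch constraint from earlier in this page on a hypothetical project ID my-project would look like the following:

```yaml
# Hypothetical project ID; substitute your own project and constraint name.
name: projects/my-project/policies/custom.batchMustHaveSpecifiedCategoryLabel
spec:
  rules:
  - enforce: true
dryRunSpec:
  rules:
  - enforce: true
```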
Test the custom constraint
This section describes how to test custom constraints for batch, session, and session template resources.
Test the custom constraint for a batch resource
The following batch creation example assumes a custom constraint has been created and enforced on batch creation to require that the batch has a "category" label attached with a value of "retail", "ads", or "service":
("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service'])
gcloud dataproc batches submit spark \
    --region us-west1 \
    --jars file:///usr/lib/spark/examples/jars/spark-examples.jar \
    --class org.apache.spark.examples.SparkPi \
    --network default \
    --labels category=foo \
    -- 100
Sample output:
Operation denied by custom org policies: ["customConstraints/custom.batchMustHaveSpecifiedCategoryLabel": "Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value"]
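The denial in the sample output follows directly from the constraint's CEL condition. The following local sketch mirrors that logic in Python; it is illustrative only, since the real evaluation happens inside Organization Policy, not in your client:

```python
# Mirrors the CEL condition:
#   ("category" in resource.labels) &&
#   (resource.labels['category'] in ['retail', 'ads', 'service'])
ALLOWED_CATEGORIES = {'retail', 'ads', 'service'}

def category_condition(labels: dict) -> bool:
    """Return True when the batch labels satisfy the sample condition."""
    return 'category' in labels and labels['category'] in ALLOWED_CATEGORIES

# With actionType: ALLOW, batch creation is permitted only when the
# condition is True, so --labels category=foo is rejected.
print(category_condition({'category': 'foo'}))     # False: request denied
print(category_condition({'category': 'retail'}))  # True: request allowed
```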
Test the custom constraint for a session resource
The following session creation example assumes a custom constraint has been
created and enforced on session creation to require that the session has a
name starting with orgName.
gcloud beta dataproc sessions create spark test-session \
    --location us-central1
Sample output:
Operation denied by custom org policy:
["customConstraints/custom.denySessionNameNotStartingWithOrgName": "Deny session
creation if its name does not start with 'orgName'"]
Test the custom constraint for a session template resource
The following session template creation example assumes a custom constraint has
been created and enforced on session template creation and update to require that
the session template has a name starting with orgName.
gcloud beta dataproc session-templates import test-session-template \
    --source=saved-template.yaml
Sample output:
Operation denied by custom org policy:
["customConstraints/custom.denySessionTemplateNameNotStartingWithOrgName":
"Deny session template creation or update if its name does not start with
'orgName'"]
Constraints on resources and operations
This section lists the Serverless for Apache Spark resource attributes that are available for use in custom constraints on batch, session, and session template resources.
Supported batch constraints
The following Serverless for Apache Spark custom constraints are available to use when you create (submit) a batch workload:
General
- resource.labels
PySparkBatch
- resource.pysparkBatch.mainPythonFileUri
- resource.pysparkBatch.args
- resource.pysparkBatch.pythonFileUris
- resource.pysparkBatch.jarFileUris
- resource.pysparkBatch.fileUris
- resource.pysparkBatch.archiveUris
SparkBatch
- resource.sparkBatch.mainJarFileUri
- resource.sparkBatch.mainClass
- resource.sparkBatch.args
- resource.sparkBatch.jarFileUris
- resource.sparkBatch.fileUris
- resource.sparkBatch.archiveUris
SparkRBatch
- resource.sparkRBatch.mainRFileUri
- resource.sparkRBatch.args
- resource.sparkRBatch.fileUris
- resource.sparkRBatch.archiveUris
SparkSqlBatch
- resource.sparkSqlBatch.queryFileUri
- resource.sparkSqlBatch.queryVariables
- resource.sparkSqlBatch.jarFileUris
RuntimeConfig
- resource.runtimeConfig.version
- resource.runtimeConfig.containerImage
- resource.runtimeConfig.properties
- resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
- resource.runtimeConfig.autotuningConfig.scenarios
- resource.runtimeConfig.cohort
ExecutionConfig
- resource.environmentConfig.executionConfig.serviceAccount
- resource.environmentConfig.executionConfig.networkUri
- resource.environmentConfig.executionConfig.subnetworkUri
- resource.environmentConfig.executionConfig.networkTags
- resource.environmentConfig.executionConfig.kmsKey
- resource.environmentConfig.executionConfig.idleTtl
- resource.environmentConfig.executionConfig.ttl
- resource.environmentConfig.executionConfig.stagingBucket
- resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType
PeripheralsConfig
- resource.environmentConfig.peripheralsConfig.metastoreService
- resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster
Supported session constraints
The following session attributes are available to use when you create custom constraints on serverless sessions:
General
- resource.name
- resource.sparkConnectSession
- resource.user
- resource.sessionTemplate
JupyterSession
- resource.jupyterSession.kernel
- resource.jupyterSession.displayName
RuntimeConfig
- resource.runtimeConfig.version
- resource.runtimeConfig.containerImage
- resource.runtimeConfig.properties
- resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
- resource.runtimeConfig.autotuningConfig.scenarios
- resource.runtimeConfig.cohort
ExecutionConfig
- resource.environmentConfig.executionConfig.serviceAccount
- resource.environmentConfig.executionConfig.networkUri
- resource.environmentConfig.executionConfig.subnetworkUri
- resource.environmentConfig.executionConfig.networkTags
- resource.environmentConfig.executionConfig.kmsKey
- resource.environmentConfig.executionConfig.idleTtl
- resource.environmentConfig.executionConfig.ttl
- resource.environmentConfig.executionConfig.stagingBucket
- resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType
PeripheralsConfig
- resource.environmentConfig.peripheralsConfig.metastoreService
- resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster
Supported session template constraints
The following session template attributes are available to use when you create custom constraints on serverless session templates:
General
- resource.name
- resource.description
- resource.sparkConnectSession
JupyterSession
- resource.jupyterSession.kernel
- resource.jupyterSession.displayName
RuntimeConfig
- resource.runtimeConfig.version
- resource.runtimeConfig.containerImage
- resource.runtimeConfig.properties
- resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
- resource.runtimeConfig.autotuningConfig.scenarios
- resource.runtimeConfig.cohort
ExecutionConfig
- resource.environmentConfig.executionConfig.serviceAccount
- resource.environmentConfig.executionConfig.networkUri
- resource.environmentConfig.executionConfig.subnetworkUri
- resource.environmentConfig.executionConfig.networkTags
- resource.environmentConfig.executionConfig.kmsKey
- resource.environmentConfig.executionConfig.idleTtl
- resource.environmentConfig.executionConfig.ttl
- resource.environmentConfig.executionConfig.stagingBucket
- resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType
PeripheralsConfig
- resource.environmentConfig.peripheralsConfig.metastoreService
- resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster
Example custom constraints for common use cases
This section includes example custom constraints for common use cases for batch and session resources.
Example custom constraints for a batch resource
The following table provides examples of Serverless for Apache Spark batch custom constraints:
| Description | Constraint syntax |
|---|---|
| Batch must attach a "category" label with allowed values. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustHaveSpecifiedCategoryLabel resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']) actionType: ALLOW displayName: Enforce batch "category" label requirement. description: Only allow batch creation if it attaches a "category" label with an allowable value. |
| Batch must set an allowed runtime version. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustUseAllowedVersion resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"]) actionType: ALLOW displayName: Enforce batch runtime version. description: Only allow batch creation if it sets an allowable runtime version. |
| Must use SparkSQL. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustUseSparkSQL resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (has(resource.sparkSqlBatch)) actionType: ALLOW displayName: Enforce batch only use SparkSQL Batch. description: Only allow creation of SparkSQL Batch. |
| Batch must set TTL less than 2 hours. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustSetLessThan2hTtl resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h')) actionType: ALLOW displayName: Enforce batch TTL. description: Only allow batch creation if it sets an allowable TTL. |
| Batch can't set more than 20 Spark initial executors. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchInitialExecutorMax20 resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances'])>20) actionType: DENY displayName: Enforce maximum number of batch Spark executor instances. description: Deny batch creation if it specifies more than 20 Spark executor instances. |
| Batch can't set more than 20 Spark dynamic allocation initial executors. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchDynamicAllocationInitialExecutorMax20 resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors'])>20) actionType: DENY displayName: Enforce maximum number of batch dynamic allocation initial executors. description: Deny batch creation if it specifies more than 20 Spark dynamic allocation initial executors. |
| Batch must not allow more than 20 dynamic allocation executors. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchDynamicAllocationMaxExecutorMax20 resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: (resource.runtimeConfig.properties['spark.dynamicAllocation.enabled']=='false') || (('spark.dynamicAllocation.maxExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.maxExecutors'])<=20)) actionType: ALLOW displayName: Enforce batch maximum number of dynamic allocation executors. description: Only allow batch creation if dynamic allocation is disabled or the maximum number of dynamic allocation executors is set to less than or equal to 20. |
| Batch must set the KMS key to an allowed pattern. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchKmsPattern resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$') actionType: ALLOW displayName: Enforce batch KMS Key pattern. description: Only allow batch creation if it sets the KMS key to an allowable pattern. |
| Batch must set the staging bucket prefix to an allowed value. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchStagingBucketPrefix resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX) actionType: ALLOW displayName: Enforce batch staging bucket prefix. description: Only allow batch creation if it sets the staging bucket prefix to an allowable value. |
| Batch executor memory setting must end with the suffix m and be less than 20000m. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.batchExecutorMemoryMax resourceTypes: - dataproc.googleapis.com/Batch methodTypes: - CREATE condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0])<20000) actionType: ALLOW displayName: Enforce batch executor maximum memory. description: Only allow batch creation if the executor memory setting ends with a suffix 'm' and is less than 20000 m. |
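The executor-memory constraint in the last row combines a suffix check with an integer comparison. The following Python sketch mirrors that CEL logic locally; it is illustrative only, since the real condition is evaluated by Organization Policy:

```python
# Mirrors the CEL condition:
#   'spark.executor.memory' in resource.runtimeConfig.properties
#   && value endsWith 'm'
#   && int(value before the 'm') < 20000
def memory_condition(properties: dict) -> bool:
    """Return True when spark.executor.memory is set in megabytes below 20000m."""
    mem = properties.get('spark.executor.memory')
    return (mem is not None
            and mem.endswith('m')
            and int(mem.split('m')[0]) < 20000)

print(memory_condition({'spark.executor.memory': '4096m'}))   # True: allowed
print(memory_condition({'spark.executor.memory': '20000m'}))  # False: denied
print(memory_condition({'spark.executor.memory': '4g'}))      # False: wrong suffix
```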
Example custom constraints for a session resource
The following table provides examples of Serverless for Apache Spark session custom constraints:
| Description | Constraint syntax |
|---|---|
| Session must set sessionTemplate to empty string. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustBeEmpty resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: resource.sessionTemplate == "" actionType: ALLOW displayName: Enforce empty session templates. description: Only allow session creation if session template is empty string. |
| sessionTemplate must be equal to approved template IDs. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateIdMustBeApproved resourceTypes: - dataproc.googleapis.com/Session methodTypes: - CREATE condition: resource.sessionTemplate.startsWith("https://www.googleapis.com/compute/v1/projects/") && resource.sessionTemplate.contains("/locations/") && resource.sessionTemplate.contains("/sessionTemplates/") && ( resource.sessionTemplate.endsWith("/1") || resource.sessionTemplate.endsWith("/2") || resource.sessionTemplate.endsWith("/13") ) actionType: ALLOW displayName: Enforce templateId must be 1, 2, or 13. description: Only allow session creation if session template ID is in the approved list, that is, 1, 2 and 13. |
| Session must use end user credentials to authenticate the workload. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.AllowEUCSessions
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType == "END_USER_CREDENTIALS"
actionType: ALLOW
displayName: Require end user credential authenticated sessions.
description: Allow session creation only if the workload is authenticated using end-user credentials. |
| Session must set an allowed runtime version. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionMustUseAllowedVersion
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"])
actionType: ALLOW
displayName: Enforce session runtime version.
description: Only allow session creation if it sets an allowable runtime version. |
| Session must set a TTL of 2 hours or less. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionMustSetLessThan2hTtl
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h'))
actionType: ALLOW
displayName: Enforce session TTL.
description: Only allow session creation if it sets an allowable TTL. |
| Session can't set more than 20 Spark initial executors. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances']) > 20)
actionType: DENY
displayName: Enforce maximum number of session Spark executor instances.
description: Deny session creation if it specifies more than 20 Spark executor instances. |
| Session can't set more than 20 Spark dynamic allocation initial executors. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionDynamicAllocationInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors']) > 20)
actionType: DENY
displayName: Enforce maximum number of session dynamic allocation initial executors.
description: Deny session creation if it specifies more than 20 Spark dynamic allocation initial executors. |
| Session must set the KMS key to an allowed pattern. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionKmsPattern
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$')
actionType: ALLOW
displayName: Enforce session KMS Key pattern.
description: Only allow session creation if it sets the KMS key to an allowable pattern. |
| Session must set the staging bucket prefix to an allowed value. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionStagingBucketPrefix
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith( |
| Session executor memory setting must end with the suffix 'm' and be less than 20000 m. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionExecutorMemoryMax
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0]) < 20000)
actionType: ALLOW
displayName: Enforce session executor maximum memory.
description: Only allow session creation if the executor memory setting ends with the suffix 'm' and is less than 20000 m. |
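Note the difference in `actionType` across the rows above: with `ALLOW`, a request must satisfy the condition to succeed, whereas with `DENY`, a request is rejected when the condition matches. A hypothetical Python sketch of the `DENY` executor-count example makes this explicit; the function names are illustrative and not part of any Google Cloud API:

```python
# Hypothetical illustration of DENY semantics -- the real evaluation is
# performed by Organization Policy, not by your application code.
def deny_condition_matches(properties):
    """Mirror of the DENY condition: True when more than 20 executors are requested."""
    if 'spark.executor.instances' not in properties:
        # The condition doesn't match, so the request is not denied.
        return False
    return int(properties['spark.executor.instances']) > 20

def request_allowed(properties):
    # With actionType: DENY, a request is blocked only when the condition matches.
    return not deny_condition_matches(properties)

print(request_allowed({'spark.executor.instances': '10'}))  # True
print(request_allowed({'spark.executor.instances': '21'}))  # False
print(request_allowed({}))                                  # True (property unset)
```

This is why the `DENY` examples leave requests that omit the property untouched: an unset property means the condition cannot match, so no denial occurs.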
Example custom constraints for a session template resource
The following table provides examples of Serverless for Apache Spark session template custom constraints:
| Description | Constraint syntax |
|---|---|
| Session template name must end with org-name. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.denySessionTemplateNameNotEndingWithOrgName
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: '!resource.name.endsWith(''org-name'')'
actionType: DENY
displayName: DenySessionTemplateNameNotEndingWithOrgName
description: Deny session template creation or update if its name does not end with 'org-name'. |
| Session template must set an allowed runtime version. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustUseAllowedVersion
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"])
actionType: ALLOW
displayName: Enforce session template runtime version.
description: Only allow session template creation or update if it sets an allowable runtime version. |
| Session template must set a TTL of 2 hours or less. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustSetLessThan2hTtl
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h'))
actionType: ALLOW
displayName: Enforce session template TTL.
description: Only allow session template creation or update if it sets an allowable TTL. |
| Session template can't set more than 20 Spark initial executors. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances']) > 20)
actionType: DENY
displayName: Enforce maximum number of session Spark executor instances.
description: Deny session template creation or update if it specifies more than 20 Spark executor instances. |
| Session template can't set more than 20 Spark dynamic allocation initial executors. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateDynamicAllocationInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors']) > 20)
actionType: DENY
displayName: Enforce maximum number of session dynamic allocation initial executors.
description: Deny session template creation or update if it specifies more than 20 Spark dynamic allocation initial executors. |
| Session template must set the KMS key to an allowed pattern. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateKmsPattern
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$')
actionType: ALLOW
displayName: Enforce session KMS Key pattern.
description: Only allow session template creation or update if it sets the KMS key to an allowable pattern. |
| Session template must set the staging bucket prefix to an allowed value. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateStagingBucketPrefix
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith( |
| Session template executor memory setting must end with the suffix 'm' and be less than 20000 m. |
name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateExecutorMemoryMax
resourceTypes:
- dataproc.googleapis.com/SessionTemplate
methodTypes:
- CREATE
- UPDATE
condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0]) < 20000)
actionType: ALLOW
displayName: Enforce session template executor maximum memory.
description: Only allow session template creation or update if the executor memory setting ends with the suffix 'm' and is less than 20000 m. |
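After you create a custom constraint (for example, by saving its YAML to a file and running `gcloud org-policies set-custom-constraint CONSTRAINT_FILE`), the constraint has no effect until you enforce it with an organization policy that references it. The following is a minimal sketch of such a policy file; it assumes the `sessionMustSetLessThan2hTtl` constraint from the session table above, and `ORGANIZATION_ID` is a placeholder:

```yaml
# policy.yaml -- minimal organization policy that enforces the custom
# constraint. The constraint name after "policies/" must match the
# custom constraint you created.
name: organizations/ORGANIZATION_ID/policies/custom.sessionMustSetLessThan2hTtl
spec:
  rules:
  - enforce: true
```

You would then apply the policy with `gcloud org-policies set-policy policy.yaml`. See Creating and managing organization policies for the authoritative procedure and for options such as dry-run policies.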
What's next
- For more information about organization policies, see Introduction to the Organization Policy Service.
- Learn more about how to create and manage organization policies.
- See the full list of predefined organization policy constraints.