Class LlmPolicy (0.3.0)

LlmPolicy(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification.

Attributes

Name Description
max_conversation_messages int
Optional. When checking this policy, consider the last 'n' messages in the conversation. When not set a default value of 10 will be used.
model_settings google.cloud.ces_v1beta.types.ModelSettings
Optional. Model settings.
prompt str
Required. Policy prompt.
policy_scope google.cloud.ces_v1beta.types.Guardrail.LlmPolicy.PolicyScope
Required. Defines when to apply the policy check during the conversation. If set to POLICY_SCOPE_UNSPECIFIED, the policy will be applied to the user input. When applying the policy to the agent response, additional latency will be introduced before the agent can respond.
fail_open bool
Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail.
allow_short_utterance bool
Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped.

Classes

PolicyScope

PolicyScope(value)

Defines when to apply the policy check during the conversation.