- Resource: Guardrail
- JSON representation
- Guardrail.ContentFilter
- Guardrail.ContentFilter.MatchType
- Guardrail.LlmPromptSecurity
- Guardrail.LlmPromptSecurity.DefaultSecuritySettings
- Guardrail.LlmPolicy
- Guardrail.LlmPolicy.PolicyScope
- Guardrail.ModelSafety
- Guardrail.ModelSafety.SafetySetting
- Guardrail.ModelSafety.HarmCategory
- Guardrail.ModelSafety.HarmBlockThreshold
- Guardrail.CodeCallback
- TriggerAction
- TriggerAction.RespondImmediately
- TriggerAction.Response
- TriggerAction.TransferAgent
- TriggerAction.GenerativeAnswer
- Methods
Resource: Guardrail
A Guardrail contains a list of checks and balances that keep agents safe and secure.
JSON representation:

```json
{
  "name": string,
  "displayName": string,
  "description": string,
  "enabled": boolean,
  "action": {
    object (TriggerAction)
  },
  "createTime": string,
  "updateTime": string,
  "etag": string,

  // Union field guardrail_type can be only one of the following:
  "contentFilter": {
    object (Guardrail.ContentFilter)
  },
  "llmPromptSecurity": {
    object (Guardrail.LlmPromptSecurity)
  },
  "llmPolicy": {
    object (Guardrail.LlmPolicy)
  },
  "modelSafety": {
    object (Guardrail.ModelSafety)
  },
  "codeCallback": {
    object (Guardrail.CodeCallback)
  }
  // End of list of possible types for union field guardrail_type.
}
```
| Fields | |
|---|---|
| `name` | Identifier. The unique identifier of the guardrail. Format: |
| `displayName` | Required. Display name of the guardrail. |
| `description` | Optional. Description of the guardrail. |
| `enabled` | Optional. Whether the guardrail is enabled. |
| `action` | Optional. Action to take when the guardrail is triggered. |
| `createTime` | Output only. Timestamp when the guardrail was created. Uses RFC 3339, where generated output is always Z-normalized and uses 0, 3, 6, or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: `"2014-10-02T15:01:23Z"`, `"2014-10-02T15:01:23.045123456Z"`, `"2014-10-02T15:01:23+05:30"`. |
| `updateTime` | Output only. Timestamp when the guardrail was last updated. Uses RFC 3339 with the same normalization and accepted offsets as `createTime`. |
| `etag` | Etag used to ensure the object hasn't changed during a read-modify-write operation. If the etag is empty, the update overwrites any concurrent changes. |
| Union field `guardrail_type`. Guardrail type. `guardrail_type` can be only one of the following: | |
| `contentFilter` | Optional. Guardrail that bans certain content from being used in the conversation. |
| `llmPromptSecurity` | Optional. Guardrail that blocks the conversation if the prompt is considered unsafe based on the LLM classification. |
| `llmPolicy` | Optional. Guardrail that blocks the conversation if the LLM response is considered to violate the policy based on the LLM classification. |
| `modelSafety` | Optional. Guardrail that blocks the conversation if the LLM response is considered unsafe based on the model safety settings. |
| `codeCallback` | Optional. Guardrail that potentially blocks the conversation based on the result of the callback execution. |
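To make the shape of the resource concrete, here is a minimal guardrail sketched as a Python dict mirroring the JSON representation above. Field names come from this reference; the display name, banned phrase, and response text are illustrative assumptions, not values the service prescribes.

```python
# A minimal Guardrail resource as a Python dict. Values marked below are
# illustrative assumptions; field names follow the reference above.
guardrail = {
    "displayName": "Profanity filter",      # Required; assumed value.
    "description": "Blocks a small banned-phrase list.",
    "enabled": True,
    # guardrail_type is a union: exactly one of its members may be set.
    "contentFilter": {
        "bannedContents": ["example banned phrase"],
        "matchType": "WORD_BOUNDARY_STRING_MATCH",
    },
    # Action taken when the guardrail is triggered.
    "action": {
        "respondImmediately": {
            "responses": [{"text": "Sorry, I can't help with that."}]
        }
    },
}

# Sanity check the union constraint: exactly one guardrail_type member set.
UNION_KEYS = {"contentFilter", "llmPromptSecurity", "llmPolicy",
              "modelSafety", "codeCallback"}
assert len(UNION_KEYS & guardrail.keys()) == 1
```

Note that `name`, `createTime`, `updateTime`, and `etag` are omitted: `name` is an identifier and the others are output-only, so none are supplied on create.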
Guardrail.ContentFilter
Guardrail that bans certain content from being used in the conversation.
JSON representation:

```json
{
  "bannedContents": [
    string
  ],
  "bannedContentsInUserInput": [
    string
  ],
  "bannedContentsInAgentResponse": [
    string
  ],
  "matchType": enum (Guardrail.ContentFilter.MatchType),
  "disregardDiacritics": boolean
}
```
| Fields | |
|---|---|
| `bannedContents[]` | Optional. List of banned phrases. Applies to both user inputs and agent responses. |
| `bannedContentsInUserInput[]` | Optional. List of banned phrases. Applies only to user inputs. |
| `bannedContentsInAgentResponse[]` | Optional. List of banned phrases. Applies only to agent responses. |
| `matchType` | Required. Match type for the content filter. |
| `disregardDiacritics` | Optional. If true, diacritics are ignored during matching. |
Guardrail.ContentFilter.MatchType
Match type for the content filter.
| Enums | |
|---|---|
| `MATCH_TYPE_UNSPECIFIED` | Match type is not specified. |
| `SIMPLE_STRING_MATCH` | Content is matched for substrings character by character. |
| `WORD_BOUNDARY_STRING_MATCH` | Content matches only if the pattern found in the text is surrounded by word delimiters. Banned phrases can also contain word delimiters. |
| `REGEXP_MATCH` | Content is matched using regular expression syntax. |
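The difference between the three match types can be approximated with Python's `re` module. This is a sketch of the documented semantics, not the service's actual matcher:

```python
import re

def matches(text: str, phrase: str, match_type: str) -> bool:
    """Illustrative approximation of the ContentFilter match types."""
    if match_type == "SIMPLE_STRING_MATCH":
        # Plain substring search, character by character.
        return phrase in text
    if match_type == "WORD_BOUNDARY_STRING_MATCH":
        # Match only when the phrase is surrounded by word delimiters.
        return re.search(r"\b" + re.escape(phrase) + r"\b", text) is not None
    if match_type == "REGEXP_MATCH":
        # The banned "phrase" is treated as a regular expression.
        return re.search(phrase, text) is not None
    raise ValueError(f"unsupported match type: {match_type}")

# "cat" as a substring matches inside "concatenate"...
assert matches("concatenate these", "cat", "SIMPLE_STRING_MATCH")
# ...but not as a whole word.
assert not matches("concatenate these", "cat", "WORD_BOUNDARY_STRING_MATCH")
assert matches("my cat sleeps", "cat", "WORD_BOUNDARY_STRING_MATCH")
assert matches("order #12345", r"#\d+", "REGEXP_MATCH")
```

The word-boundary variant is usually the safest default for natural-language phrases, since plain substring matching triggers on incidental overlaps like the "cat" in "concatenate".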
Guardrail.LlmPromptSecurity
Guardrail that blocks the conversation if the input is considered unsafe based on the LLM classification.
JSON representation:

```json
{
  // Union field security_config can be only one of the following:
  "defaultSettings": {
    object (Guardrail.LlmPromptSecurity.DefaultSecuritySettings)
  },
  "customPolicy": {
    object (Guardrail.LlmPolicy)
  }
  // End of list of possible types for union field security_config.
}
```
| Fields | |
|---|---|
| Union field `security_config`. Defines the security configuration mode. The user must choose one of the following configurations. `security_config` can be only one of the following: | |
| `defaultSettings` | Optional. Use the system's predefined default security settings. To select this mode, include an empty `defaultSettings` message in the request. The `defaultPromptTemplate` field within will be populated by the server in the response. |
| `customPolicy` | Optional. Use a user-defined `LlmPolicy` to configure the security guardrail. |
Guardrail.LlmPromptSecurity.DefaultSecuritySettings
Configuration for default system security settings.
JSON representation:

```json
{
  "defaultPromptTemplate": string
}
```
| Fields | |
|---|---|
| `defaultPromptTemplate` | Output only. The default prompt template used by the system. This field is for display purposes, showing the user what prompt the system uses by default. |
Guardrail.LlmPolicy
Guardrail that blocks the conversation if the LLM response is considered violating the policy based on the LLM classification.
JSON representation:

```json
{
  "maxConversationMessages": integer,
  "modelSettings": {
    object (...)
  },
  "prompt": string,
  "policyScope": enum (Guardrail.LlmPolicy.PolicyScope),
  "failOpen": boolean,
  "allowShortUtterance": boolean
}
```
| Fields | |
|---|---|
| `maxConversationMessages` | Optional. When checking this policy, consider the last `n` messages in the conversation. When not set, a default value of 10 is used. |
| `modelSettings` | Optional. Model settings. |
| `prompt` | Required. Policy prompt. |
| `policyScope` | Required. Defines when to apply the policy check during the conversation. If set to |
| `failOpen` | Optional. If an error occurs during the policy check, fail open and do not trigger the guardrail. |
| `allowShortUtterance` | Optional. By default, the LLM policy check is bypassed for short utterances. Enabling this setting applies the policy check to all utterances, including those that would normally be skipped. |
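The `maxConversationMessages` window can be sketched in a few lines. This is an illustrative model of the documented behavior (last `n` messages, default 10), not the service's implementation:

```python
# Sketch: maxConversationMessages bounds the context the policy check sees.
# The default of 10 comes from the field description above.
def policy_check_window(messages, max_conversation_messages=None):
    """Return the slice of the conversation considered by the policy check."""
    n = max_conversation_messages if max_conversation_messages else 10
    return messages[-n:]

history = [f"message {i}" for i in range(25)]
assert len(policy_check_window(history)) == 10                 # default window
assert policy_check_window(history, 2) == ["message 23", "message 24"]
```

Bounding the window keeps the policy-check prompt short and its latency predictable even in long conversations, at the cost of losing older context.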
Guardrail.LlmPolicy.PolicyScope
Defines when to apply the policy check during the conversation.
| Enums | |
|---|---|
| `POLICY_SCOPE_UNSPECIFIED` | Policy scope is not specified. |
| `USER_QUERY` | Policy check is triggered on user input. |
| `AGENT_RESPONSE` | Policy check is triggered on agent response. Applying this policy scope introduces additional latency before the agent can respond. |
| `USER_QUERY_AND_AGENT_RESPONSE` | Policy check is triggered on both user input and agent response. Applying this policy scope introduces additional latency before the agent can respond. |
Guardrail.ModelSafety
Model safety settings overrides. When set, these override the default settings and trigger the guardrail if the response is considered unsafe.
JSON representation:

```json
{
  "safetySettings": [
    {
      object (Guardrail.ModelSafety.SafetySetting)
    }
  ]
}
```
| Fields | |
|---|---|
| `safetySettings[]` | Required. List of safety settings. |
Guardrail.ModelSafety.SafetySetting
Safety setting.
JSON representation:

```json
{
  "category": enum (Guardrail.ModelSafety.HarmCategory),
  "threshold": enum (Guardrail.ModelSafety.HarmBlockThreshold)
}
```
| Fields | |
|---|---|
| `category` | Required. The harm category. |
| `threshold` | Required. The harm block threshold. |
Guardrail.ModelSafety.HarmCategory
Harm category.
| Enums | |
|---|---|
| `HARM_CATEGORY_UNSPECIFIED` | The harm category is unspecified. |
| `HARM_CATEGORY_HATE_SPEECH` | The harm category is hate speech. |
| `HARM_CATEGORY_DANGEROUS_CONTENT` | The harm category is dangerous content. |
| `HARM_CATEGORY_HARASSMENT` | The harm category is harassment. |
| `HARM_CATEGORY_SEXUALLY_EXPLICIT` | The harm category is sexually explicit content. |
Guardrail.ModelSafety.HarmBlockThreshold
Probability-based threshold levels for blocking.
| Enums | |
|---|---|
| `HARM_BLOCK_THRESHOLD_UNSPECIFIED` | Unspecified harm block threshold. |
| `BLOCK_LOW_AND_ABOVE` | Block low threshold and above (i.e. block more). |
| `BLOCK_MEDIUM_AND_ABOVE` | Block medium threshold and above. |
| `BLOCK_ONLY_HIGH` | Block only high threshold (i.e. block less). |
| `BLOCK_NONE` | Block none. |
| `OFF` | Turn off the safety filter. |
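A short sketch of how these thresholds order themselves, assuming a four-level harm-probability scale (the service's actual scoring is internal; the level names and decision logic here are illustrative assumptions):

```python
# Assumed probability scale, least to most likely harmful.
LEVELS = ["NEGLIGIBLE", "LOW", "MEDIUM", "HIGH"]

# Lowest probability level that each threshold blocks.
BLOCK_FLOOR = {
    "BLOCK_LOW_AND_ABOVE": "LOW",       # blocks the most
    "BLOCK_MEDIUM_AND_ABOVE": "MEDIUM",
    "BLOCK_ONLY_HIGH": "HIGH",          # blocks the least
}

def is_blocked(probability: str, threshold: str) -> bool:
    """Illustrative blocking decision for one SafetySetting."""
    if threshold in ("BLOCK_NONE", "OFF", "HARM_BLOCK_THRESHOLD_UNSPECIFIED"):
        return False
    floor = BLOCK_FLOOR[threshold]
    return LEVELS.index(probability) >= LEVELS.index(floor)

# Example ModelSafety payload, one SafetySetting per category (assumed values).
model_safety = {
    "safetySettings": [
        {"category": "HARM_CATEGORY_HATE_SPEECH",
         "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        {"category": "HARM_CATEGORY_HARASSMENT",
         "threshold": "BLOCK_ONLY_HIGH"},
    ]
}

assert is_blocked("HIGH", "BLOCK_MEDIUM_AND_ABOVE")
assert not is_blocked("MEDIUM", "BLOCK_ONLY_HIGH")
assert not is_blocked("HIGH", "OFF")
```

The ordering to remember: `BLOCK_LOW_AND_ABOVE` is the strictest named threshold and `BLOCK_ONLY_HIGH` the most permissive, with `BLOCK_NONE` and `OFF` disabling blocking entirely.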
Guardrail.CodeCallback
Guardrail that blocks the conversation based on the code callbacks provided.
JSON representation:

```json
{
  "beforeAgentCallback": {
    object (...)
  },
  "afterAgentCallback": {
    object (...)
  },
  "beforeModelCallback": {
    object (...)
  },
  "afterModelCallback": {
    object (...)
  }
}
```
| Fields | |
|---|---|
| `beforeAgentCallback` | Optional. The callback to execute before the agent is called. Each callback function is expected to return a structure (e.g., a dict or object) containing at least `'decision'` (either `'OK'` or `'TRIGGER'`) and `'reason'` (a string explaining the decision). A `'TRIGGER'` decision may halt further processing. |
| `afterAgentCallback` | Optional. The callback to execute after the agent is called. The same return contract applies. |
| `beforeModelCallback` | Optional. The callback to execute before the model is called. If there are multiple calls to the model, the callback is executed multiple times. The same return contract applies. |
| `afterModelCallback` | Optional. The callback to execute after the model is called. If there are multiple calls to the model, the callback is executed multiple times. The same return contract applies. |
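The return contract above can be illustrated with a toy callback and runner. The callback body, keyword check, and runner are all assumptions; only the `'decision'`/`'reason'` shape comes from this reference:

```python
# Sketch of the documented callback contract: return a structure with at
# least 'decision' ('OK' or 'TRIGGER') and 'reason' (a string).
def before_agent_callback(user_input: str) -> dict:
    """Hypothetical beforeAgentCallback: flags a banned keyword."""
    if "forbidden" in user_input.lower():
        return {"decision": "TRIGGER", "reason": "banned keyword in input"}
    return {"decision": "OK", "reason": "no issues found"}

def run_with_guardrail(user_input: str) -> str:
    """Toy runner: a TRIGGER decision halts further processing."""
    result = before_agent_callback(user_input)
    if result["decision"] == "TRIGGER":
        return f"blocked: {result['reason']}"
    return "agent response"

assert run_with_guardrail("hello") == "agent response"
assert run_with_guardrail("a Forbidden topic").startswith("blocked:")
```

The `'reason'` string matters operationally: it is the only record of *why* a conversation was halted, so make callbacks return reasons specific enough to debug from.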
TriggerAction
Action that is taken when a certain precondition is met.
JSON representation:

```json
{
  // Union field action can be only one of the following:
  "respondImmediately": {
    object (TriggerAction.RespondImmediately)
  },
  "transferAgent": {
    object (TriggerAction.TransferAgent)
  },
  "generativeAnswer": {
    object (TriggerAction.GenerativeAnswer)
  }
  // End of list of possible types for union field action.
}
```
| Fields | |
|---|---|
| Union field `action`. The action to take. `action` can be only one of the following: | |
| `respondImmediately` | Optional. Immediately respond with a preconfigured response. |
| `transferAgent` | Optional. Transfer the conversation to a different agent. |
| `generativeAnswer` | Optional. Respond with a generative answer. |
TriggerAction.RespondImmediately
The agent will immediately respond with a preconfigured response.
JSON representation:

```json
{
  "responses": [
    {
      object (TriggerAction.Response)
    }
  ]
}
```
| Fields | |
|---|---|
| `responses[]` | Required. The canned responses for the agent to choose from. The response is chosen randomly. |
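The random selection can be sketched as follows. Skipping responses whose `disabled` flag is set is an assumption consistent with the `TriggerAction.Response` description below; the response texts are illustrative:

```python
import random

# Example responses list (assumed values), mirroring TriggerAction.Response.
responses = [
    {"text": "I can't help with that."},
    {"text": "Let's talk about something else.", "disabled": False},
    {"text": "Deprecated wording.", "disabled": True},
]

def pick_response(responses):
    """Choose randomly among responses that are not disabled."""
    candidates = [r["text"] for r in responses if not r.get("disabled", False)]
    return random.choice(candidates)

chosen = pick_response(responses)
assert chosen in {"I can't help with that.", "Let's talk about something else."}
```

Configuring several phrasings of the same refusal keeps repeated triggers from sounding robotic, while the `disabled` flag lets you retire a wording without deleting it.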
TriggerAction.Response
Represents a response from the agent.
JSON representation:

```json
{
  "text": string,
  "disabled": boolean
}
```
| Fields | |
|---|---|
| `text` | Required. Text for the agent to respond with. |
| `disabled` | Optional. Whether the response is disabled. Disabled responses are not used by the agent. |
TriggerAction.TransferAgent
The agent will transfer the conversation to a different agent.
JSON representation:

```json
{
  "agent": string
}
```
| Fields | |
|---|---|
| `agent` | Required. The name of the agent to transfer the conversation to. The agent must be in the same app as the current agent. Format: |
TriggerAction.GenerativeAnswer
The agent will immediately respond with a generative answer.
JSON representation:

```json
{
  "prompt": string
}
```
| Fields | |
|---|---|
| `prompt` | Required. The prompt to use for the generative answer. |
| Methods | |
|---|---|
| `create` | Creates a new guardrail in the given app. |
| `delete` | Deletes the specified guardrail. |
| `get` | Gets details of the specified guardrail. |
| `list` | Lists guardrails in the given app. |
| `patch` | Updates the specified guardrail. |
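The update method pairs with the resource's `etag` field for optimistic concurrency. The in-memory store below is a sketch of the documented behavior only (a stale etag is rejected; an empty etag overwrites concurrent changes blindly); the class and its method names are not part of the API:

```python
import uuid

class GuardrailStore:
    """Toy server illustrating etag-based read-modify-write."""

    def __init__(self):
        self._resource = {"displayName": "v1"}
        self._etag = uuid.uuid4().hex

    def get(self):
        # Reads return the resource together with its current etag.
        return dict(self._resource), self._etag

    def update(self, resource, etag=""):
        # A non-empty, stale etag is rejected; an empty etag overwrites.
        if etag and etag != self._etag:
            raise RuntimeError("etag mismatch: resource changed concurrently")
        self._resource = dict(resource)
        self._etag = uuid.uuid4().hex   # every write mints a new etag
        return self._etag

store = GuardrailStore()
resource, etag = store.get()
resource["displayName"] = "v2"
store.update(resource, etag)            # succeeds: etag still current
try:
    store.update(resource, etag)        # fails: etag is now stale
    raised = False
except RuntimeError:
    raised = True
assert raised
```

In practice: always `get` the guardrail first, carry its etag into the `patch` request, and retry the whole read-modify-write cycle on a mismatch rather than sending an empty etag.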