``Parameters(mapping=None, *, ignore_unknown_fields=False, **kwargs)``

Generative model parameters to control the model behavior.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
.. list-table:: Attributes
   :widths: 25 75
   :header-rows: 1

   * - Name
     - Description
   * - ``temperature``
     - float

       The temperature used for sampling during response generation.
       Temperature controls the degree of randomness in token selection:
       lower values mean less randomness, higher values mean more.
       Valid range: [0.0, 1.0].

       This field is a member of `oneof`_ ``_temperature``.
   * - ``input_token_limit``
     - google.cloud.dialogflowcx_v3beta1.types.LlmModelSettings.Parameters.InputTokenLimit

       The input token limit. This setting is currently only supported
       by playbooks.

       This field is a member of `oneof`_ ``_input_token_limit``.
   * - ``output_token_limit``
     - google.cloud.dialogflowcx_v3beta1.types.LlmModelSettings.Parameters.OutputTokenLimit

       The output token limit. This setting is currently only supported
       by playbooks. Only one of ``output_token_limit`` and
       ``max_output_tokens`` may be set.

       This field is a member of `oneof`_ ``_output_token_limit``.
Classes

``InputTokenLimit``
    ``InputTokenLimit(value)``

    The input token limits for one LLM call. For each model's limit, see
    https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models.

``OutputTokenLimit``
    ``OutputTokenLimit(value)``

    The output token limits for one LLM call. The limits are subject to
    change. For each model's limit, see
    https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models.