public enum LlmModelSettings.Parameters.OutputTokenLimit extends Enum<LlmModelSettings.Parameters.OutputTokenLimit> implements ProtocolMessageEnum

The output token limits for one LLM call. The limits are subject to change; for the limit of each model, see https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models.

Protobuf enum google.cloud.dialogflow.cx.v3beta1.LlmModelSettings.Parameters.OutputTokenLimit

Implements
ProtocolMessageEnum

Static Fields
| Name | Description |
|---|---|
| OUTPUT_TOKEN_LIMIT_LONG | Output token limit up to 2k tokens. |
| OUTPUT_TOKEN_LIMIT_LONG_VALUE | Output token limit up to 2k tokens. |
| OUTPUT_TOKEN_LIMIT_MEDIUM | Output token limit up to 1k tokens. |
| OUTPUT_TOKEN_LIMIT_MEDIUM_VALUE | Output token limit up to 1k tokens. |
| OUTPUT_TOKEN_LIMIT_SHORT | Output token limit up to 512 tokens. |
| OUTPUT_TOKEN_LIMIT_SHORT_VALUE | Output token limit up to 512 tokens. |
| OUTPUT_TOKEN_LIMIT_UNSPECIFIED | Limit not specified. |
| OUTPUT_TOKEN_LIMIT_UNSPECIFIED_VALUE | Limit not specified. |
| UNRECOGNIZED | |
Static Methods

| Name | Description |
|---|---|
| forNumber(int value) | |
| getDescriptor() | |
| internalGetValueMap() | |
| valueOf(Descriptors.EnumValueDescriptor desc) | |
| valueOf(int value) | Deprecated. Use #forNumber(int) instead. |
| valueOf(String name) | |
| values() | |
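The relationship between the static lookup methods and the `*_VALUE` constants can be sketched with a minimal stand-in enum. This is NOT the generated Dialogflow class, and the wire numbers used here (UNSPECIFIED = 0, SHORT = 1, MEDIUM = 2, LONG = 3) are assumptions for illustration; `forNumber(int)` is preferred over the deprecated `valueOf(int)` because it returns null for unknown numbers instead of throwing.

```java
// Minimal self-contained sketch of a generated protobuf enum (assumed wire
// numbers; not the real OutputTokenLimit class).
public enum OutputTokenLimitSketch {
    OUTPUT_TOKEN_LIMIT_UNSPECIFIED(0),
    OUTPUT_TOKEN_LIMIT_SHORT(1),
    OUTPUT_TOKEN_LIMIT_MEDIUM(2),
    OUTPUT_TOKEN_LIMIT_LONG(3),
    UNRECOGNIZED(-1);

    private final int number;

    OutputTokenLimitSketch(int number) {
        this.number = number;
    }

    // Mirrors ProtocolMessageEnum#getNumber: the value used on the wire.
    public int getNumber() {
        if (this == UNRECOGNIZED) {
            throw new IllegalArgumentException(
                "Can't get the number of an unknown enum value.");
        }
        return number;
    }

    // Mirrors the static forNumber(int): returns null for unknown numbers
    // rather than throwing, which is why it supersedes valueOf(int).
    public static OutputTokenLimitSketch forNumber(int value) {
        switch (value) {
            case 0: return OUTPUT_TOKEN_LIMIT_UNSPECIFIED;
            case 1: return OUTPUT_TOKEN_LIMIT_SHORT;
            case 2: return OUTPUT_TOKEN_LIMIT_MEDIUM;
            case 3: return OUTPUT_TOKEN_LIMIT_LONG;
            default: return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(forNumber(2));                        // OUTPUT_TOKEN_LIMIT_MEDIUM
        System.out.println(OUTPUT_TOKEN_LIMIT_LONG.getNumber()); // 3
        System.out.println(forNumber(99));                       // null
    }
}
```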
Methods

| Name | Description |
|---|---|
| getDescriptorForType() | |
| getNumber() | |
| getValueDescriptor() | |
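The name-based helpers inherited from `java.lang.Enum` in the tables above, `values()`, `valueOf(String)`, and `name()`, behave as on any Java enum. A sketch with a stand-in enum (not the generated class):

```java
// Stand-in enum used only to demonstrate the inherited name-based helpers.
enum LimitSketch { OUTPUT_TOKEN_LIMIT_SHORT, OUTPUT_TOKEN_LIMIT_MEDIUM, OUTPUT_TOKEN_LIMIT_LONG }

public class EnumNameDemo {
    public static void main(String[] args) {
        // valueOf(String) requires the exact constant name; an unknown name
        // throws IllegalArgumentException.
        LimitSketch m = LimitSketch.valueOf("OUTPUT_TOKEN_LIMIT_MEDIUM");
        System.out.println(m.name());                    // OUTPUT_TOKEN_LIMIT_MEDIUM
        System.out.println(LimitSketch.values().length); // 3
    }
}
```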