public enum LlmModelSettings.Parameters.InputTokenLimit extends Enum<LlmModelSettings.Parameters.InputTokenLimit> implements ProtocolMessageEnum

The input token limit for a single LLM call. For the limit of each model, see https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models.

Protobuf enum google.cloud.dialogflow.cx.v3beta1.LlmModelSettings.Parameters.InputTokenLimit

Implements
ProtocolMessageEnum

Static Fields

| Name | Description |
|---|---|
| INPUT_TOKEN_LIMIT_LONG | Input token limit up to 100k. |
| INPUT_TOKEN_LIMIT_LONG_VALUE | Input token limit up to 100k. |
| INPUT_TOKEN_LIMIT_MEDIUM | Input token limit up to 32k. |
| INPUT_TOKEN_LIMIT_MEDIUM_VALUE | Input token limit up to 32k. |
| INPUT_TOKEN_LIMIT_SHORT | Input token limit up to 8k. |
| INPUT_TOKEN_LIMIT_SHORT_VALUE | Input token limit up to 8k. |
| INPUT_TOKEN_LIMIT_UNSPECIFIED | Limit not specified. Treated as 'INPUT_TOKEN_LIMIT_SHORT'. |
| INPUT_TOKEN_LIMIT_UNSPECIFIED_VALUE | Limit not specified. Treated as 'INPUT_TOKEN_LIMIT_SHORT'. |
| UNRECOGNIZED | |
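A minimal sketch of how these constants relate to each other, assuming the google-cloud-dialogflow-cx client library is on the classpath. The `setInputTokenLimit`/`getInputTokenLimit` accessors on `Parameters` are assumed from the standard protobuf-generated builder pattern for the nested `input_token_limit` field:

```java
import com.google.cloud.dialogflow.cx.v3beta1.LlmModelSettings;
import com.google.cloud.dialogflow.cx.v3beta1.LlmModelSettings.Parameters.InputTokenLimit;

public class InputTokenLimitExample {
    public static void main(String[] args) {
        // Each enum constant has a matching *_VALUE int constant holding its proto wire number.
        InputTokenLimit limit = InputTokenLimit.INPUT_TOKEN_LIMIT_MEDIUM;
        System.out.println(limit.getNumber() == InputTokenLimit.INPUT_TOKEN_LIMIT_MEDIUM_VALUE); // true

        // INPUT_TOKEN_LIMIT_UNSPECIFIED is the proto3 default and, per the docs above,
        // is treated by the service as INPUT_TOKEN_LIMIT_SHORT.
        LlmModelSettings.Parameters params =
            LlmModelSettings.Parameters.newBuilder()
                .setInputTokenLimit(limit)
                .build();
        System.out.println(params.getInputTokenLimit()); // prints the constant's name
    }
}
```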
Static Methods

| Name | Description |
|---|---|
| forNumber(int value) | Returns the enum constant associated with the given numeric wire value, or null if none matches. |
| getDescriptor() | Returns the Descriptors.EnumDescriptor for this enum type. |
| internalGetValueMap() | Returns the internal map used to look up enum values by number. |
| valueOf(Descriptors.EnumValueDescriptor desc) | Returns the enum constant for the given value descriptor. |
| valueOf(int value) | Deprecated. Use #forNumber(int) instead. |
| valueOf(String name) | Returns the enum constant with the specified name. |
| values() | Returns an array containing the constants of this enum type. |
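The lookup methods above follow the usual protobuf-generated enum contract. A short sketch, again assuming the google-cloud-dialogflow-cx library is available, showing why `forNumber(int)` is preferred over the deprecated `valueOf(int)`:

```java
import com.google.cloud.dialogflow.cx.v3beta1.LlmModelSettings.Parameters.InputTokenLimit;

public class InputTokenLimitLookup {
    public static void main(String[] args) {
        // Look up by name; throws IllegalArgumentException for unknown names.
        InputTokenLimit byName = InputTokenLimit.valueOf("INPUT_TOKEN_LIMIT_MEDIUM");

        // Look up by wire number; forNumber returns null for unknown numbers
        // instead of throwing, which is why it replaces the deprecated valueOf(int).
        InputTokenLimit byNumber = InputTokenLimit.forNumber(byName.getNumber());
        System.out.println(byNumber == byName); // true

        // Iterate all declared constants, skipping UNRECOGNIZED, which has no
        // wire number and whose getNumber() throws IllegalArgumentException.
        for (InputTokenLimit v : InputTokenLimit.values()) {
            if (v != InputTokenLimit.UNRECOGNIZED) {
                System.out.println(v.name() + " = " + v.getNumber());
            }
        }
    }
}
```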
Methods

| Name | Description |
|---|---|
| getDescriptorForType() | Returns the Descriptors.EnumDescriptor of this constant's enum type. |
| getNumber() | Returns the numeric wire value of this constant. Throws IllegalArgumentException for UNRECOGNIZED. |
| getValueDescriptor() | Returns the Descriptors.EnumValueDescriptor for this constant. |
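The instance methods expose protobuf reflection descriptors, useful for generic tooling. A hedged sketch of the standard descriptor round-trips, assuming the same client library dependency:

```java
import com.google.cloud.dialogflow.cx.v3beta1.LlmModelSettings.Parameters.InputTokenLimit;
import com.google.protobuf.Descriptors;

public class InputTokenLimitDescriptors {
    public static void main(String[] args) {
        InputTokenLimit limit = InputTokenLimit.INPUT_TOKEN_LIMIT_LONG;

        // Descriptor for this specific value, e.g. for reflection-based tooling.
        Descriptors.EnumValueDescriptor valueDesc = limit.getValueDescriptor();
        System.out.println(valueDesc.getName()); // INPUT_TOKEN_LIMIT_LONG

        // Descriptor for the enum type itself; same object as the static getDescriptor().
        Descriptors.EnumDescriptor typeDesc = limit.getDescriptorForType();
        System.out.println(typeDesc == InputTokenLimit.getDescriptor()); // true

        // Round-trip: valueOf(EnumValueDescriptor) recovers the original constant.
        System.out.println(InputTokenLimit.valueOf(valueDesc) == limit); // true
    }
}
```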