str
The name of an LLM model used for parsing. Format:
- projects/{project_id}/locations/{location}/publishers/{publisher}/models/{model}
max_parsing_requests_per_min
int
The maximum number of requests the job is allowed to make to
the LLM model per minute. Consult
https://cloud.google.com/vertex-ai/generative-ai/docs/quotas
and your document size to set an appropriate value here. If
unspecified, a default value of 5000 QPM is used.
global_max_parsing_requests_per_min
int
The maximum number of requests the job is allowed to make to
the LLM model per minute in this project. Consult
https://cloud.google.com/vertex-ai/generative-ai/docs/quotas
and your document size to set an appropriate value here. If
this value is not specified, the indexing pipeline job will
use max_parsing_requests_per_min as the global limit.
custom_parsing_prompt
str
The prompt to use for parsing. If not specified, a default
prompt will be used.
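Taken together, these fields describe the LLM-parser configuration. A minimal sketch, assuming a plain Python dict that mirrors the proto field names above (the project, location, model, and prompt values are placeholders, not real resources), including the documented fallback from global_max_parsing_requests_per_min to max_parsing_requests_per_min:

```python
# Hypothetical LLM-parser configuration mirroring the field names above.
# All resource values below are placeholders for illustration only.
llm_parser_config = {
    "model_name": (
        "projects/my-project/locations/us-central1/"
        "publishers/google/models/my-model"
    ),
    # Per-job request cap; defaults to 5000 QPM if unset.
    "max_parsing_requests_per_min": 1000,
    # Project-wide cap; falls back to max_parsing_requests_per_min if unset.
    "global_max_parsing_requests_per_min": 4000,
    # Optional; a default prompt is used when omitted.
    "custom_parsing_prompt": "Extract all tables and headings as Markdown.",
}


def effective_global_limit(cfg: dict) -> int:
    """Resolve the project-wide QPM limit as documented: the global field
    wins; otherwise the per-job field; otherwise the 5000 QPM default."""
    return cfg.get(
        "global_max_parsing_requests_per_min",
        cfg.get("max_parsing_requests_per_min", 5000),
    )
```

The helper only illustrates the documented fallback order; the actual enforcement happens inside the indexing pipeline job, not in client code.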