LLMMetric(
*,
name: typing.Optional[str] = None,
customFunction: typing.Optional[typing.Callable[[...], typing.Any]] = None,
promptTemplate: typing.Optional[str] = None,
judgeModel: typing.Optional[str] = None,
judgeModelGenerationConfig: typing.Optional[
google.genai.types.GenerationConfig
] = None,
judgeModelSamplingCount: typing.Optional[int] = None,
judgeModelSystemInstruction: typing.Optional[str] = None,
returnRawOutput: typing.Optional[bool] = None,
parseAndReduceFn: typing.Optional[typing.Callable[[...], typing.Any]] = None,
aggregateSummaryFn: typing.Optional[typing.Callable[[...], typing.Any]] = None,
rubricGroupName: typing.Optional[str] = None,
metricSpecParameters: typing.Optional[dict[str, typing.Any]] = None,
**extra_data: typing.Any
)

A metric that uses LLM-as-a-judge for evaluation.
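For example, a minimal sketch of constructing a judge-based metric. The prompt wording, the {response} placeholder, and the judge model ID are illustrative assumptions, not documented defaults; only the keyword names come from the signature above:

```python
from vertexai._genai import types  # module path as documented on this page

# All values below are illustrative.
fluency = types.LLMMetric(
    name="fluency",
    promptTemplate=(
        "Rate the fluency of the following response on a scale of 1 to 5.\n"
        "Response: {response}"
    ),
    judgeModel="gemini-2.5-flash",
    judgeModelSamplingCount=4,
)
```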
Methods
LLMMetric
LLMMetric(
*,
name: typing.Optional[str] = None,
customFunction: typing.Optional[typing.Callable[[...], typing.Any]] = None,
promptTemplate: typing.Optional[str] = None,
judgeModel: typing.Optional[str] = None,
judgeModelGenerationConfig: typing.Optional[
google.genai.types.GenerationConfig
] = None,
judgeModelSamplingCount: typing.Optional[int] = None,
judgeModelSystemInstruction: typing.Optional[str] = None,
returnRawOutput: typing.Optional[bool] = None,
parseAndReduceFn: typing.Optional[typing.Callable[[...], typing.Any]] = None,
aggregateSummaryFn: typing.Optional[typing.Callable[[...], typing.Any]] = None,
rubricGroupName: typing.Optional[str] = None,
metricSpecParameters: typing.Optional[dict[str, typing.Any]] = None,
**extra_data: typing.Any
)

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.
load
load(
config_path: str, client: typing.Optional[typing.Any] = None
) -> vertexai._genai.types.common.LLMMetric

Loads a metric configuration from a YAML or JSON file.

This method creates an LLMMetric instance from a local file path or a Google Cloud Storage (GCS) URI, automatically detecting the file type (.yaml, .yml, or .json) and parsing it accordingly.
Exceptions

| Type | Description |
|---|---|
| ValueError | If the file path is invalid or the file content cannot be parsed. |
| ImportError | If a required library like PyYAML or google-cloud-storage is not installed. |
| IOError | If the file cannot be read from the specified path. |
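A hedged sketch of load() under the behavior described above; both paths are hypothetical:

```python
from vertexai._genai import types

# Local file: the extension (.yaml, .yml, or .json) selects the parser.
metric = types.LLMMetric.load("metrics/fluency.yaml")

# GCS URI: reading it requires google-cloud-storage to be installed.
metric = types.LLMMetric.load("gs://my-bucket/metrics/fluency.json")
```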
model_post_init
model_post_init(context: Any, /) -> None

This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that's what pydantic-core passes when calling it.
validate_judge_model_sampling_count
validate_judge_model_sampling_count(
value: typing.Optional[int],
) -> typing.Optional[int]

Validates that judge_model_sampling_count is between 1 and 32.
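Since this is a field validator, an out-of-range count should fail at construction time. A sketch, assuming pydantic surfaces the failure as a ValidationError (the template string is illustrative):

```python
import pydantic
from vertexai._genai import types

# Within [1, 32]: accepted.
types.LLMMetric(name="m", promptTemplate="{response}", judgeModelSamplingCount=32)

try:
    # Below 1: rejected by the validator.
    types.LLMMetric(name="m", promptTemplate="{response}", judgeModelSamplingCount=0)
except pydantic.ValidationError as err:
    print(err)
```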
validate_prompt_template
validate_prompt_template(
value: typing.Union[str, vertexai._genai.types.common.MetricPromptBuilder],
) -> str

Validates that the prompt template is a non-empty string.
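A matching sketch for the template check; the placeholder name is illustrative and the ValidationError is an assumption about how pydantic reports the failure:

```python
import pydantic
from vertexai._genai import types

# A non-empty string passes validation.
metric = types.LLMMetric(name="m", promptTemplate="Evaluate: {response}")

try:
    types.LLMMetric(name="m", promptTemplate="")  # empty template: rejected
except pydantic.ValidationError:
    print("empty template rejected")
```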