Package Classes (1.129.0)

Summary of the classes in the aiplatform package.

Classes

Client

Gen AI Client for the Vertex SDK.

Use this client to interact with Vertex-specific Gemini features.
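
A minimal sketch of instantiating the client (assuming the vertexai.Client entry point; the project and location values are placeholders):

```
import vertexai

# Placeholder project ID and region; replace with your own.
client = vertexai.Client(project="my-project", location="us-central1")

# Client modules correspond to the classes listed below, e.g.:
# client.agent_engines, client.evals, client.prompts, client.prompt_optimizer
```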

AgentEngines

API documentation for the AgentEngines class.

AsyncAgentEngines

API documentation for the AsyncAgentEngines class.

AsyncEvals

API documentation for the AsyncEvals class.

Evals

API documentation for the Evals class.

AsyncPromptOptimizer

Prompt Optimizer

PromptOptimizer

Prompt Optimizer

AsyncPrompts

API documentation for the AsyncPrompts class.

Prompts

API documentation for the Prompts class.

AcceleratorType

Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.

AgentEngine

An agent engine instance.

AgentEngineConfig

Config for agent engine methods.

AgentEngineConfigDict

Config for agent engine methods.

AgentEngineDict

An agent engine instance.

AgentEngineGenerateMemoriesOperation

Operation that generates memories for an agent engine.

AgentEngineGenerateMemoriesOperationDict

Operation that generates memories for an agent engine.

AgentEngineMemoryConfig

Config for creating a Memory.

AgentEngineMemoryConfigDict

Config for creating a Memory.

AgentEngineMemoryOperation

Operation that has an agent engine memory as a response.

AgentEngineMemoryOperationDict

Operation that has an agent engine memory as a response.

AgentEngineOperation

Operation that has an agent engine as a response.

AgentEngineOperationDict

Operation that has an agent engine as a response.

AgentEngineRollbackMemoryOperation

Operation that rolls back a memory.

AgentEngineRollbackMemoryOperationDict

Operation that rolls back a memory.

AgentEngineSandboxOperation

Operation that has an agent engine sandbox as a response.

AgentEngineSandboxOperationDict

Operation that has an agent engine sandbox as a response.

AgentEngineSessionOperation

Operation that has an agent engine session as a response.

AgentEngineSessionOperationDict

Operation that has an agent engine session as a response.

AgentServerMode

The agent server mode.

AggregatedMetricResult

Evaluation result for a single metric for an evaluation dataset.

AggregatedMetricResultDict

Evaluation result for a single metric for an evaluation dataset.

AppendAgentEngineSessionEventConfig

Config for appending agent engine session event.

AppendAgentEngineSessionEventConfigDict

Config for appending agent engine session event.

AppendAgentEngineSessionEventResponse

Response for appending agent engine session event.

AppendAgentEngineSessionEventResponseDict

Response for appending agent engine session event.

ApplicableGuideline

Applicable guideline for the optimize_prompt method.

ApplicableGuidelineDict

Applicable guideline for the optimize_prompt method.

AssembleDataset

Represents the assembled dataset.

AssembleDatasetConfig

Config for assembling a multimodal dataset resource.

AssembleDatasetConfigDict

Config for assembling a multimodal dataset resource.

AssembleDatasetDict

Represents the assembled dataset.

AssessDatasetConfig

Config for assessing a multimodal dataset resource.

AssessDatasetConfigDict

Config for assessing a multimodal dataset resource.

BatchPredictionResourceUsageAssessmentConfig

Config for batch prediction resource usage assessment.

BatchPredictionResourceUsageAssessmentConfigDict

Config for batch prediction resource usage assessment.

BatchPredictionResourceUsageAssessmentResult

Result of batch prediction resource usage assessment.

BatchPredictionResourceUsageAssessmentResultDict

Result of batch prediction resource usage assessment.

BatchPredictionValidationAssessmentConfig

Config for batch prediction validation assessment.

BatchPredictionValidationAssessmentConfigDict

Config for batch prediction validation assessment.

BigQueryRequestSet

Represents a BigQuery request set.

BigQueryRequestSetDict

Represents a BigQuery request set.

BigQuerySource

The BigQuery location for the input content.

BigQuerySourceDict

The BigQuery location for the input content.

BleuInput

Bleu input.

BleuInputDict

Bleu input.

BleuInstance

Bleu instance.

BleuInstanceDict

Bleu instance.

BleuMetricValue

Bleu metric value for an instance.

BleuMetricValueDict

Bleu metric value for an instance.

BleuResults

Results for bleu metric.

BleuResultsDict

Results for bleu metric.

CandidateResponse

Responses from model or agent.

CandidateResponseDict

Responses from model or agent.

CandidateResult

Result for a single candidate.

CandidateResultDict

Result for a single candidate.

Chunk

A chunk of data.

ChunkDict

A chunk of data.

CometResult

Spec for Comet result - calculates the comet score for the given instance using the version specified in the spec.

CometResultDict

Spec for Comet result - calculates the comet score for the given instance using the version specified in the spec.

ContainerSpec

The spec of a Container.

ContainerSpecDict

The spec of a Container.

ContentMap

Map of placeholders in the metric prompt template to contents of the model input.

ContentMapContents

Map of placeholders in the metric prompt template to contents of the model input.

ContentMapContentsDict

Map of placeholders in the metric prompt template to contents of the model input.

ContentMapDict

Map of placeholders in the metric prompt template to contents of the model input.

CreateAgentEngineConfig

Config for creating an agent engine.

CreateAgentEngineConfigDict

Config for creating an agent engine.

CreateAgentEngineSandboxConfig

Config for creating a Sandbox.

CreateAgentEngineSandboxConfigDict

Config for creating a Sandbox.

CreateAgentEngineSessionConfig

Config for creating a Session.

CreateAgentEngineSessionConfigDict

Config for creating a Session.

CreateDatasetConfig

Config for creating a dataset resource to store prompts.

CreateDatasetConfigDict

Config for creating a dataset resource to store prompts.

CreateDatasetVersionConfig

Config for creating a dataset version resource to store prompts.

CreateDatasetVersionConfigDict

Config for creating a dataset version resource to store prompts.

CreateEvaluationItemConfig

Config to create an evaluation item.

CreateEvaluationItemConfigDict

Config to create an evaluation item.

CreateEvaluationRunConfig

Config to create an evaluation run.

CreateEvaluationRunConfigDict

Config to create an evaluation run.

CreateEvaluationSetConfig

Config to create an evaluation set.

CreateEvaluationSetConfigDict

Config to create an evaluation set.

CreateMultimodalDatasetConfig

Config for creating a dataset resource to store multimodal dataset.

CreateMultimodalDatasetConfigDict

Config for creating a dataset resource to store multimodal dataset.

CreatePromptConfig

Config for creating a prompt.

CreatePromptConfigDict

Config for creating a prompt.

CreatePromptVersionConfig

Config for creating a prompt version.

CreatePromptVersionConfigDict

Config for creating a prompt version.

CustomJob

Represents a job that runs custom workloads such as a Docker container or a Python package.

CustomJobDict

Represents a job that runs custom workloads such as a Docker container or a Python package.

CustomJobSpec

Represents a job that runs custom workloads such as a Docker container or a Python package.

CustomJobSpecDict

Represents a job that runs custom workloads such as a Docker container or a Python package.

CustomOutput

Spec for custom output.

CustomOutputDict

Spec for custom output.

Dataset

Represents a dataset resource to store prompts.

DatasetDict

Represents a dataset resource to store prompts.

DatasetOperation

Represents the create dataset operation.

DatasetOperationDict

Represents the create dataset operation.

DatasetVersion

Represents a dataset version resource to store prompts.

DatasetVersionDict

Represents a dataset version resource to store prompts.

DeleteAgentEngineConfig

Config for deleting agent engine.

DeleteAgentEngineConfigDict

Config for deleting agent engine.

DeleteAgentEngineMemoryConfig

Config for deleting an Agent Engine Memory.

DeleteAgentEngineMemoryConfigDict

Config for deleting an Agent Engine Memory.

DeleteAgentEngineMemoryOperation

Operation for deleting agent engine memories.

DeleteAgentEngineMemoryOperationDict

Operation for deleting agent engine memories.

DeleteAgentEngineOperation

Operation for deleting agent engines.

DeleteAgentEngineOperationDict

Operation for deleting agent engines.

DeleteAgentEngineSandboxConfig

Config for deleting an Agent Engine Sandbox.

DeleteAgentEngineSandboxConfigDict

Config for deleting an Agent Engine Sandbox.

DeleteAgentEngineSandboxOperation

Operation for deleting agent engine sandboxes.

DeleteAgentEngineSandboxOperationDict

Operation for deleting agent engine sandboxes.

DeleteAgentEngineSessionConfig

Config for deleting an Agent Engine Session.

DeleteAgentEngineSessionConfigDict

Config for deleting an Agent Engine Session.

DeleteAgentEngineSessionOperation

Operation for deleting agent engine sessions.

DeleteAgentEngineSessionOperationDict

Operation for deleting agent engine sessions.

DeletePromptConfig

Config for deleting a prompt.

DeletePromptConfigDict

Config for deleting a prompt.

DeletePromptOperation

Operation for deleting prompts.

DeletePromptOperationDict

Operation for deleting prompts.

DeletePromptVersionOperation

Operation for deleting prompt versions.

DeletePromptVersionOperationDict

Operation for deleting prompt versions.

DiskSpec

Represents the spec of disk options.

DiskSpecDict

Represents the spec of disk options.

DnsPeeringConfig

DNS peering configuration. These configurations are used to create DNS peering zones in the Vertex tenant project VPC, enabling resolution of records within the specified domain hosted in the target network's Cloud DNS.

DnsPeeringConfigDict

DNS peering configuration. These configurations are used to create DNS peering zones in the Vertex tenant project VPC, enabling resolution of records within the specified domain hosted in the target network's Cloud DNS.

EnvVar

Represents an environment variable present in a Container or Python Module.

EnvVarDict

Represents an environment variable present in a Container or Python Module.

EvalCase

A comprehensive representation of a GenAI interaction for evaluation.

EvalCaseDict

A comprehensive representation of a GenAI interaction for evaluation.

EvalCaseMetricResult

Evaluation result for a single evaluation case for a single metric.

EvalCaseMetricResultDict

Evaluation result for a single evaluation case for a single metric.

EvalCaseResult

Eval result for a single evaluation case.

EvalCaseResultDict

Eval result for a single evaluation case.

EvalRunInferenceConfig

Optional parameters for inference.

EvalRunInferenceConfigDict

Optional parameters for inference.

EvaluateDatasetConfig

Config for evaluate instances.

EvaluateDatasetConfigDict

Config for evaluate instances.

EvaluateDatasetOperation

Operation for batch dataset evaluation.

EvaluateDatasetOperationDict

Operation for batch dataset evaluation.

EvaluateDatasetRequestParameters

Parameters for batch dataset evaluation.

EvaluateDatasetRequestParametersDict

Parameters for batch dataset evaluation.

EvaluateInstancesConfig

Config for evaluate instances.

EvaluateInstancesConfigDict

Config for evaluate instances.

EvaluateInstancesResponse

Result of evaluating an LLM metric.

EvaluateInstancesResponseDict

Result of evaluating an LLM metric.

EvaluateMethodConfig

Optional parameters for the evaluate method.

EvaluateMethodConfigDict

Optional parameters for the evaluate method.

EvaluationDataset

The dataset used for evaluation.

EvaluationDatasetDict

The dataset used for evaluation.

EvaluationInstance

A single instance to be evaluated.

EvaluationInstanceDict

A single instance to be evaluated.

EvaluationItem

EvaluationItem is a single evaluation request or result.

The content of an EvaluationItem is immutable - it cannot be updated once created. EvaluationItems can be deleted when no longer needed.

EvaluationItemDict

EvaluationItem is a single evaluation request or result.

The content of an EvaluationItem is immutable - it cannot be updated once created. EvaluationItems can be deleted when no longer needed.

EvaluationItemRequest

Single evaluation request.

EvaluationItemRequestDict

Single evaluation request.

EvaluationItemResult

Represents the result of an evaluation item.

EvaluationItemResultDict

Represents the result of an evaluation item.

EvaluationItemType

The type of the EvaluationItem.

EvaluationPrompt

Represents the prompt to be evaluated.

EvaluationPromptDict

Represents the prompt to be evaluated.

EvaluationResult

Result of an evaluation run for an evaluation dataset.

EvaluationResultDict

Result of an evaluation run for an evaluation dataset.

EvaluationRun

Represents an evaluation run.

EvaluationRunAgentConfig

This field is experimental and may change in future versions.

Agent config for an evaluation run.

EvaluationRunAgentConfigDict

This field is experimental and may change in future versions.

Agent config for an evaluation run.

EvaluationRunConfig

The evaluation configuration used for the evaluation run.

EvaluationRunConfigDict

The evaluation configuration used for the evaluation run.

EvaluationRunDataSource

Represents an evaluation run data source.

EvaluationRunDataSourceDict

Represents an evaluation run data source.

EvaluationRunDict

Represents an evaluation run.

EvaluationRunInferenceConfig

This field is experimental and may change in future versions.

Inference configuration for an evaluation run.

EvaluationRunInferenceConfigDict

This field is experimental and may change in future versions.

Inference configuration for an evaluation run.

EvaluationRunMetadata

Metadata for an evaluation run.

EvaluationRunMetadataDict

Metadata for an evaluation run.

EvaluationRunMetric

The metric used for evaluation run.

EvaluationRunMetricDict

The metric used for evaluation run.

EvaluationRunResults

Represents the results of an evaluation run.

EvaluationRunResultsDict

Represents the results of an evaluation run.

EvaluationRunState

Represents the state of an evaluation run.

EvaluationSet

Represents an evaluation set.

EvaluationSetDict

Represents an evaluation set.

Event

Represents an event in a conversation between agents and users.

It is used to store the content of the conversation as well as the actions taken by the agents, such as function calls, function responses, and intermediate natural-language responses.

EventActions

Actions are parts of events that are executed by the agent.

EventActionsDict

Actions are parts of events that are executed by the agent.

EventDict

Represents an event in a conversation between agents and users.

It is used to store the content of the conversation as well as the actions taken by the agents, such as function calls, function responses, and intermediate natural-language responses.

EventMetadata

Metadata relating to an LLM response event.

EventMetadataDict

Metadata relating to an LLM response event.

ExactMatchInput

Exact match input.

ExactMatchInputDict

Exact match input.

ExactMatchInstance

Exact match instance.

ExactMatchInstanceDict

Exact match instance.

ExactMatchMetricValue

Exact match metric value for an instance.

ExactMatchMetricValueDict

Exact match metric value for an instance.

ExactMatchResults

Results for exact match metric.

ExactMatchResultsDict

Results for exact match metric.

ExactMatchSpec

Spec for exact match metric.

ExactMatchSpecDict

Spec for exact match metric.

ExecuteCodeAgentEngineSandboxConfig

Config for executing code in an Agent Engine sandbox.

ExecuteCodeAgentEngineSandboxConfigDict

Config for executing code in an Agent Engine sandbox.

ExecuteSandboxEnvironmentResponse

The response for executing a sandbox environment.

ExecuteSandboxEnvironmentResponseDict

The response for executing a sandbox environment.

GcsSource

Cloud Storage source that holds the dataset.

Currently only one Cloud Storage file path is supported.

GcsSourceDict

Cloud Storage source that holds the dataset.

Currently only one Cloud Storage file path is supported.

GeminiExample

Represents a Gemini example.

GeminiExampleDict

Represents a Gemini example.

GeminiRequestReadConfig

Represents the config for reading Gemini requests.

GeminiRequestReadConfigDict

Represents the config for reading Gemini requests.

GeminiTemplateConfig

Represents a Gemini template config.

GeminiTemplateConfigDict

Represents a Gemini template config.

GenerateAgentEngineMemoriesConfig

Config for generating memories.

GenerateAgentEngineMemoriesConfigDict

Config for generating memories.

GenerateInstanceRubricsResponse

Response for generating rubrics.

GenerateInstanceRubricsResponseDict

Response for generating rubrics.

GenerateMemoriesRequestDirectContentsSource

The direct contents source for generating memories.

GenerateMemoriesRequestDirectContentsSourceDict

The direct contents source for generating memories.

GenerateMemoriesRequestDirectContentsSourceEvent

The direct contents source event for generating memories.

GenerateMemoriesRequestDirectContentsSourceEventDict

The direct contents source event for generating memories.

GenerateMemoriesRequestDirectMemoriesSource

The direct memories source for generating memories.

GenerateMemoriesRequestDirectMemoriesSourceDict

The direct memories source for generating memories.

GenerateMemoriesRequestDirectMemoriesSourceDirectMemory

A direct memory to upload to Memory Bank.

GenerateMemoriesRequestDirectMemoriesSourceDirectMemoryDict

A direct memory to upload to Memory Bank.

GenerateMemoriesRequestVertexSessionSource

The vertex session source for generating memories.

GenerateMemoriesRequestVertexSessionSourceDict

The vertex session source for generating memories.

GenerateMemoriesResponse

The response for generating memories.

GenerateMemoriesResponseDict

The response for generating memories.

GenerateMemoriesResponseGeneratedMemory

A memory that was generated.

GenerateMemoriesResponseGeneratedMemoryAction

The action to take.

GenerateMemoriesResponseGeneratedMemoryDict

A memory that was generated.

GetAgentEngineConfig

Config for getting an agent engine.

GetAgentEngineConfigDict

Config for getting an agent engine.

GetAgentEngineMemoryConfig

Config for getting an Agent Engine Memory.

GetAgentEngineMemoryConfigDict

Config for getting an Agent Engine Memory.

GetAgentEngineMemoryRevisionConfig

Config for getting an Agent Engine Memory Revision.

GetAgentEngineMemoryRevisionConfigDict

Config for getting an Agent Engine Memory Revision.

GetAgentEngineOperationConfig

Config for getting an agent engine operation.

GetAgentEngineOperationConfigDict

Config for getting an agent engine operation.

GetAgentEngineSandboxConfig

Config for getting an Agent Engine Sandbox.

GetAgentEngineSandboxConfigDict

Config for getting an Agent Engine Sandbox.

GetAgentEngineSessionConfig

Config for getting an Agent Engine Session.

GetAgentEngineSessionConfigDict

Config for getting an Agent Engine Session.

GetDatasetOperationConfig

Config for getting a dataset operation.

GetDatasetOperationConfigDict

Config for getting a dataset operation.

GetEvaluationItemConfig

Config for get evaluation item.

GetEvaluationItemConfigDict

Config for get evaluation item.

GetEvaluationRunConfig

Config for get evaluation run.

GetEvaluationRunConfigDict

Config for get evaluation run.

GetEvaluationSetConfig

Config for get evaluation set.

GetEvaluationSetConfigDict

Config for get evaluation set.

GetMultimodalDatasetOperationConfig

Config for getting a multimodal dataset operation.

GetMultimodalDatasetOperationConfigDict

Config for getting a multimodal dataset operation.

GetPromptConfig

Config for getting a prompt.

GetPromptConfigDict

Config for getting a prompt.

IdentityType

The identity type to use for the Reasoning Engine. If not specified, the service_account field will be used if set; otherwise, the default Vertex AI Reasoning Engine Service Agent in the project will be used.

Importance

Importance level of the rubric.

IntermediateExtractedMemory

An extracted memory that is the intermediate result before consolidation.

IntermediateExtractedMemoryDict

An extracted memory that is the intermediate result before consolidation.

JobState

Output only. The detailed state of the job.

LLMBasedMetricSpec

Specification for an LLM-based metric.

LLMBasedMetricSpecDict

Specification for an LLM-based metric.

LLMMetric

A metric that uses LLM-as-a-judge for evaluation.

Language

The coding language supported in this environment.

ListAgentEngineConfig

Config for listing agent engines.

ListAgentEngineConfigDict

Config for listing agent engines.

ListAgentEngineMemoryConfig

Config for listing agent engine memories.

ListAgentEngineMemoryConfigDict

Config for listing agent engine memories.

ListAgentEngineMemoryRevisionsConfig

Config for listing Agent Engine memory revisions.

ListAgentEngineMemoryRevisionsConfigDict

Config for listing Agent Engine memory revisions.

ListAgentEngineMemoryRevisionsResponse

Response for listing agent engine memory revisions.

ListAgentEngineMemoryRevisionsResponseDict

Response for listing agent engine memory revisions.

ListAgentEngineSandboxesConfig

Config for listing agent engine sandboxes.

ListAgentEngineSandboxesConfigDict

Config for listing agent engine sandboxes.

ListAgentEngineSandboxesResponse

Response for listing agent engine sandboxes.

ListAgentEngineSandboxesResponseDict

Response for listing agent engine sandboxes.

ListAgentEngineSessionEventsConfig

Config for listing agent engine session events.

ListAgentEngineSessionEventsConfigDict

Config for listing agent engine session events.

ListAgentEngineSessionEventsResponse

Response for listing agent engine session events.

ListAgentEngineSessionEventsResponseDict

Response for listing agent engine session events.

ListAgentEngineSessionsConfig

Config for listing agent engine sessions.

ListAgentEngineSessionsConfigDict

Config for listing agent engine sessions.

ListDatasetVersionsResponse

Response for listing prompt dataset versions.

ListDatasetVersionsResponseDict

Response for listing prompt dataset versions.

ListDatasetsResponse

Response for listing prompt datasets.

ListDatasetsResponseDict

Response for listing prompt datasets.

ListMultimodalDatasetsConfig

Config for listing multimodal datasets.

ListMultimodalDatasetsConfigDict

Config for listing multimodal datasets.

ListMultimodalDatasetsResponse

Response for listing multimodal datasets.

ListMultimodalDatasetsResponseDict

Response for listing multimodal datasets.

ListPromptsConfig

Config for listing prompt datasets and dataset versions.

ListPromptsConfigDict

Config for listing prompt datasets and dataset versions.

ListReasoningEnginesMemoriesResponse

Response for listing agent engine memories.

ListReasoningEnginesMemoriesResponseDict

Response for listing agent engine memories.

ListReasoningEnginesResponse

Response for listing agent engines.

ListReasoningEnginesResponseDict

Response for listing agent engines.

ListReasoningEnginesSessionsResponse

Response for listing agent engine sessions.

ListReasoningEnginesSessionsResponseDict

Response for listing agent engine sessions.

LustreMount

Represents a mount configuration for Lustre file system.

LustreMountDict

Represents a mount configuration for Lustre file system.

MachineConfig

The machine config of the code execution environment.

MachineSpec

Specification of a single machine.

MachineSpecDict

Specification of a single machine.

ManagedTopicEnum

The managed topic.

MapInstance

Instance data specified as a map.

MapInstanceDict

Instance data specified as a map.

Memory

A memory.

MemoryBankCustomizationConfig

Configuration for organizing memories for a particular scope.

MemoryBankCustomizationConfigDict

Configuration for organizing memories for a particular scope.

MemoryBankCustomizationConfigGenerateMemoriesExample

An example of how to generate memories for a particular scope.

MemoryBankCustomizationConfigGenerateMemoriesExampleConversationSource

A conversation source for the example. This is similar to DirectContentsSource.

MemoryBankCustomizationConfigGenerateMemoriesExampleConversationSourceDict

A conversation source for the example. This is similar to DirectContentsSource.

MemoryBankCustomizationConfigGenerateMemoriesExampleConversationSourceEvent

The conversation source event for generating memories.

MemoryBankCustomizationConfigGenerateMemoriesExampleConversationSourceEventDict

The conversation source event for generating memories.

MemoryBankCustomizationConfigGenerateMemoriesExampleDict

An example of how to generate memories for a particular scope.

MemoryBankCustomizationConfigGenerateMemoriesExampleGeneratedMemory

A memory generated by the operation.

MemoryBankCustomizationConfigGenerateMemoriesExampleGeneratedMemoryDict

A memory generated by the operation.

MemoryBankCustomizationConfigMemoryTopic

A topic of information that should be extracted from conversations and stored as memories.

MemoryBankCustomizationConfigMemoryTopicCustomMemoryTopic

A custom memory topic defined by the developer.

MemoryBankCustomizationConfigMemoryTopicCustomMemoryTopicDict

A custom memory topic defined by the developer.

MemoryBankCustomizationConfigMemoryTopicDict

A topic of information that should be extracted from conversations and stored as memories.

MemoryBankCustomizationConfigMemoryTopicManagedMemoryTopic

A managed memory topic defined by the system.

MemoryBankCustomizationConfigMemoryTopicManagedMemoryTopicDict

A managed memory topic defined by the system.

MemoryDict

A memory.

MemoryRevision

A memory revision.

MemoryRevisionDict

A memory revision.

MemoryTopicId

The topic ID for a memory.

MemoryTopicIdDict

The topic ID for a memory.

Message

Represents a single message turn in a conversation.

MessageDict

Represents a single message turn in a conversation.

Metadata

Metadata for a chunk.

MetadataDict

Metadata for a chunk.

Metric

The metric used for evaluation.

MetricDict

The metric used for evaluation.

MetricPromptBuilder

Builder class for structured LLM-based metric prompt templates.

MetricResult

Result for a single metric on a single instance.

MetricResultDict

Result for a single metric on a single instance.

MetricxResult

Spec for MetricX result - calculates the MetricX score for the given instance using the version specified in the spec.

MetricxResultDict

Spec for MetricX result - calculates the MetricX score for the given instance using the version specified in the spec.

MultimodalDataset

Represents a multimodal dataset.

MultimodalDatasetDict

Represents a multimodal dataset.

MultimodalDatasetOperation

Represents the create dataset operation.

MultimodalDatasetOperationDict

Represents the create dataset operation.

NfsMount

Represents a mount configuration for Network File System (NFS) to mount.

NfsMountDict

Represents a mount configuration for Network File System (NFS) to mount.

ObservabilityEvalCase

A single evaluation case instance for data stored in GCP Observability.

ObservabilityEvalCaseDict

A single evaluation case instance for data stored in GCP Observability.

OptimizeConfig

Config for Prompt Optimizer.

OptimizeConfigDict

Config for Prompt Optimizer.

OptimizeResponse

Response for the optimize_prompt method.

OptimizeResponseDict

Response for the optimize_prompt method.

OptimizeResponseEndpoint

Response for the optimize_prompt method.

OptimizeResponseEndpointDict

Response for the optimize_prompt method.

OptimizeTarget

The optimization target for the optimize_prompt method.

PairwiseChoice

Output only. Pairwise metric choice.

PairwiseMetricInput

Pairwise metric input.

PairwiseMetricInputDict

Pairwise metric input.

PairwiseMetricInstance

Pairwise metric instance.

PairwiseMetricInstanceDict

Pairwise metric instance.

PairwiseMetricResult

Spec for pairwise metric result.

PairwiseMetricResultDict

Spec for pairwise metric result.

ParsedResponse

Response for the optimize_prompt method.

ParsedResponseDict

Response for the optimize_prompt method.

PointwiseMetricInput

Pointwise metric input.

PointwiseMetricInputDict

Pointwise metric input.

PointwiseMetricInstance

Pointwise metric instance.

PointwiseMetricInstanceDict

Pointwise metric instance.

PointwiseMetricResult

Spec for pointwise metric result.

PointwiseMetricResultDict

Spec for pointwise metric result.

PredefinedMetricSpec

Spec for predefined metric.

PredefinedMetricSpecDict

Spec for predefined metric.

Prompt

Represents a prompt.

PromptDict

Represents a prompt.

PromptOptimizerConfig

VAPO Prompt Optimizer Config.

PromptOptimizerConfigDict

VAPO Prompt Optimizer Config.

PromptOptimizerMethod

The method for data-driven prompt optimization.

PromptRef

Reference to a prompt.

PromptRefDict

Reference to a prompt.

PromptTemplate

A prompt template for creating prompts with variables.

PromptTemplateData

Message to hold a prompt template and the values to populate the template.

PromptTemplateDataDict

Message to hold a prompt template and the values to populate the template.

PromptTemplateDict

A prompt template for creating prompts with variables.

PromptVersionRef

Reference to a prompt version.

PromptVersionRefDict

Reference to a prompt version.

PscInterfaceConfig

The PSC interface config.

PscInterfaceConfigDict

The PSC interface config.

PythonPackageSpec

The spec of a Python packaged code.

PythonPackageSpecDict

The spec of a Python packaged code.

QueryAgentEngineConfig

Config for querying agent engines.

QueryAgentEngineConfigDict

Config for querying agent engines.

QueryReasoningEngineResponse

The response for querying an agent engine.

QueryReasoningEngineResponseDict

The response for querying an agent engine.

RawOutput

Raw output.

RawOutputDict

Raw output.

ReasoningEngine

An agent engine.

ReasoningEngineContextSpec

The configuration for agent engine sub-resources to manage context.

ReasoningEngineContextSpecDict

The configuration for agent engine sub-resources to manage context.

ReasoningEngineContextSpecMemoryBankConfig

Specification for a Memory Bank.

ReasoningEngineContextSpecMemoryBankConfigDict

Specification for a Memory Bank.

ReasoningEngineContextSpecMemoryBankConfigGenerationConfig

Configuration for how to generate memories.

ReasoningEngineContextSpecMemoryBankConfigGenerationConfigDict

Configuration for how to generate memories.

ReasoningEngineContextSpecMemoryBankConfigSimilaritySearchConfig

Configuration for how to perform similarity search on memories.

ReasoningEngineContextSpecMemoryBankConfigSimilaritySearchConfigDict

Configuration for how to perform similarity search on memories.

ReasoningEngineContextSpecMemoryBankConfigTtlConfig

Configuration for automatically setting the TTL ("time-to-live") of the memories in the Memory Bank.

ReasoningEngineContextSpecMemoryBankConfigTtlConfigDict

Configuration for automatically setting the TTL ("time-to-live") of the memories in the Memory Bank.

ReasoningEngineContextSpecMemoryBankConfigTtlConfigGranularTtlConfig

Configuration for TTL of the memories in the Memory Bank based on the action that created or updated the memory.

ReasoningEngineContextSpecMemoryBankConfigTtlConfigGranularTtlConfigDict

Configuration for TTL of the memories in the Memory Bank based on the action that created or updated the memory.

ReasoningEngineDict

An agent engine.

ReasoningEngineSpec

The specification of an agent engine.

ReasoningEngineSpecDeploymentSpec

The specification of a Reasoning Engine deployment.

ReasoningEngineSpecDeploymentSpecDict

The specification of a Reasoning Engine deployment.

ReasoningEngineSpecDict

The specification of an agent engine.

ReasoningEngineSpecPackageSpec

User-provided package specification, containing pickled object and package requirements.

ReasoningEngineSpecPackageSpecDict

User-provided package specification, containing pickled object and package requirements.

ReasoningEngineSpecSourceCodeSpec

Specification for deploying from source code.

ReasoningEngineSpecSourceCodeSpecDict

Specification for deploying from source code.

ReasoningEngineSpecSourceCodeSpecInlineSource

Specifies source code provided as a byte stream.

ReasoningEngineSpecSourceCodeSpecInlineSourceDict

Specifies source code provided as a byte stream.

ReasoningEngineSpecSourceCodeSpecPythonSpec

Specification for running a Python application from source.

ReasoningEngineSpecSourceCodeSpecPythonSpecDict

Specification for running a Python application from source.

ReservationAffinity

A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity.

ReservationAffinityDict

A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity.

ResponseCandidate

Model-generated content in response to the prompt.

ResponseCandidateDict

Model-generated content in response to the prompt.

ResponseCandidateResult

Aggregated metric results for a single response candidate of an EvalCase.

ResponseCandidateResultDict

Aggregated metric results for a single response candidate of an EvalCase.

RestoreVersionConfig

Config for restoring a prompt version.

RestoreVersionConfigDict

Config for restoring a prompt version.

RestoreVersionOperation

Represents the restore version operation.

RestoreVersionOperationDict

Represents the restore version operation.

RetrieveAgentEngineMemoriesConfig

Config for retrieving memories.

RetrieveAgentEngineMemoriesConfigDict

Config for retrieving memories.

RetrieveMemoriesRequestSimilaritySearchParams

The parameters for semantic similarity search based retrieval.

RetrieveMemoriesRequestSimilaritySearchParamsDict

The parameters for semantic similarity search based retrieval.

RetrieveMemoriesRequestSimpleRetrievalParams

The parameters for simple (non-similarity search) retrieval.

RetrieveMemoriesRequestSimpleRetrievalParamsDict

The parameters for simple (non-similarity search) retrieval.

RetrieveMemoriesResponse

The response for retrieving memories.

RetrieveMemoriesResponseDict

The response for retrieving memories.

RetrieveMemoriesResponseRetrievedMemory

A retrieved memory.

RetrieveMemoriesResponseRetrievedMemoryDict

A retrieved memory.

RollbackAgentEngineMemoryConfig

Config for rolling back a memory.

RollbackAgentEngineMemoryConfigDict

Config for rolling back a memory.

RougeInput

Rouge input.

RougeInputDict

Rouge input.

RougeInstance

Rouge instance.

RougeInstanceDict

Rouge instance.

RougeMetricValue

Rouge metric value for an instance.

RougeMetricValueDict

Rouge metric value for an instance.

RougeResults

Results for rouge metric.

RougeResultsDict

Results for rouge metric.

Rubric

Message representing a single testable criterion for evaluation.

One input prompt could have multiple rubrics.

RubricBasedMetricInput

Input for a rubric-based metric.

RubricBasedMetricInputDict

Input for a rubric-based metric.

RubricBasedMetricInstance

Defines an instance for Rubric-based metrics, allowing various input formats.

RubricBasedMetricInstanceDict

Defines an instance for Rubric-based metrics, allowing various input formats.

RubricBasedMetricResult

Result for a rubric-based metric.

RubricBasedMetricResultDict

Result for a rubric-based metric.

RubricBasedMetricSpec

Specification for a metric that is based on rubrics.

RubricBasedMetricSpecDict

Specification for a metric that is based on rubrics.

RubricContent

Content of the rubric, defining the testable criteria.

RubricContentDict

Content of the rubric, defining the testable criteria.

RubricContentProperty

Defines criteria based on a specific property.

RubricContentPropertyDict

Defines criteria based on a specific property.

RubricContentType

Specifies the type of rubric content to generate.

RubricDict

Message representing a single testable criterion for evaluation.

One input prompt could have multiple rubrics.

RubricEnhancedContents

Rubric-enhanced contents for evaluation.

RubricEnhancedContentsDict

Rubric-enhanced contents for evaluation.

RubricGenerationConfig

Config for generating rubrics.

RubricGenerationConfigDict

Config for generating rubrics.

RubricGenerationSpec

Spec for generating rubrics.

RubricGenerationSpecDict

Spec for generating rubrics.

RubricGroup

A group of rubrics, used for grouping rubrics based on a metric or a version.

RubricGroupDict

A group of rubrics, used for grouping rubrics based on a metric or a version.

RubricVerdict

Represents the verdict of an evaluation against a single rubric.

RubricVerdictDict

Represents the verdict of an evaluation against a single rubric.

SamplingConfig

Sampling config for a BigQuery request set.

SamplingConfigDict

Sampling config for a BigQuery request set.

SamplingMethod

Represents the sampling method for a BigQuery request set.

SandboxEnvironment

A sandbox environment.

SandboxEnvironmentConnectionInfo

The connection information of the SandboxEnvironment.

SandboxEnvironmentConnectionInfoDict

The connection information of the SandboxEnvironment.

SandboxEnvironmentDict

A sandbox environment.

SandboxEnvironmentSpec

The specification of a sandbox environment.

SandboxEnvironmentSpecCodeExecutionEnvironment

The code execution environment with customized settings.

SandboxEnvironmentSpecCodeExecutionEnvironmentDict

The code execution environment with customized settings.

SandboxEnvironmentSpecComputerUseEnvironment

The computer use environment with customized settings.

SandboxEnvironmentSpecComputerUseEnvironmentDict

The computer use environment with customized settings.

SandboxEnvironmentSpecDict

The specification of a sandbox environment.

SavedQuery

A SavedQuery is a view of the dataset. It references a subset of annotations by problem type and filters.

SavedQueryDict

A SavedQuery is a view of the dataset. It references a subset of annotations by problem type and filters.

Scheduling

All parameters related to queuing and scheduling of custom jobs.

SchedulingDict

All parameters related to queuing and scheduling of custom jobs.

SchemaPredictParamsGroundingConfig

The configuration for grounding checking.

SchemaPredictParamsGroundingConfigDict

The configuration for grounding checking.

SchemaPredictParamsGroundingConfigSourceEntry

Single source entry for the grounding checking.

SchemaPredictParamsGroundingConfigSourceEntryDict

Single source entry for the grounding checking.

SchemaPromptApiSchema

The API schema of a prompt.

SchemaPromptApiSchemaDict

The API schema of a prompt.

SchemaPromptInstancePromptExecution

A prompt instance's parameter set that contains a set of variable values.

SchemaPromptInstancePromptExecutionDict

A prompt instance's parameter set that contains a set of variable values.

SchemaPromptInstanceVariableValue

Represents a prompt instance variable.

SchemaPromptInstanceVariableValueDict

Represents a prompt instance variable.

SchemaPromptSpecMultimodalPrompt

Prompt variation that embeds preambles into the prompt string.

SchemaPromptSpecMultimodalPromptDict

Prompt variation that embeds preambles into the prompt string.

SchemaPromptSpecPartList

Represents a prompt spec part list.

SchemaPromptSpecPartListDict

Represents a prompt spec part list.

SchemaPromptSpecPromptMessage

Represents a prompt message.

SchemaPromptSpecPromptMessageDict

Represents a prompt message.

SchemaPromptSpecReferenceSentencePair

A pair of sentences used as reference in source and target languages.

SchemaPromptSpecReferenceSentencePairDict

A pair of sentences used as reference in source and target languages.

SchemaPromptSpecReferenceSentencePairList

A list of reference sentence pairs.

SchemaPromptSpecReferenceSentencePairListDict

A list of reference sentence pairs.

SchemaPromptSpecStructuredPrompt

Represents a structured prompt.

SchemaPromptSpecStructuredPromptDict

Represents a structured prompt.

SchemaPromptSpecTranslationExample

The translation example that contains reference sentences from various sources.

SchemaPromptSpecTranslationExampleDict

The translation example that contains reference sentences from various sources.

SchemaPromptSpecTranslationFileInputSource

Represents a translation file input source.

SchemaPromptSpecTranslationFileInputSourceDict

Represents a translation file input source.

SchemaPromptSpecTranslationGcsInputSource

Represents a translation Cloud Storage input source.

SchemaPromptSpecTranslationGcsInputSourceDict

Represents a translation Cloud Storage input source.

SchemaPromptSpecTranslationOption

Optional settings for translation prompt.

SchemaPromptSpecTranslationOptionDict

Optional settings for translation prompt.

SchemaPromptSpecTranslationPrompt

Prompt variation for Translation use case.

SchemaPromptSpecTranslationPromptDict

Prompt variation for Translation use case.

SchemaPromptSpecTranslationSentenceFileInput

Represents a translation sentence file input.

SchemaPromptSpecTranslationSentenceFileInputDict

Represents a translation sentence file input.

SchemaTablesDatasetMetadata

Represents the metadata schema for multimodal dataset metadata.

SchemaTablesDatasetMetadataBigQuerySource

Represents the BigQuery source for multimodal dataset metadata.

SchemaTablesDatasetMetadataBigQuerySourceDict

Represents the BigQuery source for multimodal dataset metadata.

SchemaTablesDatasetMetadataDict

Represents the metadata schema for multimodal dataset metadata.

SchemaTablesDatasetMetadataInputConfig

Represents the input config for multimodal dataset metadata.

SchemaTablesDatasetMetadataInputConfigDict

Represents the input config for multimodal dataset metadata.

SchemaTextPromptDatasetMetadata

Represents the text prompt dataset metadata.

SchemaTextPromptDatasetMetadataDict

Represents the text prompt dataset metadata.

SecretEnvVar

Represents an environment variable where the value is a secret in Cloud Secret Manager.

SecretEnvVarDict

Represents an environment variable where the value is a secret in Cloud Secret Manager.

SecretRef

Reference to a secret stored in the Cloud Secret Manager that will provide the value for this environment variable.

SecretRefDict

Reference to a secret stored in the Cloud Secret Manager that will provide the value for this environment variable.

Session

A session.

SessionDict

A session.

SessionEvent

A session event.

SessionEventDict

A session event.

State

Output only. The runtime state of the SandboxEnvironment.

Strategy

This determines which type of scheduling strategy to use.

SummaryMetric

Represents a summary metric for an evaluation run.

SummaryMetricDict

Represents a summary metric for an evaluation run.

ToolCallValidInput

Tool call valid input.

ToolCallValidInputDict

Tool call valid input.

ToolCallValidInstance

Tool call valid instance.

ToolCallValidInstanceDict

Tool call valid instance.

ToolCallValidMetricValue

Tool call valid metric value for an instance.

ToolCallValidMetricValueDict

Tool call valid metric value for an instance.

ToolCallValidResults

Results for tool call valid metric.

ToolCallValidResultsDict

Results for tool call valid metric.

ToolCallValidSpec

Spec for tool call valid metric.

ToolCallValidSpecDict

Spec for tool call valid metric.

ToolNameMatchInput

Tool name match input.

ToolNameMatchInputDict

Tool name match input.

ToolNameMatchInstance

Tool name match instance.

ToolNameMatchInstanceDict

Tool name match instance.

ToolNameMatchMetricValue

Tool name match metric value for an instance.

ToolNameMatchMetricValueDict

Tool name match metric value for an instance.

ToolNameMatchResults

Results for tool name match metric.

ToolNameMatchResultsDict

Results for tool name match metric.

ToolNameMatchSpec

Spec for tool name match metric.

ToolNameMatchSpecDict

Spec for tool name match metric.

ToolParameterKVMatchInput

Tool parameter kv match input.

ToolParameterKVMatchInputDict

Tool parameter kv match input.

ToolParameterKVMatchInstance

Tool parameter kv match instance.

ToolParameterKVMatchInstanceDict

Tool parameter kv match instance.

ToolParameterKVMatchMetricValue

Tool parameter key value match metric value for an instance.

ToolParameterKVMatchMetricValueDict

Tool parameter key value match metric value for an instance.

ToolParameterKVMatchResults

Results for tool parameter key value match metric.

ToolParameterKVMatchResultsDict

Results for tool parameter key value match metric.

ToolParameterKVMatchSpec

Spec for tool parameter kv match metric.

ToolParameterKVMatchSpecDict

Spec for tool parameter kv match metric.

ToolParameterKeyMatchInput

Tool parameter key match input.

ToolParameterKeyMatchInputDict

Tool parameter key match input.

ToolParameterKeyMatchInstance

Tool parameter key match instance.

ToolParameterKeyMatchInstanceDict

Tool parameter key match instance.

ToolParameterKeyMatchMetricValue

Tool parameter key match metric value for an instance.

ToolParameterKeyMatchMetricValueDict

Tool parameter key match metric value for an instance.

ToolParameterKeyMatchResults

Results for tool parameter key match metric.

ToolParameterKeyMatchResultsDict

Results for tool parameter key match metric.

ToolParameterKeyMatchSpec

Spec for tool parameter key match metric.

ToolParameterKeyMatchSpecDict

Spec for tool parameter key match metric.

TuningResourceUsageAssessmentConfig

Config for tuning resource usage assessment.

TuningResourceUsageAssessmentConfigDict

Config for tuning resource usage assessment.

TuningResourceUsageAssessmentResult

Result of tuning resource usage assessment.

TuningResourceUsageAssessmentResultDict

Result of tuning resource usage assessment.

TuningValidationAssessmentConfig

Config for tuning validation assessment.

TuningValidationAssessmentConfigDict

Config for tuning validation assessment.

TuningValidationAssessmentResult

The result of a tuning validation assessment.

TuningValidationAssessmentResultDict

The result of a tuning validation assessment.

Type

Specifies the reservation affinity type.

UnifiedMetric

The unified metric used for evaluation.

UnifiedMetricDict

The unified metric used for evaluation.

UpdateAgentEngineConfig

Config for updating agent engine.

UpdateAgentEngineConfigDict

Config for updating agent engine.

UpdateAgentEngineMemoryConfig

Config for updating agent engine memory.

UpdateAgentEngineMemoryConfigDict

Config for updating agent engine memory.

UpdateAgentEngineSessionConfig

Config for updating agent engine session.

UpdateAgentEngineSessionConfigDict

Config for updating agent engine session.

UpdateDatasetConfig

Config for updating a dataset resource to store prompts.

UpdateDatasetConfigDict

Config for updating a dataset resource to store prompts.

UpdateMultimodalDatasetConfig

Config for updating a multimodal dataset resource.

UpdateMultimodalDatasetConfigDict

Config for updating a multimodal dataset resource.

VertexBaseConfig

Base config for Vertex AI.

VertexBaseConfigDict

Base config for Vertex AI.

WinRateStats

Statistics for win rates for a single metric.

WinRateStatsDict

Statistics for win rates for a single metric.

WorkerPoolSpec

Represents the spec of a worker pool in a job.

WorkerPoolSpecDict

Represents the spec of a worker pool in a job.

AG2Agent

An AG2 Agent.

AdkApp

An ADK Application.

AgentEngine

Represents a Vertex AI Agent Engine resource.

AsyncQueryable

Protocol for Agent Engines that can be queried asynchronously.

AsyncStreamQueryable

Protocol for Agent Engines that can stream responses asynchronously.

Cloneable

Protocol for Agent Engines that can be cloned.

LangchainAgent

A Langchain Agent.

See https://cloud.google.com/vertex-ai/generative-ai/docs/reasoning-engine/develop for details.

LanggraphAgent

A LangGraph Agent.

ModuleAgent

Agent that is defined by a module and an agent name.

This agent is instantiated by importing a module and instantiating an agent from that module. It also allows registering operations that are defined in the agent.

OperationRegistrable

Protocol for agents that have registered operations.

Queryable

Protocol for Agent Engines that can be queried.

StreamQueryable

Protocol for Agent Engines that can stream responses.

CustomMetric

The custom evaluation metric.

A fully-customized CustomMetric that can be used to evaluate a single model by defining a metric function for a computation-based metric. The CustomMetric is computed on the client side by the user-defined metric function in the SDK, not by the Vertex Gen AI Evaluation Service.

Attributes:
    name: The name of the metric.
    metric_function: The user-defined evaluation function that computes a metric score. It must take a dataset row dictionary as input and return a per-instance metric result as a dictionary, with the metric score mapped to the name of the CustomMetric as the key.
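
A minimal sketch of such a metric function (the metric name is hypothetical, and the sketch assumes the dataset rows contain a "response" column):

```
from vertexai.evaluation import CustomMetric

def word_count(instance: dict) -> dict:
    # `instance` is one dataset row; the returned score is keyed by the metric name.
    return {"word_count": len(instance["response"].split())}

# The metric can then be passed to an evaluation, e.g. EvalTask(..., metrics=[word_count_metric]).
word_count_metric = CustomMetric(name="word_count", metric_function=word_count)
```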

EvalResult

Evaluation result.

EvalTask

A class representing an EvalTask.

An evaluation task assesses the ability of a Gen AI model, agent, or application to perform a specific task in response to prompts. Each evaluation task includes an evaluation dataset (a set of test cases) and a set of metrics for assessment. These tasks provide the framework for running evaluations in a standardized and repeatable way, allowing for comparative assessment with varying run-specific parameters.

Dataset Details:

Default dataset column names:
    * prompt_column_name: "prompt"
    * reference_column_name: "reference"
    * response_column_name: "response"
    * baseline_model_response_column_name: "baseline_model_response"
    * rubrics_column_name: "rubrics"


Requirements for different use cases:
  * Bring-your-own-response (BYOR): You already have the data that you
      want to evaluate stored in the dataset. Response column name can be
      customized by providing `response_column_name` parameter, or in the
      `metric_column_mapping`. For BYOR pairwise evaluation, the baseline
      model response column name can be customized by providing
      `baseline_model_response_column_name` parameter, or
      in the `metric_column_mapping`. If the `response` column or
      `baseline_model_response` column is present while the
      corresponding model is specified, an error will be raised.

  * Perform model/agent inference without a prompt template: You have a dataset
      containing the input prompts to the model/agent and want to perform
      inference before evaluation. A column named `prompt` is required
      in the evaluation dataset and is used directly as input to the model/agent.

  * Perform model/agent inference with a prompt template: You have a dataset
      containing the input variables to the prompt template and want to
      assemble the prompts for inference. Evaluation dataset
      must contain column names corresponding to the variable names in
      the prompt template. For example, if prompt template is
      "Instruction: {instruction}, context: {context}", the dataset must
      contain `instruction` and `context` columns.

Metrics Details:

The supported metrics descriptions, rating rubrics, and the required
input variables can be found on the Vertex AI public documentation page.
[Evaluation methods and metrics](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval).

Usage Examples:

1. To perform bring-your-own-response (BYOR) evaluation, provide the model
responses in the `response` column in the dataset. If a pairwise metric is
used for BYOR evaluation, provide the baseline model responses in the
`baseline_model_response` column.

  ```
  import pandas as pd
  from vertexai.evaluation import EvalTask, MetricPromptTemplateExamples

  eval_dataset = pd.DataFrame({
      "prompt": [...],
      "reference": [...],
      "response": [...],
      "baseline_model_response": [...],
  })
  eval_task = EvalTask(
      dataset=eval_dataset,
      metrics=[
          "bleu",
          "rouge_l_sum",
          MetricPromptTemplateExamples.Pointwise.FLUENCY,
          MetricPromptTemplateExamples.Pairwise.SAFETY,
      ],
      experiment="my-experiment",
  )
  eval_result = eval_task.evaluate(experiment_run_name="eval-experiment-run")
  ```

2. To perform evaluation with Gemini model inference, specify the `model`
parameter with a `GenerativeModel` instance.  The input column name to the
model is `prompt` and must be present in the dataset.

  ```
  eval_dataset = pd.DataFrame({
      "reference": [...],
      "prompt": [...],
  })
  result = EvalTask(
      dataset=eval_dataset,
      metrics=["exact_match", "bleu", "rouge_1", "rouge_l_sum"],
      experiment="my-experiment",
  ).evaluate(
      model=GenerativeModel("gemini-1.5-pro"),
      experiment_run_name="gemini-eval-run"
  )
  ```

3. If a `prompt_template` is specified, the `prompt` column is not required.
Prompts can be assembled from the evaluation dataset, and all prompt
template variable names must be present in the dataset columns.
  ```
  eval_dataset = pd.DataFrame({
      "context": [...],
      "instruction": [...],
  })
  result = EvalTask(
      dataset=eval_dataset,
      metrics=[MetricPromptTemplateExamples.Pointwise.SUMMARIZATION_QUALITY],
  ).evaluate(
      model=GenerativeModel("gemini-1.5-pro"),
      prompt_template="{instruction}. Article: {context}. Summary:",
  )
  ```

4. To perform evaluation with custom model inference, specify the `model`
parameter with a custom inference function. The input column name to the
custom inference function is `prompt` and must be present in the dataset.

  ```
  from openai import OpenAI
  client = OpenAI()
  def custom_model_fn(input: str) -> str:
    response = client.chat.completions.create(
      model="gpt-3.5-turbo",
      messages=[
        {"role": "user", "content": input}
      ]
    )
    return response.choices[0].message.content

  eval_dataset = pd.DataFrame({
      "prompt": [...],
      "reference": [...],
  })
  result = EvalTask(
      dataset=eval_dataset,
      metrics=[MetricPromptTemplateExamples.Pointwise.SAFETY],
      experiment="my-experiment",
  ).evaluate(
      model=custom_model_fn,
      experiment_run_name="gpt-eval-run"
  )
  ```

5. To perform pairwise metric evaluation with model inference step, specify
the `baseline_model` input to a `PairwiseMetric` instance and the candidate
`model` input to the `EvalTask.evaluate()` function. The input column name
to both models is `prompt` and must be present in the dataset.

  ```
  baseline_model = GenerativeModel("gemini-1.0-pro")
  candidate_model = GenerativeModel("gemini-1.5-pro")

  pairwise_groundedness = PairwiseMetric(
      metric_prompt_template=MetricPromptTemplateExamples.get_prompt_template(
          "pairwise_groundedness"
      ),
      baseline_model=baseline_model,
  )
  eval_dataset = pd.DataFrame({
        "prompt"  : [...],
  })
  result = EvalTask(
      dataset=eval_dataset,
      metrics=[pairwise_groundedness],
      experiment="my-pairwise-experiment",
  ).evaluate(
      model=candidate_model,
      experiment_run_name="gemini-pairwise-eval-run",
  )
  ```

MetricPromptTemplateExamples

Examples of metric prompt templates for model-based evaluation.

Pairwise

Example PairwiseMetric instances.

Pointwise

Example PointwiseMetric instances.

PairwiseMetric

A Model-based Pairwise Metric.

A model-based evaluation metric that compares two generative models' responses side by side, allowing users to A/B test their generative models and determine which model performs better.

For more details on when to use pairwise metrics, see [Evaluation methods and metrics](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval).

Result Details:

* In `EvalResult.summary_metrics`, win rates for both the baseline and
candidate model are computed. The win rate is the proportion of one model's
wins to the total number of attempts, expressed as a decimal value between
0 and 1.

* In `EvalResult.metrics_table`, a pairwise metric produces two
evaluation results per dataset row:
    * `pairwise_choice`: The choice shows whether the candidate model or
      the baseline model performs better, or if they are equally good.
    * `explanation`: The rationale behind each verdict using
      chain-of-thought reasoning. The explanation helps users scrutinize
      the judgment and builds appropriate trust in the decisions.

See [documentation
page](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#understand-results)
for more details on understanding the metric results.

Usage Examples:

```
baseline_model = GenerativeModel("gemini-1.0-pro")
candidate_model = GenerativeModel("gemini-1.5-pro")

pairwise_groundedness = PairwiseMetric(
    metric_prompt_template=MetricPromptTemplateExamples.get_prompt_template(
        "pairwise_groundedness"
    ),
    baseline_model=baseline_model,
)
eval_dataset = pd.DataFrame({
      "prompt"  : [...],
})
pairwise_task = EvalTask(
    dataset=eval_dataset,
    metrics=[pairwise_groundedness],
    experiment="my-pairwise-experiment",
)
pairwise_result = pairwise_task.evaluate(
    model=candidate_model,
    experiment_run_name="gemini-pairwise-eval-run",
)
```

PairwiseMetricPromptTemplate

Pairwise metric prompt template for pairwise model-based metrics.

PointwiseMetric

A Model-based Pointwise Metric.

A model-based evaluation metric that evaluates a single generative model's response.

For more details on when to use model-based pointwise metrics, see [Evaluation methods and metrics](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval).

Usage Examples:

```
candidate_model = GenerativeModel("gemini-1.5-pro")
eval_dataset = pd.DataFrame({
    "prompt"  : [...],
})
fluency_metric = PointwiseMetric(
    metric="fluency",
    metric_prompt_template=MetricPromptTemplateExamples.get_prompt_template('fluency'),
)
pointwise_eval_task = EvalTask(
    dataset=eval_dataset,
    metrics=[
        fluency_metric,
        MetricPromptTemplateExamples.Pointwise.GROUNDEDNESS,
    ],
)
pointwise_result = pointwise_eval_task.evaluate(
    model=candidate_model,
)
```

PointwiseMetricPromptTemplate

Pointwise metric prompt template for pointwise model-based metrics.
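
As a sketch, a custom template can be built from a criteria dictionary and a
rating rubric, then passed to a `PointwiseMetric`. The constructor arguments
below (`criteria`, `rating_rubric`) are assumptions based on the class's
purpose, not confirmed signatures:

```
custom_clarity_template = PointwiseMetricPromptTemplate(
    criteria={
        "clarity": "The response is easy to follow and unambiguous.",
    },
    rating_rubric={
        "1": "The response is clear.",
        "0": "The response is unclear.",
    },
)
custom_clarity_metric = PointwiseMetric(
    metric="custom_clarity",
    metric_prompt_template=custom_clarity_template,
)
```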

PromptTemplate

A prompt template for creating prompts with variables.

The PromptTemplate class allows users to define a template string with variables represented in curly braces `{variable}`. Variable names cannot contain spaces and must start with a letter or underscore, followed by letters, digits, or underscores. These variables can be replaced with specific values using the `assemble` method, providing flexibility in generating dynamic prompts.

Usage:

```
template_str = "Hello, {name}! Today is {day}. How are you?"
prompt_template = PromptTemplate(template_str)
completed_prompt = prompt_template.assemble(name="John", day="Monday")
print(completed_prompt)
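# Output: Hello, John! Today is Monday. How are you?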
```

Rouge

The ROUGE Metric.

Calculates the recall of n-grams in the prediction as compared to the reference and returns a score ranging between 0 and 1. Supported ROUGE types are rouge1 through rouge9, rougeL, and rougeLsum.
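
A minimal BYOR-style sketch using the Rouge class; the `rouge_type` and
`use_stemmer` arguments are assumptions consistent with the supported types
listed above:

```
rouge_metric = Rouge(rouge_type="rougeLsum", use_stemmer=True)
eval_dataset = pd.DataFrame({
    "response": [...],   # candidate texts to score
    "reference": [...],  # reference texts to score against
})
result = EvalTask(
    dataset=eval_dataset,
    metrics=[rouge_metric],
).evaluate()
```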

Candidate

A response candidate generated by the model.

ChatSession

Chat session holds the chat history.

Content

The multi-part content of a message.

Usage:

```
response = model.generate_content(contents=[
    Content(role="user", parts=[Part.from_text("Why is sky blue?")])
])
```

FinishReason

The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.

FunctionCall

Function call.
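
For illustration, a function call predicted by the model can be read back from
a tools-enabled response (see the `FunctionDeclaration` example below for
`weather_tool`); the `function_calls` accessor on the candidate is assumed
here:

```
response = model.generate_content(
    "What is the weather like in Boston?",
    tools=[weather_tool],
)
function_call = response.candidates[0].function_calls[0]
print(function_call.name)        # e.g. "get_current_weather"
print(dict(function_call.args))  # e.g. {"location": "Boston, MA"}
```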

FunctionDeclaration

A representation of a function declaration.

Usage: Create function declaration and tool:

```
get_current_weather_func = generative_models.FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": [
                    "celsius",
                    "fahrenheit",
                ]
            }
        },
        "required": [
            "location"
        ]
    },
    # Optional:
    response={
        "type": "object",
        "properties": {
            "weather": {
                "type": "string",
                "description": "The weather in the city"
            },
        },
    },
)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```

Use tool in `GenerativeModel.generate_content`:

```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```

Use tool in chat:

```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

GenerationConfig

Parameters for the generation.
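
For example (the parameter values are illustrative only):

```
generation_config = GenerationConfig(
    temperature=0.2,
    top_p=0.95,
    max_output_tokens=1024,
    stop_sequences=["\n\n"],
)
model = GenerativeModel("gemini-pro", generation_config=generation_config)
# The config can also be passed per request:
response = model.generate_content("Hello", generation_config=generation_config)
```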

Modality

The modalities of the response.

ModelConfig

Config for model selection.

FeatureSelectionPreference

Options for feature selection preference.

RoutingConfig

The configuration for model router requests. Deprecated; use ModelConfig to set the routing preference instead.

The routing config is one of two nested classes:

  • AutoRoutingMode: Automated routing.
  • ManualRoutingMode: Manual routing.

Usage:

  • AutoRoutingMode:

    routing_config=generative_models.RoutingConfig(
        routing_config=generative_models.RoutingConfig.AutoRoutingMode(
            model_routing_preference=generative_models.RoutingConfig.AutoRoutingMode.ModelRoutingPreference.BALANCED,
        ),
    )
    
  • ManualRoutingMode:

    routing_config=generative_models.RoutingConfig(
        routing_config=generative_models.RoutingConfig.ManualRoutingMode(
            model_name="gemini-1.5-pro-001",
        ),
    )
    

AutoRoutingMode

When automated routing is specified, the routing will be determined by the routing model's predicted quality and the customer-provided model routing preference.

ModelRoutingPreference

The model routing preference.

ManualRoutingMode

When manual routing is set, the specified model will be used directly.

GenerationResponse

The response from the model.

GenerativeModel

Initializes GenerativeModel.

Usage:

```
model = GenerativeModel("gemini-pro")
print(model.generate_content("Hello"))
```

HarmBlockThreshold

Probability-based threshold levels for blocking.

HarmCategory

Harm categories that will block the content.

Image

The image that can be sent to a generative model.

Part

A part of a multi-part Content message.

Usage:

```
text_part = Part.from_text("Why is sky blue?")
image_part = Part.from_image(Image.load_from_file("image.jpg"))
video_part = Part.from_uri(uri="gs://.../video.mp4", mime_type="video/mp4")
function_response_part = Part.from_function_response(
    name="get_current_weather",
    response={
        "content": {"weather_there": "super nice"},
    }
)

response1 = model.generate_content([text_part, image_part])
response2 = model.generate_content(video_part)
response3 = chat.send_message(function_response_part)
```

ResponseValidationError

API documentation for ResponseValidationError class.

SafetySetting

Safety settings for blocking potentially unsafe content.
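
For example, a sketch that blocks dangerous content at a medium-or-higher
threshold (the category and threshold values are illustrative):

```
safety_settings = [
    SafetySetting(
        category=SafetySetting.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    ),
]
response = model.generate_content(
    "Hello",
    safety_settings=safety_settings,
)
```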

HarmBlockMethod

Probability vs severity.

HarmBlockThreshold

Probability-based threshold levels for blocking.

HarmCategory

Harm categories that will block the content.

Tool

A collection of functions that the model may use to generate a response.

Usage: Create tool from function declarations:

```
get_current_weather_func = generative_models.FunctionDeclaration(...)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```

Use tool in `GenerativeModel.generate_content`:

```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```

Use tool in chat:

```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

ToolConfig

Config shared for all tools provided in the request.

Usage: Create ToolConfig

```
tool_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.ANY,
        # Must match the declared function name, not the Python variable name.
        allowed_function_names=["get_current_weather"],
    )
)
```

Use ToolConfig in `GenerativeModel.generate_content`:

```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
))
```

Use ToolConfig in chat:

```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

grounding

Grounding namespace.

DynamicRetrievalConfig

Config for dynamic retrieval.

Mode

The mode of the predictor to be used in dynamic retrieval.

GoogleSearchRetrieval

Tool to retrieve public web data for grounding, powered by Google Search.

Retrieval

Defines a retrieval tool that the model can call to access external knowledge.

VertexAISearch

Retrieve from Vertex AI Search data store for grounding. See https://cloud.google.com/products/agent-builder
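
A sketch combining the grounding classes above with the `Tool` factory methods
`Tool.from_google_search_retrieval` and `Tool.from_retrieval`; the datastore
path is a placeholder:

```
# Ground responses with Google Search:
search_tool = Tool.from_google_search_retrieval(
    grounding.GoogleSearchRetrieval()
)

# Or ground responses with a Vertex AI Search datastore:
retrieval_tool = Tool.from_retrieval(
    grounding.Retrieval(
        grounding.VertexAISearch(datastore="projects/.../dataStores/my-datastore")
    )
)

response = model.generate_content(
    "Who won the latest F1 championship?",
    tools=[search_tool],
)
```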

ChatMessage

A chat message.

ChatModel

ChatModel represents a language model that is capable of chat.

Examples:

```
chat_model = ChatModel.from_pretrained("chat-bison@001")

chat = chat_model.start_chat(
    context="My name is Ned. You are my personal assistant. My favorite movies are Lord of the Rings and Hobbit.",
    examples=[
        InputOutputTextPair(
            input_text="Who do you work for?",
            output_text="I work for Ned.",
        ),
        InputOutputTextPair(
            input_text="What do I like?",
            output_text="Ned likes watching movies.",
        ),
    ],
    temperature=0.3,
)

chat.send_message("Do you know any cool events this weekend?")
```

ChatSession

ChatSession represents a chat session with a language model.

Within a chat session, the model keeps context and remembers the previous conversation.

CodeChatModel

CodeChatModel represents a model that is capable of completing code.

Examples:

```
code_chat_model = CodeChatModel.from_pretrained("codechat-bison@001")

code_chat = code_chat_model.start_chat(
    context="I'm writing a large-scale enterprise application.",
    max_output_tokens=128,
    temperature=0.2,
)

code_chat.send_message("Please help write a function to calculate the min of two numbers")
```

CodeChatSession

CodeChatSession represents a chat session with code chat language model.

Within a code chat session, the model keeps context and remembers the previous conversation.

CodeGenerationModel

Creates a LanguageModel.

This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=...) instead.

GroundingSource

API documentation for GroundingSource class.

InlineContext

InlineContext represents a grounding source using provided inline context.

inline_context (str): The content used as inline context.

VertexAISearch

VertexAISearchDatastore represents a grounding source using a Vertex AI Search datastore.

data_store_id (str): Data store ID of the Vertex AI Search datastore.

WebSearch

WebSearch represents a grounding source using public web search.

disable_attribution (bool): If set to True, skip finding claim attributions (i.e., do not generate grounding citations). Default: False.

InputOutputTextPair

InputOutputTextPair represents a pair of input and output texts.

TextEmbedding

Text embedding vector and statistics.

TextEmbeddingInput

Structural text embedding input.

TextEmbeddingModel

Creates a LanguageModel.

This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=...) instead.
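
For example (the model version is illustrative):

```
model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")
embeddings = model.get_embeddings(["What is life?"])
for embedding in embeddings:
    print(len(embedding.values))  # Length of the embedding vector.
```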

TextGenerationModel

Creates a LanguageModel.

This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=...) instead.

TextGenerationResponse

TextGenerationResponse represents a response of a language model.

text (str): The generated text.

_TunableModelMixin

Model that can be tuned with supervised fine tuning (SFT).

AutomaticFunctionCallingResponder

Responder that automatically responds to model's function calls.

CallableFunctionDeclaration

A function declaration plus a function.
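
A hedged sketch tying `CallableFunctionDeclaration` and
`AutomaticFunctionCallingResponder` together; the exact constructor arguments
and the `start_chat(responder=...)` parameter are assumptions:

```
def get_current_weather(location: str):
    # In a real application, call a weather service here.
    return {"weather": "super nice"}

get_current_weather_decl = CallableFunctionDeclaration(
    name="get_current_weather",
    function=get_current_weather,
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
)
weather_tool = Tool(function_declarations=[get_current_weather_decl])
model = GenerativeModel("gemini-pro", tools=[weather_tool])
chat = model.start_chat(
    responder=AutomaticFunctionCallingResponder(max_automatic_function_calls=1),
)
print(chat.send_message("What is the weather like in Boston?"))
```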

Candidate

A response candidate generated by the model.

ChatSession

Chat session holds the chat history.

Content

The multi-part content of a message.

Usage:

```
response = model.generate_content(contents=[
    Content(role="user", parts=[Part.from_text("Why is sky blue?")])
])
```

FinishReason

The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.

FunctionCall

Function call.

FunctionDeclaration

A representation of a function declaration.

Usage: Create function declaration and tool:

```
get_current_weather_func = generative_models.FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": [
                    "celsius",
                    "fahrenheit",
                ]
            }
        },
        "required": [
            "location"
        ]
    },
    # Optional:
    response={
        "type": "object",
        "properties": {
            "weather": {
                "type": "string",
                "description": "The weather in the city"
            },
        },
    },
)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```

Use tool in `GenerativeModel.generate_content`:

```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```

Use tool in chat:

```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

GenerationConfig

Parameters for the generation.

Modality

The modalities of the response.

ModelConfig

Config for model selection.

FeatureSelectionPreference

Options for feature selection preference.

RoutingConfig

The configuration for model router requests. Deprecated; use ModelConfig to set the routing preference instead.

The routing config is one of two nested classes:

  • AutoRoutingMode: Automated routing.
  • ManualRoutingMode: Manual routing.

Usage:

  • AutoRoutingMode:

    routing_config=generative_models.RoutingConfig(
        routing_config=generative_models.RoutingConfig.AutoRoutingMode(
            model_routing_preference=generative_models.RoutingConfig.AutoRoutingMode.ModelRoutingPreference.BALANCED,
        ),
    )
    
  • ManualRoutingMode:

    routing_config=generative_models.RoutingConfig(
        routing_config=generative_models.RoutingConfig.ManualRoutingMode(
            model_name="gemini-1.5-pro-001",
        ),
    )
    

AutoRoutingMode

When automated routing is specified, the routing will be determined by the routing model's predicted quality and the customer-provided model routing preference.

ModelRoutingPreference

The model routing preference.

ManualRoutingMode

When manual routing is set, the specified model will be used directly.

GenerationResponse

The response from the model.

GenerativeModel

Initializes GenerativeModel.

Usage:

```
model = GenerativeModel("gemini-pro")
print(model.generate_content("Hello"))
```

HarmBlockThreshold

Probability-based threshold levels for blocking.

HarmCategory

Harm categories that will block the content.

Image

The image that can be sent to a generative model.

Part

A part of a multi-part Content message.

Usage:

```
text_part = Part.from_text("Why is sky blue?")
image_part = Part.from_image(Image.load_from_file("image.jpg"))
video_part = Part.from_uri(uri="gs://.../video.mp4", mime_type="video/mp4")
function_response_part = Part.from_function_response(
    name="get_current_weather",
    response={
        "content": {"weather_there": "super nice"},
    }
)

response1 = model.generate_content([text_part, image_part])
response2 = model.generate_content(video_part)
response3 = chat.send_message(function_response_part)
```

ResponseBlockedError

API documentation for ResponseBlockedError class.

ResponseValidationError

API documentation for ResponseValidationError class.

SafetySetting

Safety settings for blocking potentially unsafe content.

HarmBlockMethod

Probability vs severity.

HarmBlockThreshold

Probability-based threshold levels for blocking.

HarmCategory

Harm categories that will block the content.

Tool

A collection of functions that the model may use to generate a response.

Usage: Create tool from function declarations:

```
get_current_weather_func = generative_models.FunctionDeclaration(...)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```

Use tool in `GenerativeModel.generate_content`:

```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```

Use tool in chat:

```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

ToolConfig

Config shared for all tools provided in the request.

Usage: Create ToolConfig

```
tool_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.ANY,
        # Must match the declared function name, not the Python variable name.
        allowed_function_names=["get_current_weather"],
    )
)
```

Use ToolConfig in `GenerativeModel.generate_content`:

```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
))
```

Use ToolConfig in chat:

```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

ChatMessage

A chat message.

CountTokensResponse

The response from a count_tokens request.

total_tokens (int): The total number of tokens counted across all instances passed to the request.

EvaluationClassificationMetric

The evaluation metric response for classification metrics.

EvaluationMetric

The evaluation metric response.

EvaluationQuestionAnsweringSpec

Spec for question answering model evaluation tasks.

EvaluationTextClassificationSpec

Spec for text classification model evaluation tasks.

EvaluationTextGenerationSpec

Spec for text generation model evaluation tasks.

EvaluationTextSummarizationSpec

Spec for text summarization model evaluation tasks.

InputOutputTextPair

InputOutputTextPair represents a pair of input and output texts.

TextEmbedding

Text embedding vector and statistics.

TextEmbeddingInput

Structural text embedding input.

TextGenerationResponse

TextGenerationResponse represents a response of a language model.

text (str): The generated text.

TuningEvaluationSpec

Specification for model evaluation to perform during tuning.
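
A sketch of passing an evaluation dataset to a tuning run; the field names are
assumptions consistent with the spec's purpose:

```
eval_spec = TuningEvaluationSpec(
    evaluation_data="gs://my-bucket/eval.jsonl",
    evaluation_interval=20,
    enable_early_stopping=True,
)
# Assumed to be supplied to a tuning call, e.g.:
# model.tune_model(..., tuning_evaluation_spec=eval_spec)
```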

A2aAgent

A class to initialize and set up an Agent-to-Agent application.

AG2Agent

An AG2 Agent.

See https://cloud.google.com/vertex-ai/generative-ai/docs/agent-engine/develop/ag2 for details.

AdkApp

An ADK Application.

LangchainAgent

A Langchain Agent.

See https://cloud.google.com/vertex-ai/generative-ai/docs/agent-engine/develop/langchain for details.

LanggraphAgent

A LangGraph Agent.

See https://cloud.google.com/vertex-ai/generative-ai/docs/agent-engine/develop/langgraph for details.

LlamaIndexQueryPipelineAgent

A LlamaIndex Query Pipeline Agent.

This agent uses a LlamaIndex query pipeline, including prompt, model, retrieval, and summarization steps. More details can be found at https://docs.llamaindex.ai/en/stable/module_guides/querying/pipeline/.

Queryable

Protocol for Reasoning Engine applications that can be queried.
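
A minimal sketch of an application satisfying the protocol, assuming the
protocol only requires a `query` method:

```
class EchoApp:
    def query(self, question: str) -> str:
        # Reasoning Engine invokes this method for each query.
        return f"You asked: {question}"
```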

ReasoningEngine

Represents a Vertex AI Reasoning Engine resource.

SourceModel

A model that is used in managed OSS supervised tuning.

Usage:

```
model = SourceModel(
    base_model="meta/llama3.1-8b", # OSS model name <publisher>/<model_name>
    custom_base_model="gs://user-bucket/custom-weights",
)
sft_tuning_job = sft.train(
    source_model=model,
    train_dataset="gs://my-bucket/train.jsonl",
    validation_dataset="gs://my-bucket/validation.jsonl",
    epochs=4,
    tuned_model_display_name="my-tuned-model",
    output_uri="gs://user-bucket/tuned-model"
)

while not sft_tuning_job.has_ended:
    time.sleep(60)
    sft_tuning_job.refresh()

tuned_model = aiplatform.Model(sft_tuning_job.tuned_model_name)
```

TuningJob

Represents a TuningJob that runs with Google owned models.

SupervisedTuningJob

Initializes class with project, location, and api_client.

ControlImageConfig

Control image config.

ControlReferenceImage

Control reference image.

This encapsulates the control reference image type.

EntityLabel

Entity label holding a text label and any associated confidence score.

GeneratedImage

Generated image.

GeneratedMask

Generated image mask.

Image

Image.

ImageCaptioningModel

Generates captions from image.

Examples:

```
model = ImageCaptioningModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)
```

ImageGenerationModel

Generates images from text prompt.

Examples:

```
model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
    # Optional:
    number_of_images=1,
    seed=0,
)
response[0].show()
response[0].save("image1.png")
```

ImageGenerationResponse

Image generation response.

ImageQnAModel

Answers questions about an image.

Examples:

```
model = ImageQnAModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)
```

ImageSegmentationModel

Segments an image.

ImageSegmentationResponse

Image Segmentation response.

ImageTextModel

Generates text from images.

Examples:

```
model = ImageTextModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")

captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)

answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)
```

MaskImageConfig

Mask image config.

MaskReferenceImage

Mask reference image. This encapsulates the mask reference image type.

MultiModalEmbeddingModel

Generates embedding vectors from images and videos.

Examples:

```
model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
image = Image.load_from_file("image.png")
video = Video.load_from_file("video.mp4")

embeddings = model.get_embeddings(
    image=image,
    video=video,
    contextual_text="Hello world",
)
image_embedding = embeddings.image_embedding
video_embeddings = embeddings.video_embeddings
text_embedding = embeddings.text_embedding
```

MultiModalEmbeddingResponse

The multimodal embedding response.

RawReferenceImage

Raw reference image.

This encapsulates the raw reference image type.

ReferenceImage

Reference image.

This is a new base API object for Imagen 3.0 Capabilities.

Scribble

Input scribble for image segmentation.

StyleImageConfig

Style image config.

StyleReferenceImage

Style reference image. This encapsulates the style reference image type.

SubjectImageConfig

Subject image config.

SubjectReferenceImage

Subject reference image.

This encapsulates the subject reference image type.

Video

Video.

VideoEmbedding

Embeddings generated from video with offset times.

VideoSegmentConfig

The specific video segments (in seconds) the embeddings are generated for.
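
For example, a sketch restricting video embeddings to the first two minutes at
16-second intervals (argument names follow the fields described above):

```
video_segment_config = VideoSegmentConfig(
    start_offset_sec=0,
    end_offset_sec=120,
    interval_sec=16,
)
embeddings = model.get_embeddings(
    video=video,
    video_segment_config=video_segment_config,
)
```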

WatermarkVerificationModel

Verifies if an image has a watermark.

WatermarkVerificationResponse

WatermarkVerificationResponse(_prediction_response: Any, watermark_verification_result: Optional[str] = None)

ModelMonitor

Initializer for ModelMonitor.

ModelMonitoringJob

Initializer for ModelMonitoringJob.

Example Usage:

```
my_monitoring_job = aiplatform.ModelMonitoringJob(
    model_monitoring_job_name='projects/123/locations/us-central1/modelMonitors/my_model_monitor_id/modelMonitoringJobs/my_monitoring_job_id'
)
# or
my_monitoring_job = aiplatform.ModelMonitoringJob(
    model_monitoring_job_name='my_monitoring_job_id',
    model_monitor_id='my_model_monitor_id',
)
```

DataDriftSpec

Data drift monitoring spec.

Data drift measures the distribution distance between the current dataset and a baseline dataset. A typical use case is to detect data drift between the recent production serving dataset and the training dataset, or to compare the recent production dataset with a dataset from a previous period.

Example:

```
feature_drift_spec = DataDriftSpec(
    features=["feature1"],
    categorical_metric_type="l_infinity",
    numeric_metric_type="jensen_shannon_divergence",
    default_categorical_alert_threshold=0.01,
    default_numeric_alert_threshold=0.02,
    feature_alert_thresholds={"feature1": 0.02, "feature2": 0.01},
)
```

FeatureAttributionSpec

Feature attribution spec.

Example:

```
feature_attribution_spec = FeatureAttributionSpec(
    features=["feature1"],
    default_alert_threshold=0.01,
    feature_alert_thresholds={"feature1": 0.02, "feature2": 0.01},
    batch_dedicated_resources=BatchDedicatedResources(
        starting_replica_count=1,
        max_replica_count=2,
        machine_spec=my_machine_spec,
    ),
)
```

FieldSchema

Field Schema.

The class identifies the data type of a single feature, which combines together to form the Schema for different fields in ModelMonitoringSchema.

ModelMonitoringSchema

Initializer for ModelMonitoringSchema.

MonitoringInput

Model monitoring data input spec.

NotificationSpec

Initializer for NotificationSpec.

ObjectiveSpec

Initializer for ObjectiveSpec.

OutputSpec

Initializer for OutputSpec.

TabularObjective

Initializer for TabularObjective.

GeneratedImage

Generated image.

Image

Image.

ImageCaptioningModel

Generates captions from image.

Examples:

```
model = ImageCaptioningModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)
```

ImageGenerationModel

Generates images from text prompt.

Examples:

```
model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
    # Optional:
    number_of_images=1,
    seed=0,
)
response[0].show()
response[0].save("image1.png")
```

ImageGenerationResponse

Image generation response.

ImageQnAModel

Answers questions about an image.

Examples:

```
model = ImageQnAModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)
```

ImageTextModel

Generates text from images.

Examples:

```
model = ImageTextModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")

captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)

answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)
```

MultiModalEmbeddingModel

Generates embedding vectors from images and videos.

Examples:

```
model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
image = Image.load_from_file("image.png")
video = Video.load_from_file("video.mp4")

embeddings = model.get_embeddings(
    image=image,
    video=video,
    contextual_text="Hello world",
)
image_embedding = embeddings.image_embedding
video_embeddings = embeddings.video_embeddings
text_embedding = embeddings.text_embedding
```

MultiModalEmbeddingResponse

The multimodal embedding response.

Video

Video.

VideoEmbedding

Embeddings generated from video with offset times.

VideoSegmentConfig

The specific video segments (in seconds) the embeddings are generated for.

Modules

agent_engines

API documentation for agent_engines module.

evals

API documentation for evals module.

prompt_optimizer

API documentation for prompt_optimizer module.

prompts

API documentation for prompts module.

_language_models

Classes for working with language models.

generative_models

Classes for working with the Gemini models.

language_models

Classes for working with language models.

sft

Classes for supervised tuning.

vision_models

Classes for working with vision models.