GenerateContentResponse
Response message for [PredictionService.GenerateContent].
candidates[] (object (Candidate))
Output only. Generated candidates.
modelVersion (string)
Output only. The model version used to generate the response.
createTime (string (Timestamp format))
Output only. Timestamp when the request was made to the server.
Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".
responseId (string)
Output only. responseId is used to identify each response. It is the encoding of the eventId.
promptFeedback (object (PromptFeedback))
Output only. Content filter results for a prompt sent in the request. Note: This is sent only in the first stream chunk, and only when no candidates were generated due to content violations.
usageMetadata (object (UsageMetadata))
Usage metadata about the response(s).
| JSON representation |
|---|
| { "candidates": [ { object (Candidate) } ], "modelVersion": string, "createTime": string, "responseId": string, "promptFeedback": { object (PromptFeedback) }, "usageMetadata": { object (UsageMetadata) } } |
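To make the shape above concrete, here is a minimal sketch of reading a parsed GenerateContentResponse dict in Python. It assumes the Content object carries a parts list of text parts, as elsewhere in the API; the payload values are invented, not real API output.

```python
from datetime import datetime

def first_candidate_text(response: dict) -> str:
    """Concatenate the text parts of the first candidate, if any."""
    candidates = response.get("candidates", [])
    if not candidates:
        return ""
    parts = candidates[0].get("content", {}).get("parts", [])
    return "".join(p.get("text", "") for p in parts)

# Invented minimal payload matching the documented field names.
resp = {
    "candidates": [{"content": {"parts": [{"text": "Hello, "}, {"text": "world"}]}}],
    "modelVersion": "example-model-001",   # hypothetical version string
    "createTime": "2014-10-02T15:01:23Z",  # RFC 3339, Z-normalized
    "responseId": "abc123",                # hypothetical
}

print(first_candidate_text(resp))  # Hello, world

# createTime parses as RFC 3339; "Z" is rewritten to an explicit offset
# for compatibility with Python versions before 3.11.
created = datetime.fromisoformat(resp["createTime"].replace("Z", "+00:00"))
print(created.year)  # 2014
```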
Candidate
A response candidate generated from the model.
index (integer)
Output only. Index of the candidate.
content (object (Content))
Output only. Content parts of the candidate.
avgLogprobs (number)
Output only. Average log probability score of the candidate.
logprobsResult (object (LogprobsResult))
Output only. Log-likelihood scores for the response tokens and top tokens.
finishReason (enum (FinishReason))
Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.
safetyRatings[] (object (SafetyRating))
Output only. List of ratings for the safety of a response candidate.
There is at most one rating per category.
citationMetadata (object (CitationMetadata))
Output only. Source attribution of the generated content.
groundingMetadata (object (GroundingMetadata))
Output only. Metadata that specifies the sources used to ground the generated content.
urlContextMetadata (object (UrlContextMetadata))
Output only. Metadata related to the URL context retrieval tool.
finishMessage (string)
Output only. Describes the reason the model stopped generating tokens in more detail. This is only filled when finishReason is set.
| JSON representation |
|---|
| { "index": integer, "content": { object (Content) }, "avgLogprobs": number, "logprobsResult": { object (LogprobsResult) }, "finishReason": enum (FinishReason), "safetyRatings": [ { object (SafetyRating) } ], "citationMetadata": { object (CitationMetadata) }, "groundingMetadata": { object (GroundingMetadata) }, "urlContextMetadata": { object (UrlContextMetadata) }, "finishMessage": string } |
LogprobsResult
Logprobs result.
topCandidates[] (object (TopCandidates))
Length = total number of decoding steps.
chosenCandidates[] (object (Candidate))
Length = total number of decoding steps. The chosen candidates may or may not be in topCandidates.
| JSON representation |
|---|
| { "topCandidates": [ { object (TopCandidates) } ], "chosenCandidates": [ { object (Candidate) } ] } |
TopCandidates
Candidates with top log probabilities at each decoding step.
candidates[] (object (Candidate))
Sorted by log probability in descending order.
| JSON representation |
|---|
| { "candidates": [ { object (Candidate) } ] } |
Candidate
Candidate for the logprobs token and score.
token (string)
The candidate's token string value.
tokenId (integer)
The candidate's token id value.
logProbability (number)
The candidate's log probability.
| JSON representation |
|---|
{ "token": string, "tokenId": integer, "logProbability": number } |
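A quick sketch of how avgLogprobs on the response candidate relates to chosenCandidates: it is the average of the chosen token logProbability values over all decoding steps. The tokens and scores below are invented.

```python
def avg_logprob(logprobs_result: dict) -> float:
    """Average logProbability across chosenCandidates (one per decoding step)."""
    chosen = logprobs_result.get("chosenCandidates", [])
    if not chosen:
        return 0.0
    return sum(c["logProbability"] for c in chosen) / len(chosen)

# Invented two-step example.
lp = {
    "chosenCandidates": [
        {"token": "Hi", "tokenId": 101, "logProbability": -0.2},
        {"token": "!", "tokenId": 102, "logProbability": -0.4},
    ]
}
print(round(avg_logprob(lp), 6))  # -0.3
```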
FinishReason
The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.
| Enums | |
|---|---|
FINISH_REASON_UNSPECIFIED |
The finish reason is unspecified. |
STOP |
Token generation reached a natural stopping point or a configured stop sequence. |
MAX_TOKENS |
Token generation reached the configured maximum output tokens. |
SAFETY |
Token generation stopped because the content potentially contains safety violations. NOTE: When streaming, content is empty if content filters block the output. |
RECITATION |
The token generation stopped because of potential recitation. |
OTHER |
All other reasons that stopped the token generation. |
BLOCKLIST |
Token generation stopped because the content contains forbidden terms. |
PROHIBITED_CONTENT |
Token generation stopped for potentially containing prohibited content. |
SPII |
Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII). |
MALFORMED_FUNCTION_CALL |
The function call generated by the model is syntactically invalid (e.g. the function call generated is not parsable). |
MODEL_ARMOR |
The model response was blocked by Model Armor. |
IMAGE_SAFETY |
Token generation stopped because the generated images have safety violations. |
IMAGE_PROHIBITED_CONTENT |
Image generation stopped because the generated images have other prohibited content. |
IMAGE_RECITATION |
Image generation stopped due to recitation. |
IMAGE_OTHER |
Image generation stopped because of other miscellaneous issues. |
UNEXPECTED_TOOL_CALL |
The function call generated by the model is semantically invalid (e.g. a function call is generated when function calling is not enabled or the function is not in the function declaration). |
NO_IMAGE |
The model was expected to generate an image, but none was generated. |
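As an illustration of consuming the enum, a client might treat anything other than STOP as an incomplete or filtered response. The grouping below is an assumption for the sketch, not something the API defines.

```python
# Reasons that indicate the output was cut short or blocked
# (illustrative subset of the FinishReason values above).
TRUNCATED_OR_BLOCKED = {
    "MAX_TOKENS", "SAFETY", "RECITATION", "BLOCKLIST",
    "PROHIBITED_CONTENT", "SPII", "MODEL_ARMOR", "IMAGE_SAFETY",
}

def is_complete(candidate: dict) -> bool:
    """True when generation ended at a natural stop or a configured stop sequence."""
    return candidate.get("finishReason") == "STOP"

print(is_complete({"finishReason": "STOP"}))        # True
print(is_complete({"finishReason": "MAX_TOKENS"}))  # False
```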
SafetyRating
Safety rating corresponding to the generated content.
category (enum (HarmCategory))
Output only. Harm category.
probability (enum (HarmProbability))
Output only. Harm probability levels in the content.
probabilityScore (number)
Output only. Harm probability score.
severity (enum (HarmSeverity))
Output only. Harm severity levels in the content.
severityScore (number)
Output only. Harm severity score.
blocked (boolean)
Output only. Indicates whether the content was filtered out because of this rating.
overwrittenThreshold (enum (HarmBlockThreshold))
Output only. The overwritten threshold for the safety category of Gemini 2.0 image output. If minors are detected in the output image, the threshold of each safety category will be overwritten if the user sets a lower threshold.
| JSON representation |
|---|
| { "category": enum (HarmCategory), "probability": enum (HarmProbability), "probabilityScore": number, "severity": enum (HarmSeverity), "severityScore": number, "blocked": boolean, "overwrittenThreshold": enum (HarmBlockThreshold) } |
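A sketch of scanning a candidate's safetyRatings for categories that caused filtering, using the blocked field above. The HarmCategory values and ratings in the sample are invented for illustration.

```python
def blocked_categories(candidate: dict) -> list:
    """Categories whose SafetyRating indicates the content was filtered out."""
    return [
        r.get("category", "HARM_CATEGORY_UNSPECIFIED")
        for r in candidate.get("safetyRatings", [])
        if r.get("blocked")
    ]

# Invented sample candidate.
cand = {
    "safetyRatings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "probability": "LOW", "blocked": False},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "probability": "HIGH", "blocked": True},
    ]
}
print(blocked_categories(cand))  # ['HARM_CATEGORY_HATE_SPEECH']
```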
HarmProbability
Harm probability levels in the content.
| Enums | |
|---|---|
HARM_PROBABILITY_UNSPECIFIED |
Harm probability unspecified. |
NEGLIGIBLE |
Negligible level of harm. |
LOW |
Low level of harm. |
MEDIUM |
Medium level of harm. |
HIGH |
High level of harm. |
HarmSeverity
Harm severity levels.
| Enums | |
|---|---|
HARM_SEVERITY_UNSPECIFIED |
Harm severity unspecified. |
HARM_SEVERITY_NEGLIGIBLE |
Negligible level of harm severity. |
HARM_SEVERITY_LOW |
Low level of harm severity. |
HARM_SEVERITY_MEDIUM |
Medium level of harm severity. |
HARM_SEVERITY_HIGH |
High level of harm severity. |
CitationMetadata
A collection of source attributions for a piece of content.
citations[] (object (Citation))
Output only. List of citations.
| JSON representation |
|---|
| { "citations": [ { object (Citation) } ] } |
Citation
Source attributions for content.
startIndex (integer)
Output only. Start index into the content.
endIndex (integer)
Output only. End index into the content.
uri (string)
Output only. URL reference of the attribution.
title (string)
Output only. Title of the attribution.
license (string)
Output only. License of the attribution.
publicationDate (object (Date))
Output only. Publication date of the attribution.
| JSON representation |
|---|
| { "startIndex": integer, "endIndex": integer, "uri": string, "title": string, "license": string, "publicationDate": { object (Date) } } |
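The startIndex/endIndex pair can be used to recover the attributed span of the generated content. The sketch below assumes they are half-open character offsets into the text (the reference does not state the unit, so treat that as an assumption); the text and citation are invented.

```python
def cited_span(text: str, citation: dict) -> str:
    """Substring of the generated text covered by a Citation,
    assuming half-open character offsets."""
    start = citation.get("startIndex", 0)
    end = citation.get("endIndex", len(text))
    return text[start:end]

# Invented example.
text = "The mitochondria is the powerhouse of the cell."
citation = {"startIndex": 4, "endIndex": 16, "uri": "https://example.com/bio"}
print(cited_span(text, citation))  # mitochondria
```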
UrlContextMetadata
Metadata related to the URL context retrieval tool.
urlMetadata[] (object (UrlMetadata))
Output only. List of URL context metadata.
| JSON representation |
|---|
| { "urlMetadata": [ { object (UrlMetadata) } ] } |
UrlMetadata
Context of a single URL retrieval.
retrievedUrl (string)
The URL retrieved by the tool.
urlRetrievalStatus (enum (UrlRetrievalStatus))
Status of the URL retrieval.
| JSON representation |
|---|
| { "retrievedUrl": string, "urlRetrievalStatus": enum (UrlRetrievalStatus) } |
UrlRetrievalStatus
Status of the URL retrieval.
| Enums | |
|---|---|
URL_RETRIEVAL_STATUS_UNSPECIFIED |
Default value. This value is unused. |
URL_RETRIEVAL_STATUS_SUCCESS |
The URL retrieval was successful. |
URL_RETRIEVAL_STATUS_ERROR |
The URL retrieval failed due to an error. |
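Putting UrlContextMetadata and UrlRetrievalStatus together, a client might keep only the successfully retrieved URLs. The sample metadata is invented.

```python
def successful_urls(url_context_metadata: dict) -> list:
    """URLs whose retrieval the tool reports as successful."""
    return [
        m["retrievedUrl"]
        for m in url_context_metadata.get("urlMetadata", [])
        if m.get("urlRetrievalStatus") == "URL_RETRIEVAL_STATUS_SUCCESS"
    ]

# Invented sample.
meta = {
    "urlMetadata": [
        {"retrievedUrl": "https://example.com/a",
         "urlRetrievalStatus": "URL_RETRIEVAL_STATUS_SUCCESS"},
        {"retrievedUrl": "https://example.com/b",
         "urlRetrievalStatus": "URL_RETRIEVAL_STATUS_ERROR"},
    ]
}
print(successful_urls(meta))  # ['https://example.com/a']
```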
PromptFeedback
Content filter results for a prompt sent in the request. Note: This is sent only in the first stream chunk and only if no candidates were generated due to content violations.
blockReason (enum (BlockedReason))
Output only. The reason why the prompt was blocked.
safetyRatings[] (object (SafetyRating))
Output only. A list of safety ratings for the prompt. There is one rating per category.
blockReasonMessage (string)
Output only. A readable message that explains the reason why the prompt was blocked.
| JSON representation |
|---|
| { "blockReason": enum (BlockedReason), "safetyRatings": [ { object (SafetyRating) } ], "blockReasonMessage": string } |
BlockedReason
The reason why the prompt was blocked.
| Enums | |
|---|---|
BLOCKED_REASON_UNSPECIFIED |
The blocked reason is unspecified. |
SAFETY |
The prompt was blocked for safety reasons. |
OTHER |
The prompt was blocked for other reasons. For example, it may be due to the prompt's language, or because it contains other harmful content. |
BLOCKLIST |
The prompt was blocked because it contains a term from the terminology blocklist. |
PROHIBITED_CONTENT |
The prompt was blocked because it contains prohibited content. |
MODEL_ARMOR |
The prompt was blocked by Model Armor. |
IMAGE_SAFETY |
The prompt was blocked because it contains content that is unsafe for image generation. |
JAILBREAK |
The prompt was blocked as a jailbreak attempt. |
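A small sketch of surfacing PromptFeedback to users, combining blockReason with the optional blockReasonMessage. The sample values are invented.

```python
def explain_block(prompt_feedback) -> str:
    """Readable summary of why a prompt was blocked, or 'not blocked'."""
    if not prompt_feedback or "blockReason" not in prompt_feedback:
        return "not blocked"
    reason = prompt_feedback["blockReason"]
    message = prompt_feedback.get("blockReasonMessage", "")
    return f"{reason}: {message}" if message else reason

print(explain_block(None))  # not blocked
print(explain_block({"blockReason": "SAFETY",
                     "blockReasonMessage": "unsafe prompt"}))
# SAFETY: unsafe prompt
```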
UsageMetadata
Usage metadata about the content generation request and response. This message provides a detailed breakdown of token usage and other relevant metrics.
promptTokenCount (integer)
The total number of tokens in the prompt. This includes any text, images, or other media provided in the request. When cachedContent is set, this also includes the number of tokens in the cached content.
candidatesTokenCount (integer)
The total number of tokens in the generated candidates.
totalTokenCount (integer)
The total number of tokens for the entire request. This is the sum of promptTokenCount, candidatesTokenCount, toolUsePromptTokenCount, and thoughtsTokenCount.
toolUsePromptTokenCount (integer)
Output only. The number of tokens in the results from tool executions, which are provided back to the model as input, if applicable.
thoughtsTokenCount (integer)
Output only. The number of tokens that were part of the model's generated "thoughts" output, if applicable.
cachedContentTokenCount (integer)
Output only. The number of tokens in the cached content that was used for this request.
promptTokensDetails[] (object (ModalityTokenCount))
Output only. A detailed breakdown of the token count for each modality in the prompt.
cacheTokensDetails[] (object (ModalityTokenCount))
Output only. A detailed breakdown of the token count for each modality in the cached content.
candidatesTokensDetails[] (object (ModalityTokenCount))
Output only. A detailed breakdown of the token count for each modality in the generated candidates.
toolUsePromptTokensDetails[] (object (ModalityTokenCount))
Output only. A detailed breakdown by modality of the token counts from the results of tool executions, which are provided back to the model as input.
trafficType (enum (TrafficType))
Output only. The traffic type for this request.
| JSON representation |
|---|
| { "promptTokenCount": integer, "candidatesTokenCount": integer, "totalTokenCount": integer, "toolUsePromptTokenCount": integer, "thoughtsTokenCount": integer, "cachedContentTokenCount": integer, "promptTokensDetails": [ { object (ModalityTokenCount) } ], "cacheTokensDetails": [ { object (ModalityTokenCount) } ], "candidatesTokensDetails": [ { object (ModalityTokenCount) } ], "toolUsePromptTokensDetails": [ { object (ModalityTokenCount) } ], "trafficType": enum (TrafficType) } |
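Since totalTokenCount is documented as the sum of promptTokenCount, candidatesTokenCount, toolUsePromptTokenCount, and thoughtsTokenCount, a client can sanity-check a response's accounting. The counts below are invented.

```python
def expected_total(usage: dict) -> int:
    """Recompute totalTokenCount from its documented components."""
    return (
        usage.get("promptTokenCount", 0)
        + usage.get("candidatesTokenCount", 0)
        + usage.get("toolUsePromptTokenCount", 0)
        + usage.get("thoughtsTokenCount", 0)
    )

# Invented usage figures: 120 + 40 + 8 + 32 = 200.
usage = {
    "promptTokenCount": 120,
    "candidatesTokenCount": 40,
    "toolUsePromptTokenCount": 8,
    "thoughtsTokenCount": 32,
    "totalTokenCount": 200,
}
print(expected_total(usage) == usage["totalTokenCount"])  # True
```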
TrafficType
The type of traffic that this request was processed with, indicating which quota is consumed.
| Enums | |
|---|---|
TRAFFIC_TYPE_UNSPECIFIED |
Unspecified request traffic type. |
ON_DEMAND |
The request was processed using Pay-As-You-Go quota. |
PROVISIONED_THROUGHPUT |
The request was processed using Provisioned Throughput quota. |