This page shows you how to send chat prompts to a Gemini model by using the Google Cloud console, REST API, and supported SDKs.
To learn how to add images and other media to your request, see Image understanding.
For a list of languages supported by Gemini, see Language support.
To explore the generative AI models and APIs that are available on Gemini Enterprise Agent Platform, go to Model Garden in the Google Cloud console.
If you're looking for a way to use Gemini directly from your mobile and web apps, see the Firebase AI Logic client SDKs for Swift, Android, Web, Flutter, and Unity apps.
Generate text
For testing and iterating on chat prompts, we recommend using the Google Cloud console. To send prompts programmatically to the model, you can use the REST API, Google Gen AI SDK, Agent Platform SDK, or one of the other supported libraries and SDKs.
You can use system instructions to steer the behavior of the model based on a specific need or use case. For example, you can define a persona or role for a chatbot that responds to customer service requests. For more information, see the system instructions code samples.
You can use the Google Gen AI SDK to send requests if you're using Gemini 2.5 Flash.
Here is a text generation example.
Console
To send a chat prompt by using Agent Studio in the Google Cloud console, do the following:
- In the Agent Platform section of the Google Cloud console, go to the Agent Studio page.
- In Start a conversation, click Text chat.
Optional: Configure the model and parameters:
- Model: Select Gemini Pro.
- Region: Select the region that you want to use.
Temperature: Use the slider or textbox to enter a value for temperature.

The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.

If the model returns a response that's too generic, too short, or a fallback response, try increasing the temperature. If the model enters infinite generation, increasing the temperature to at least 0.1 may lead to improved results. 1.0 is the recommended starting value for temperature.

Output token limit: Use the slider or textbox to enter a value for the maximum output token limit.

The maximum number of tokens that can be generated in the response. A token is approximately four characters; 100 tokens correspond to roughly 60-80 words. Specify a lower value for shorter responses and a higher value for potentially longer responses.
- Add stop sequence: Optional. Enter a stop sequence, which is a series of characters that includes spaces. If the model encounters a stop sequence, the response generation stops. The stop sequence isn't included in the response, and you can add up to five stop sequences.
- Optional: To configure advanced parameters, click Advanced and
configure as follows:
Top-K: Use the slider or textbox to enter a value for top-K.
Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling.

Specify a lower value for less random responses and a higher value for more random responses.
- Top-P: Use the slider or textbox to enter a value for top-P. Tokens are selected from most probable to the least until the sum of their probabilities equals the value of top-P. For the least variable results, set top-P to `0`.
- Enable Grounding: Add a grounding source and path to customize this feature.
- Enter your text prompt in the Prompt pane. The model uses previous messages as context for new responses.
- Optional: To display the number of text tokens, click View tokens. You can view the tokens or token IDs of your text prompt.
- To view the tokens in the text prompt that are highlighted with different colors marking the boundary of each token ID, click Token ID to text. Media tokens aren't supported.
- To view the token IDs, click Token ID.
To close the tokenizer tool pane, click X, or click outside of the pane.
- Click Submit.
- Optional: To save your prompt to My prompts, click Save.
- Optional: To get the Python code or a curl command for your prompt, click Get code.
- Optional: To clear all previous messages, click Clear conversation.
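The interaction among top-K, top-P, and temperature described above can be sketched in Python. This is an illustrative toy of the selection pipeline, not the service's actual decoder; the vocabulary and probabilities are invented:

```python
import random


def sample_token(probs, top_k, top_p, temperature, rng=random):
    """Pick a token: keep the top-K, filter by top-P, then temperature-sample.

    `probs` maps token -> probability. Illustrative sketch only.
    """
    # 1. Keep the top-K most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # 2. Keep the smallest prefix whose probabilities sum to at least top-P.
    kept, total = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        total += p
        if total >= top_p:
            break
    # 3. Temperature 0 is greedy: always take the most probable survivor.
    if temperature == 0:
        return kept[0][0]
    # Otherwise re-weight and sample; low temperature sharpens the weights.
    weights = [p ** (1.0 / temperature) for _, p in kept]
    return rng.choices([t for t, _ in kept], weights=weights, k=1)[0]


probs = {"the": 0.5, "a": 0.3, "an": 0.15, "this": 0.05}
print(sample_token(probs, top_k=3, top_p=0.8, temperature=0))  # "the"
```

With temperature 0 the most probable surviving token is always chosen, which is why low-temperature responses are mostly deterministic.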
Python
Install
pip install --upgrade google-genai
To learn more, see the SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
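With the environment variables set, a multi-turn request can be sent as in the following sketch. The helper builds a contents list that mirrors the REST request body used elsewhere on this page; the model ID and prompts are illustrative:

```python
import os


def build_history(first_prompt, model_reply, second_prompt):
    """Build a multi-turn contents payload mirroring the REST request body."""
    return [
        {"role": "user", "parts": [{"text": first_prompt}]},
        {"role": "model", "parts": [{"text": model_reply}]},
        {"role": "user", "parts": [{"text": second_prompt}]},
    ]


contents = build_history(
    "What are all the colors in a rainbow?",
    "What a great question!",
    "Why does it appear when it rains?",
)

# Only call the API when the Vertex AI environment above is configured.
if os.environ.get("GOOGLE_GENAI_USE_VERTEXAI") == "True":
    from google import genai  # pip install --upgrade google-genai

    client = genai.Client()  # Reads project and location from the environment.
    response = client.models.generate_content(
        model="gemini-2.5-flash",  # Illustrative model ID.
        contents=contents,
        config={"temperature": 1.0},
    )
    print(response.text)
```

The Gen AI SDK accepts plain dicts for contents and config, so the same structure works for both the SDK call and the raw REST body.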
Go
Learn how to install or update the Gen AI SDK for Go.
To learn more, see the SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Node.js
Install
npm install @google/genai
To learn more, see the SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Java
Learn how to install or update the Gen AI SDK for Java.
To learn more, see the SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
C#
Learn how to install or update the Gen AI SDK for C#.
To learn more, see the SDK reference documentation.
Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
REST
Before using any of the request data, make the following replacements:
GENERATE_RESPONSE_METHOD: The type of response that you want the model to generate. Choose a method that determines how the model's response is returned:
- streamGenerateContent: The response is streamed as it's being generated to reduce the perception of latency to a human audience.
- generateContent: The response is returned after it's fully generated.
LOCATION: The region to process the request. Available options include the following partial list of regions:
- us-central1
- us-west4
- northamerica-northeast1
- us-east4
- us-west1
- asia-northeast3
- asia-southeast1
- asia-northeast1
PROJECT_ID: Your project ID.
MODEL_ID: The model ID of the multimodal model that you want to use.
TEXT1: The text instructions to include in the first prompt of the multi-turn conversation. For example, What are all the colors in a rainbow?
TEXT2: The text instructions to include in the second prompt. For example, Why does it appear when it rains?
TEMPERATURE: The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible. If the model returns a response that's too generic, too short, or a fallback response, try increasing the temperature. If the model enters infinite generation, increasing the temperature to at least 0.1 may lead to improved results. 1.0 is the recommended starting value for temperature.
To send your request, choose one of these options:
curl
Save the request body in a file named request.json.
Run the following command in the terminal to create or overwrite
this file in the current directory:
cat > request.json << 'EOF'
{
"contents": [
{
"role": "user",
"parts": { "text": "TEXT1" }
},
{
"role": "model",
"parts": { "text": "What a great question!" }
},
{
"role": "user",
"parts": { "text": "TEXT2" }
}
],
"generation_config": {
"temperature": TEMPERATURE
}
}
EOF

Then execute the following command to send your REST request:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD"
PowerShell
Save the request body in a file named request.json.
Run the following command in the terminal to create or overwrite
this file in the current directory:
@'
{
"contents": [
{
"role": "user",
"parts": { "text": "TEXT1" }
},
{
"role": "model",
"parts": { "text": "What a great question!" }
},
{
"role": "user",
"parts": { "text": "TEXT2" }
}
],
"generation_config": {
"temperature": TEMPERATURE
}
}
'@ | Out-File -FilePath request.json -Encoding utf8

Then execute the following command to send your REST request:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD" | Select-Object -Expand Content
You should receive a JSON response similar to the following.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive partial responses as their output tokens are generated. For non-streaming responses, you receive the complete response after all of the output tokens are generated.
Here is a streaming text generation example.
Python
Before trying this sample, follow the Python setup instructions in the Agent Platform quickstart using client libraries. For more information, see the Agent Platform Python API reference documentation.
To authenticate to Agent Platform, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Gemini multiturn chat behavior
When you use multiturn chat, Gemini Enterprise Agent Platform locally stores the initial content and prompts that you sent to the model. Gemini Enterprise Agent Platform sends all of this data with each subsequent request to the model. Consequently, the input cost for each message that you send is a running total of all the data that was already sent to the model. If your initial content is sufficiently large, consider using context caching when you create the initial model object to better control input costs.
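The running-total effect on input cost can be illustrated with hypothetical per-turn token counts (the numbers below are invented for illustration):

```python
def cumulative_input_tokens(turn_token_counts):
    """Return the input tokens billed for each request in a multiturn chat.

    Because the platform resends the stored history with every request,
    each request's billed input is the running total of all earlier turns.
    """
    billed = []
    running = 0
    for tokens in turn_token_counts:
        running += tokens
        billed.append(running)
    return billed


# Hypothetical counts: a large initial prompt followed by two short messages.
print(cumulative_input_tokens([1000, 50, 50]))  # [1000, 1050, 1100]
```

The large initial prompt is re-billed on every later turn, which is why context caching a large shared prefix can reduce the cost of subsequent messages.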
What's next
- Learn how to send multimodal prompt requests:
- Learn about responsible AI best practices and Agent Platform's safety filters.