Gemini 3 Pro is our most advanced reasoning Gemini model, capable of solving complex problems. Gemini 3 Pro can comprehend vast datasets and challenging problems from different information sources, including text, audio, images, video, PDFs, and even entire code repositories with its 1M token context window.
Quality changes
When migrating from Gemini 2.5 Pro to Gemini 3 Pro, you can expect significant improvements in high-level reasoning, complex instruction following, tool use, agentic use cases, and long-context capabilities (including image and document understanding). Gemini 3 Pro isn't optimized for audio understanding or image segmentation use cases; for the best results on those tasks, use models built specifically for them. For information-dense or complicated graphs, tables, or charts, the model can sometimes extract information incorrectly or misinterpret the provided resources. Presenting key information as plainly as possible helps Gemini 3 Pro produce the output you expect.
Behavior changes
Gemini 3 Pro is designed for high efficiency and action. The model has been trained to provide concise, direct answers and to fulfill the user's intent as quickly as possible. Because the model is designed to prioritize being helpful, it may occasionally guess when information is missing or prioritize a satisfying answer over strict instructions. This behavior can be mitigated or modified with prompting. For more information and best practices, see Get started with Gemini 3.
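For example, a minimal sketch using the Python google-genai SDK might pass a system instruction that tells the model to ask for missing details rather than guess; the instruction wording, project, and location values here are illustrative assumptions, not recommended settings.

```python
# Sketch: steering Gemini 3 Pro away from guessing with a system instruction.
# Assumes the google-genai SDK on Vertex AI; project/location are placeholders.
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="your-project-id", location="global")

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents="Draft a migration plan for our service.",
    config=types.GenerateContentConfig(
        # Explicitly ask the model to surface gaps instead of filling them in.
        system_instruction=(
            "Follow the instructions exactly. If required information is "
            "missing or ambiguous, ask a clarifying question instead of guessing."
        ),
    ),
)
print(response.text)
```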
New features
Gemini 3 Pro introduces several new features to improve performance, control, and multimodal fidelity:
- Thinking level: Use the `thinking_level` parameter to control the amount of internal reasoning the model performs (low or high) to balance response quality, reasoning complexity, latency, and cost. The `thinking_level` parameter replaces `thinking_budget` for Gemini 3 models. A configuration sketch follows this list.
- Media resolution: Use the `media_resolution` parameter (low, medium, or high) to control vision processing for multimodal inputs, impacting token usage and latency. See Get started with Gemini 3 for default resolution settings.
- Thought signatures: Stricter validation of thought signatures improves reliability in multi-turn function calling.
- Multimodal function responses: Function responses can now include multimodal objects like images and PDFs in addition to text.
- Streaming function calling: Stream partial function call arguments to improve user experience during tool use.
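As a rough illustration of the first two parameters, the following Python sketch uses the google-genai SDK on Vertex AI; it assumes `thinking_level` is set through `types.ThinkingConfig` (where `thinking_budget` previously lived) and that `media_resolution` takes the `types.MediaResolution` enum, with placeholder project and location values.

```python
# Sketch: configuring thinking_level and media_resolution for Gemini 3 Pro.
# Field placement is an assumption based on the earlier thinking_budget API.
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="your-project-id", location="global")

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents="Compare these two architecture diagrams and summarize the differences.",
    config=types.GenerateContentConfig(
        # thinking_level replaces thinking_budget on Gemini 3 models.
        thinking_config=types.ThinkingConfig(thinking_level="high"),
        # Higher media resolution spends more vision tokens for finer detail.
        media_resolution=types.MediaResolution.MEDIA_RESOLUTION_HIGH,
    ),
)
print(response.text)
```

Lowering `thinking_level` or `media_resolution` trades some reasoning depth or visual detail for reduced latency and token cost.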
For more information on using these features, see Get started with Gemini 3.
| Model ID | gemini-3-pro-preview |
|---|---|
| Supported inputs & outputs | |
| Token limits | |
| Capabilities | |
| Usage types | |
| Technical specifications | |
| Images | |
| Documents | |
| Video | |
| Audio | |
| Parameter defaults | |
| Supported regions | |
| Model availability (Includes Standard PayGo & Provisioned Throughput) | |
| See Deployments and endpoints for more information. | |
| Knowledge cutoff date | January 2025 |
| Versions | |
| Security controls | |
| Online prediction | |
| Batch prediction | |
| Tuning | |
| Context caching | |
| See Security controls for more information. | |
| Supported languages | See Supported languages. |
| Pricing | See Pricing. |