You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or programmatically access release notes in BigQuery.
To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly.
December 03, 2025
Model Armor integration with Vertex AI is available in General Availability.
November 10, 2025
Model Armor is available in the following regions:
europe-west1(Belgium)europe-west2(London)europe-west3(Frankfurt)asia-south1(Mumbai)
For more information, see Locations.
September 27, 2025
Model Armor limits the maximum input size for files and text to 4 MB, automatically skipping any content that exceeds this threshold.
September 23, 2025
The upgraded model for the prompt injection and jailbreak detection filter is available in EU multi-region. This model has improved detection rates across several attack vectors, including the following:
- Do Anything Now prompts
- System instruction manipulation
- Unauthorized action execution
- Sensitive information retrieval
September 16, 2025
Model Armor is integrated with Gemini Enterprise to provide greater insights and enhanced security of your agent interactions by default. For more information, see Integration with Gemini Enterprise.
September 15, 2025
Model Armor integration with Google Kubernetes Engine is available in General Availability.
September 08, 2025
The Model Armor monitoring dashboard provides a centralized view to track interactions and violations within your projects. This feature is available in Preview. For more information, see View the monitoring dashboard.
July 29, 2025
Model Armor supports the asia-southeast1 location. For information
about supported locations, see Locations.
You can use Terraform to manage Model Armor floor settings and templates. This helps reduce manual overhead with Model Armor deployments. For more information, see Terraform resources for Model Armor.
Model Armor and Vertex AI integration
Model Armor integrates with Vertex AI, providing a default security configuration for all new prediction endpoints. This feature is in Preview. For more information, see Integration with Vertex AI.
July 28, 2025
Model Armor filter updates
- The prompt injection and jailbreak detection filter now supports 10,000 tokens.
- For the Sensitive Data Protection filter,
SKIP_DETECTIONis returned if the prompt or response exceeds the token limit. - For all other filters, if the prompt or response exceeds the token limit,
MATCH_FOUNDis returned if malicious content is found, andSKIP_DETECTIONis returned if no malicious content is found.
June 19, 2025
The prompt injection and jailbreak detection filter in Model Armor flags
more threats across various attack vectors, and offers an improved detection
rate for high-confidence malicious prompts. This filter is available in us-east1.
June 08, 2025
The Responsible AI and prompt injection and jailbreak detection filters are tested in English, Spanish, French, Italian, Portuguese, German, Chinese (Mandarin), Japanese, and Korean. These filters can work in other languages, but the quality of results might vary. For more information, see Languages supported.
Model Armor supports screening text in the following document types for malicious content:
- DOCX, DOCM, DOTX, DOTM documents
- PPTX, PPTM, POTX, POT presentations
- XLSX, XLSM, XLTX, XLTM spreadsheets
- Multi-language support for Model Armor filters
May 28, 2025
- Model Armor supports multi-regional endpoints. For more information, see Locations.
- All Model Armor filters support up to 2,000 tokens.
April 09, 2025
Model Armor enforces security policies uniformly on generative AI inference traffic using a traffic extension. This applies to all application load balancers, including Google Kubernetes Engine Inference Gateway. This feature is in Preview. For more information, see Integration with Google Kubernetes Engine.
March 21, 2025
The prompt injection and jailbreak detection filter in Model Armor is upgraded with increased efficacy and higher model quality scores.
February 03, 2025
Model Armor is a Google Cloud service that lets you to apply content safety and content security controls to LLM prompts and responses to mitigate risks such as sensitive data leakage, prompt injection, and offensive content. For more information, see Overview.