<?xml version="1.0" encoding="UTF-8"?>
<!-- AUTOGENERATED FILE. DO NOT EDIT. -->
<feed xmlns="http://www.w3.org/2005/Atom">
  <id>tag:google.com,2016:vision-release-notes</id>
  <title>Cloud Vision - Release notes</title>
  <link rel="self" href="https://docs.cloud.google.com/feeds/vision-release-notes.xml"/>
  <author>
    <name>Google Cloud Platform</name>
  </author>
  <updated>2024-12-19T00:00:00-08:00</updated>

  <entry>
    <title>December 19, 2024</title>
    <id>tag:google.com,2016:vision-release-notes#December_19_2024</id>
    <updated>2024-12-19T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#December_19_2024"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>Safe Search model update</strong></p>
<p>We will be updating the <code>SAFE_SEARCH_DETECTION</code> feature model to improve quality. </p>
<p>We'll support both the current model and the new model for the next 90 days. After 90 days, the new model will become the default. The current model can still be accessed by specifying <code>"builtin/legacy"</code> for an additional 90 days before it's deprecated.</p>
<p>To use the new model, specify <code>"builtin/latest"</code> in the model field of a <code>Feature</code> object.</p>
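<p>For example, a minimal <code>images:annotate</code> request body that opts in to the new model might look like the following sketch (the image URI is a placeholder):</p>
<pre class="prettyprint"><code>{
  "requests": [
    {
      "image": {
        "source": { "imageUri": "gs://my-bucket/my-image.jpg" }
      },
      "features": [
        {
          "type": "SAFE_SEARCH_DETECTION",
          "model": "builtin/latest"
        }
      ]
    }
  ]
}
</code></pre>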
]]>
    </content>
  </entry>

  <entry>
    <title>August 23, 2024</title>
    <id>tag:google.com,2016:vision-release-notes#August_23_2024</id>
    <updated>2024-08-23T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#August_23_2024"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>New label detection model</strong></p>
<p>An improved model is now available for Label Detection. Along with the improved model, the <a href="https://docs.cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#entityannotation"><code>topicality</code></a> field is now populated correctly.</p>
<p>Specify <code>"builtin/latest"</code> in the model field of a <code>Feature</code> object to use the new model.
We'll support both the current model and the new model for the next 90 days. After 90 days, the new model will become the default. The current model can still be accessed by specifying <code>"builtin/legacy"</code> for an additional 90 days before it's deprecated.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>December 05, 2023</title>
    <id>tag:google.com,2016:vision-release-notes#December_05_2023</id>
    <updated>2023-12-05T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#December_05_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>Updated feature models</strong></p>
<p>Improved models are now available for the following features:</p>
<ul>
<li>Text detection and document text detection (OCR)</li>
<li>Web detection</li>
<li>Logo detection</li>
<li>Object localization</li>
</ul>
<p>Specify <code>"builtin/latest"</code> in the <code>model</code> field of a <code>Feature</code> object to use the new models.</p>
<p>We'll support both the current models and the new models for the next 90 days. After 90 days, the new models will become the default. The current models can still be accessed by specifying <code>"builtin/legacy"</code> for an additional 90 days before they are deprecated.</p>
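<p>A single request can opt several features in to the new models at once; a sketch of such an <code>images:annotate</code> request body (the image URI is a placeholder):</p>
<pre class="prettyprint"><code>{
  "requests": [
    {
      "image": {
        "source": { "imageUri": "gs://my-bucket/my-image.jpg" }
      },
      "features": [
        { "type": "TEXT_DETECTION", "model": "builtin/latest" },
        { "type": "LOGO_DETECTION", "model": "builtin/latest" },
        { "type": "OBJECT_LOCALIZATION", "model": "builtin/latest" }
      ]
    }
  ]
}
</code></pre>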
<h3>Change</h3>
<p><strong>Change in OCR model behavior when detecting line break hyphens</strong></p>
<p><em>Soft hyphens</em> are used for the purpose of breaking words across lines. The previous (<code>"builtin/legacy"</code>) version used the <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/AnnotateImageResponse#breaktype"><code>BreakType</code></a> <code>HYPHEN</code> to represent a soft hyphen. The new stable (<code>"builtin/latest"</code>) version represents it as a new symbol with literal text <code>"-"</code> and <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/AnnotateImageResponse#breaktype"><code>BreakType</code></a> <code>EOL_SURE_SPACE</code>.</p>
<p>Consider the following example text and OCR model output:</p>
<p><strong>Sample text</strong></p>
<p>"<em>Mr. White has had considerable experience as a <strong>Veteri-</strong></em></p>
<p><em>nary surgeon, and will attend practice in that line.</em>"</p>
<p><strong>Output (<code>"builtin/legacy"</code> model)</strong>:</p>
<pre class="prettyprint"><code>[...]
{
  "property": {
    "detectedBreak": {
      "type": "HYPHEN"
    }
  },
  "boundingBox": {
    "vertices": [
      [...]
    ]
  },
  "text": "i"
}
[...]
</code></pre>
<p><strong>Output (<code>"builtin/latest"</code> model)</strong>: </p>
<pre class="prettyprint"><code>{
  "boundingBox": {
    "vertices": [
      [...]
    ]
  },
  "text": "i"
},
{
  "property": {
    "detectedBreak": {
      "type": "EOL_SURE_SPACE"
    }
  },
  "boundingBox": {
    "vertices": [
      [...]
    ]
  },
  "text": "-"
}
</code></pre>
]]>
    </content>
  </entry>

  <entry>
    <title>December 16, 2022</title>
    <id>tag:google.com,2016:vision-release-notes#December_16_2022</id>
    <updated>2022-12-16T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#December_16_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>Landmark Detection Upgrade</strong></p>
<p>Specify <code>"builtin/latest"</code> in the model field of a <code>Feature</code> object to use the new model.</p>
<p>We'll support both the current model and the new model for the next 90 days. After 90 days, the current model will be deprecated and only the new model will be used for all landmark detection requests.</p>
<h3>Change</h3>
<p><strong>Face Detection Upgrade</strong></p>
<p>Specify <code>"builtin/latest"</code> in the model field of a <code>Feature</code> object to use the new model.</p>
<p>We'll support both the current model and the new model for the next 90 days. After 90 days, the new model will become the default. The old model will be available for another 90 days by specifying <code>"builtin/legacy"</code>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>May 20, 2022</title>
    <id>tag:google.com,2016:vision-release-notes#May_20_2022</id>
    <updated>2022-05-20T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#May_20_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>OCR model migration</strong></p>
<p>The <code>TEXT_DETECTION</code> and <code>DOCUMENT_TEXT_DETECTION</code> models have been upgraded to newer versions. The API interface and client library will be the same as the previous version. The API follows the same Service Level Agreement.</p>
<p>The legacy models can still be accessed until August 20, 2022. Specify <code>"builtin/legacy"</code> in the model field of a <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/Feature"><code>Feature</code></a> object to get the old model results. <strong>After August 20, 2022, the legacy models will no longer be offered</strong>.</p>
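<p>As an illustration, a request that pins OCR to the legacy model might look like the following sketch (the image URI is a placeholder):</p>
<pre class="prettyprint"><code>{
  "requests": [
    {
      "image": {
        "source": { "imageUri": "gs://my-bucket/my-image.jpg" }
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION",
          "model": "builtin/legacy"
        }
      ]
    }
  ]
}
</code></pre>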
]]>
    </content>
  </entry>

  <entry>
    <title>May 05, 2022</title>
    <id>tag:google.com,2016:vision-release-notes#May_05_2022</id>
    <updated>2022-05-05T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#May_05_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>OCR model migration reverted</strong></p>
<p>We have switched the <code>"builtin/stable"</code> model back to the original version temporarily while we fix a bug resulting from this migration. The week of May 16th, we will update the <code>"builtin/stable"</code> model used for OCR again with the model from <code>"builtin/latest"</code> and create a new release note.</p>
<p>You will be able to use the original model as <code>"builtin/legacy"</code> for 90 more days after we upgrade <code>"builtin/stable"</code>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>May 02, 2022</title>
    <id>tag:google.com,2016:vision-release-notes#May_02_2022</id>
    <updated>2022-05-02T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#May_02_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>OCR model migration</strong></p>
<p>The <code>TEXT_DETECTION</code> and <code>DOCUMENT_TEXT_DETECTION</code> models have been upgraded to newer versions. The API interface and client library will be the same as the previous version. The API follows the same Service Level Agreement.</p>
<p>The legacy models can still be accessed until August 02, 2022. Specify <code>"builtin/legacy"</code> in the model field of a <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/Feature"><code>Feature</code></a> object to get the old model results. <strong>After August 02, 2022, the legacy models will no longer be offered</strong>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>January 21, 2022</title>
    <id>tag:google.com,2016:vision-release-notes#January_21_2022</id>
    <updated>2022-01-21T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#January_21_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>OCR model update</strong></p>
<p>We have updated the <code>"builtin/latest"</code> OCR model with quality improvements. Consequently, customers can continue to test this model for 90 <em>additional</em> days.</p>
<p>Please note that you have 90 days from today to test the new model by specifying <code>"builtin/latest"</code> in the model field of the <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/Feature"><code>Feature</code></a> object. At the end of that period, it will be promoted to the default model, accessible as <code>"builtin/stable"</code>. After that event, the original models will still be available for another 90 days using <code>"builtin/legacy"</code>. If you encounter problems with this upgrade, please contact the Vision API engineering team by submitting a ticket in the <a href="https://issuetracker.google.com/issues/new?component=491447">private issue tracker</a>.</p>
<p>For the original announcement of this change, see the <a href="https://docs.cloud.google.com/vision/docs/release-notes#October_01_2021">October 1, 2021</a> release note.</p>
<p>Region forwarding from the global to the regional endpoint has been deprecated. For more information, see the <a href="https://docs.cloud.google.com/vision/docs/release-notes#October_01_2021">October 1, 2021</a> release note.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>October 01, 2021</title>
    <id>tag:google.com,2016:vision-release-notes#October_01_2021</id>
    <updated>2021-10-01T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#October_01_2021"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>OCR Model Update</strong></p>
<p>An improved model is now available for <a href="https://docs.cloud.google.com/vision/docs/detecting-text">Text Detection (OCR)</a>. The new model can be used with <code>TEXT_DETECTION</code> and <code>DOCUMENT_TEXT_DETECTION</code> features. The same model is used for requests sent to both features.
<strong>With the new model, the distribution of confidence scores of responses will change.</strong> For more information, see <a href="https://docs.cloud.google.com/vision/docs/service-announcements">Service announcements</a>.</p>
<p>Please note that you have 90 days from today to test the new model by specifying <code>"builtin/latest"</code> in the model field of the <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/Feature"><code>Feature</code></a> object. At the end of that period, it will be promoted to the default model, accessible as <code>"builtin/stable"</code>. After that event, the original models will still be available for another 90 days using <code>"builtin/legacy"</code>.
If you encounter problems with this upgrade, please contact the Vision API engineering team by submitting a ticket in the <a href="https://issuetracker.google.com/issues/new?component=491447">private issue tracker</a>.</p>
<h3>Change</h3>
<p><strong>Region forwarding deprecation</strong></p>
<p>In 90 days, specifying the location "us" or "eu" in a request to the global endpoint <code>vision.googleapis.com</code> will no longer be supported. Instead, call the "us" or "eu" regional endpoints (<code>us-vision.googleapis.com</code> or <code>eu-vision.googleapis.com</code>) directly. You can find more information in the <a href="https://docs.cloud.google.com/vision/docs/ocr#setting-the-location-using-the-api">Multi-regional support</a> section of the feature pages.</p>
<h3>Feature</h3>
<p><strong>New multi-regional support for features</strong></p>
<p>The Vision API now offers multi-regional support (<code>us</code> and <code>eu</code>) for the <code>LABEL_DETECTION</code> and <code>SAFE_SEARCH_DETECTION</code> features.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>March 22, 2021</title>
    <id>tag:google.com,2016:vision-release-notes#March_22_2021</id>
    <updated>2021-03-22T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#March_22_2021"/>
    <content type="html"><![CDATA[<h3>Fixed</h3>
<p><strong>EXIF rotation feature fixed</strong></p>
<p>EXIF rotation is now disabled.</p>
<p>For more information, see the March 8, 2021 release note.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>March 08, 2021</title>
    <id>tag:google.com,2016:vision-release-notes#March_08_2021</id>
    <updated>2021-03-08T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#March_08_2021"/>
    <content type="html"><![CDATA[<h3>Issue</h3>
<p><strong>EXIF rotation feature fix</strong></p>
<p>This fix will disable EXIF rotation, a feature activated by the model update mentioned in the <a href="#November_15_2020">November 15, 2020</a> release note. This change affects the <code>DOCUMENT_TEXT_DETECTION</code> and <code>TEXT_DETECTION</code> features.</p>
<p>EXIF rotation will be turned off on <strong>March 22, 2021</strong>. If your usage relies on this specific behavior, please file a feature request with us.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>December 14, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#December_14_2020</id>
    <updated>2020-12-14T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#December_14_2020"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>OCR On-Prem General Availability (GA) release</strong></p>
<p>OCR On-Prem is now generally available for approved customers. OCR On-Prem enables easy integration of Google image text recognition technologies into your on-premises solution.</p>
<p>For more information, refer to the <a href="https://docs.cloud.google.com/vision/on-prem/">product documentation</a>. Approved customers can also view the <a href="https://console.cloud.google.com/marketplace/details/googlecloudvision/ocr-service-cpu">marketplace entry</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>December 07, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#December_07_2020</id>
    <updated>2020-12-07T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#December_07_2020"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>Confidence score field addition for <code>TEXT_DETECTION</code></strong></p>
<p>You can now provide the flag <code>TextDetectionParams.enable_text_detection_confidence_score</code> to a <code>TEXT_DETECTION</code> request to get a confidence score for response information.</p>
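<p>In the REST request, the flag is set through the <code>imageContext</code> of an <code>images:annotate</code> request; a sketch follows, assuming the usual camelCase JSON mapping of the field name (the image URI is a placeholder):</p>
<pre class="prettyprint"><code>{
  "requests": [
    {
      "image": {
        "source": { "imageUri": "gs://my-bucket/my-image.jpg" }
      },
      "features": [
        { "type": "TEXT_DETECTION" }
      ],
      "imageContext": {
        "textDetectionParams": {
          "enableTextDetectionConfidenceScore": true
        }
      }
    }
  ]
}
</code></pre>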
]]>
    </content>
  </entry>

  <entry>
    <title>December 04, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#December_04_2020</id>
    <updated>2020-12-04T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#December_04_2020"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong><code>LABEL_DETECTION</code> model upgrade</strong></p>
<p>The latest <code>LABEL_DETECTION</code> model announced on October 16, 2020 has been promoted to the default model. The original model will still be available for another 60 days using <code>"builtin/legacy"</code>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>November 15, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#November_15_2020</id>
    <updated>2020-11-15T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#November_15_2020"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>OCR legacy model access discontinued</strong></p>
<p>Extended support of the legacy <code>TEXT_DETECTION</code> and <code>DOCUMENT_TEXT_DETECTION</code> models (<code>"builtin/legacy_20190601"</code>) is now discontinued.</p>
<p>See the <a href="#June_11_2020">June 11, 2020</a> release note for more information.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>October 16, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#October_16_2020</id>
    <updated>2020-10-16T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#October_16_2020"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong><code>LABEL_DETECTION</code> model upgrade</strong></p>
<p>The <code>LABEL_DETECTION</code> model will undergo an upgrade over the next 90 days to a newer version. The API interface and client library will be the same as with the previous version. The API follows the same <a href="https://cloud.google.com/vision/sla">Service Level Agreement</a>.</p>
<p>Please note that you have 30 days from today to test the new model by specifying <code>"builtin/latest"</code> in the <code>model</code> field of the <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/Feature"><code>Feature</code></a> object while requesting image annotation. At the end of that period, it will be promoted to the default model accessible as <code>"builtin/stable"</code>. After that event, the original model will still be available for another 60 days using <code>"builtin/legacy"</code>.</p>
<p>If you encounter problems with this upgrade, please contact the Vision API engineering team by submitting a ticket in the <a href="https://issuetracker.google.com/issues/new?component=491447">private issue tracker</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>June 11, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#June_11_2020</id>
    <updated>2020-06-11T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#June_11_2020"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>OCR legacy model access extension</strong></p>
<p>Based on customer feedback, we have decided to extend support of the legacy <code>TEXT_DETECTION</code> and <code>DOCUMENT_TEXT_DETECTION</code> models. These legacy models are accessed by specifying <code>"builtin/legacy_20190601"</code> in the <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/Feature"><code>model</code></a> field of a <code>Feature</code> object.</p>
<p>These models will now be accessible until <strong>November 15, 2020 (6 months from launch date)</strong> to give customers more time to adapt and migrate to the new model.</p>
<p>See the <a href="#May_15_2020">May 15, 2020</a> release note for the original update announcement.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>June 04, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#June_04_2020</id>
    <updated>2020-06-04T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#June_04_2020"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>Access Transparency GA</strong></p>
<p>Access Transparency logging is now Generally Available. If you want to enable
Access Transparency logs, see <a href="https://docs.cloud.google.com/assured-workloads/access-transparency/docs/overview">Enabling Access
Transparency</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>May 15, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#May_15_2020</id>
    <updated>2020-05-15T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#May_15_2020"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>OCR model upgrades</strong></p>
<p><em><strong>Note</strong>: As per the <a href="#June_11_2020">June 11, 2020</a> release note, the legacy models are accessible through November 15, 2020.</em></p>
<p>The <code>TEXT_DETECTION</code> and <code>DOCUMENT_TEXT_DETECTION</code> models have been upgraded to newer versions. The API interface and client library will be the same as the previous version. The API follows the same <a href="https://cloud.google.com/vision/sla">Service Level Agreement</a>.</p>
<p>The legacy models can still be accessed until June 30, 2020. Specify
    <code>"builtin/legacy_20190601"</code> in the <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/Feature"><code>model</code></a> field of a
    <code>Feature</code> object to get the old model results. After June 30, 2020, the old models will no longer be offered.</p>
<p>For more information, see the <a href="https://docs.cloud.google.com/vision/docs/ocr">product documentation</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>April 11, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#April_11_2020</id>
    <updated>2020-04-11T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#April_11_2020"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>CMEK compliance</strong></p>
<p>Vision API is now compliant with customer-managed encryption keys (CMEK). To learn more, visit the <a href="https://docs.cloud.google.com/vision/docs/cmek">CMEK compliance page</a>. Please note that <a href="https://docs.cloud.google.com/vision/product-search/docs/">Product Search</a> is <em>not</em> CMEK compliant at this time.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>February 24, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#February_24_2020</id>
    <updated>2020-02-24T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#February_24_2020"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>SafeSearch Detection update</strong></p>
<p>The SafeSearch model has been upgraded to a newer version. The API interface and client library will be the same as previous version. The API follows the same <a href="https://cloud.google.com/vision/sla">Service Level Agreement</a>.</p>
<p>For more information, see the <a href="https://docs.cloud.google.com/vision/docs/detecting-safe-search">product documentation</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>February 19, 2020</title>
    <id>tag:google.com,2016:vision-release-notes#February_19_2020</id>
    <updated>2020-02-19T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#February_19_2020"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>Cloud Vision API will not return gendered labels such as 'man' and 'woman' after February 19, 2020</strong></p>
<p><a href="https://docs.cloud.google.com/vision/docs/labels">Detecting labels</a> in an image containing humans will result in a non-gendered label, such as 'person', being returned. Our prior approach was to return gendered terms, like 'man' or 'woman'.</p>
<p>Given that a person's gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the <a href="https://ai.google/principles/">Artificial Intelligence Principles at Google</a>, specifically Principle #2: Avoid creating or reinforcing unfair bias.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>December 13, 2019</title>
    <id>tag:google.com,2016:vision-release-notes#December_13_2019</id>
    <updated>2019-12-13T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#December_13_2019"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>Regional endpoints available for OCR</strong></p>
<p>The Vision API now offers <a href="https://docs.cloud.google.com/vision/docs/ocr">multi-regional support</a> (<code>us</code> and <code>eu</code>) for the OCR feature.</p>
<p>Using a multi-region endpoint enables you to configure the Vision API to
store and perform machine learning (OCR) on your data in the United States or European Union.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>October 30, 2019</title>
    <id>tag:google.com,2016:vision-release-notes#October_30_2019</id>
    <updated>2019-10-30T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#October_30_2019"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>Beta feature</strong></p>
<p>The following beta features are available in API version <strong>v1p4beta1</strong>:</p>
<ul>
<li>Celebrity recognition. For more information, see <a href="https://docs.cloud.google.com/vision/docs/celebrity-recognition">Celebrity recognition</a>.</li>
</ul>
]]>
    </content>
  </entry>

  <entry>
    <title>September 12, 2019</title>
    <id>tag:google.com,2016:vision-release-notes#September_12_2019</id>
    <updated>2019-09-12T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#September_12_2019"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>OCR regional support</strong></p>
<p>You can now specify a continent-level region for data processing of OCR requests. For more information, see the OCR how-to pages:</p>
<ul>
<li><a href="https://docs.cloud.google.com/vision/docs/ocr#regionalization">Detect text in images</a></li>
<li><a href="https://docs.cloud.google.com/vision/docs/handwriting#regionalization">Detect handwriting in images</a></li>
<li><a href="https://docs.cloud.google.com/vision/docs/pdf#regionalization">Detect text in files (PDF/TIFF)</a></li>
</ul>
]]>
    </content>
  </entry>

  <entry>
    <title>August 29, 2019</title>
    <id>tag:google.com,2016:vision-release-notes#August_29_2019</id>
    <updated>2019-08-29T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#August_29_2019"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Improved detection models are now default for the following features:</p>
<ul>
<li><a href="https://docs.cloud.google.com/vision/docs/detecting-logos">Logo Detection</a></li>
<li><a href="https://docs.cloud.google.com/vision/docs/detecting-landmarks">Landmark Detection</a></li>
<li><a href="https://docs.cloud.google.com/vision/docs/detecting-crop-hints">Crop hints</a></li>
<li><a href="https://docs.cloud.google.com/vision/docs/object-localizer">Object Localization</a></li>
</ul>
<p>The legacy model can still be accessed for 90 days by specifying
    <code>"builtin/legacy"</code> in the <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/Feature"><code>model</code></a> field of a
    <code>Feature</code> object.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>August 16, 2019</title>
    <id>tag:google.com,2016:vision-release-notes#August_16_2019</id>
    <updated>2019-08-16T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#August_16_2019"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>Spring framework integration</strong></p>
<p>If you write your applications in Java with the <a href="https://spring.io/projects/spring-framework">Spring Framework</a>, we now provide a guide to help you <a href="https://docs.cloud.google.com/vision/docs/adding-spring">add Spring Cloud Vision API to your application</a>. Spring Cloud Vision can make it easier and more efficient to work with Cloud Vision.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>June 07, 2019</title>
    <id>tag:google.com,2016:vision-release-notes#June_07_2019</id>
    <updated>2019-06-07T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#June_07_2019"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>General Availability (GA) release</strong>. Support for online small batch file annotation has been released as GA. For more information, see <a href="https://docs.cloud.google.com/vision/docs/document-small-batch">Online small batch file annotation</a>.</p>
<h3>Feature</h3>
<p><strong>General Availability (GA) release</strong>. Support for offline batch image annotation has been released as GA. For more information, see <a href="https://docs.cloud.google.com/vision/docs/batch">Offline batch image annotation</a>.</p>
<h3>Change</h3>
<p><strong>Model updates</strong></p>
<p>Improved detection models are now available for the following features:</p>
<ul>
<li><a href="https://docs.cloud.google.com/vision/docs/detecting-logos">Logo detection</a></li>
<li><a href="https://docs.cloud.google.com/vision/docs/detecting-landmarks">Landmark detection</a></li>
<li><a href="https://docs.cloud.google.com/vision/docs/detecting-crop-hints">Crop hints</a></li>
</ul>
<p>Specify <code>"builtin/latest"</code> in the <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/Feature"><code>model</code></a> field of a <code>Feature</code> object to use the new models.</p>
<p>We'll support both the current models and the new models for the next 90 days. After 90 days, the current detection models will be deprecated and only the new detection models will be used for all logo, landmark, and crop hint detection requests.</p>
<h3>Change</h3>
<p><strong>Languages update</strong></p>
<p>More languages (with associated <code>languageHint</code> codes) have been added to the list of languages <a href="https://docs.cloud.google.com/vision/docs/languages#supported-langs">supported</a> by <code>TEXT_DETECTION</code> and <code>DOCUMENT_TEXT_DETECTION</code>. <a href="https://docs.cloud.google.com/vision/docs/languages#experimental-langs">Experimentally supported</a> languages and <a href="https://docs.cloud.google.com/vision/docs/languages#mapped-langs">mapped languages</a> lists have also been added.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>May 13, 2019</title>
    <id>tag:google.com,2016:vision-release-notes#May_13_2019</id>
    <updated>2019-05-13T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#May_13_2019"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>OCR Model Updates</strong></p>
<p>An <a href="https://docs.cloud.google.com/vision/docs/release-notes#september2018">improved OCR model</a> is now the default for <a href="https://docs.cloud.google.com/vision/docs/detecting-text">Text detection (OCR)</a>.</p>
<p>The legacy model can still be accessed for 90 days by specifying <code>"builtin/legacy"</code> in the <a href="https://docs.cloud.google.com/vision/docs/reference/rest/v1/Feature"><code>model</code></a> field of a <code>Feature</code> object.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>April 10, 2019</title>
    <id>tag:google.com,2016:vision-release-notes#April_10_2019</id>
    <updated>2019-04-10T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/vision/docs/release-notes#April_10_2019"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>Beta features</strong></p>
<p>The following beta features are available in API version <strong>v1p4beta1</strong>:</p>
<ul>
<li><strong>Online small batch file annotation</strong>. Performs synchronous image detection and annotation for a batch of files (currently "application/pdf", "image/tiff" and "image/gif"). The API will extract at most 5 frames (gif) or pages (pdf or tiff) of your choosing from each file provided and perform detection and annotation for each image extracted. <a href="https://docs.cloud.google.com/vision/docs/document-small-batch">Learn more</a>.</li>
<li><strong>Offline batch image annotation</strong>. Allows users to call any Cloud Vision API feature type on a batch of images and perform asynchronous image detection and annotation on the list of images. <a href="https://docs.cloud.google.com/vision/docs/batch">Learn more</a>.</li>
</ul>
]]>
    </content>
  </entry>

</feed>
