<?xml version="1.0" encoding="UTF-8"?>
<!-- AUTOGENERATED FILE. DO NOT EDIT. -->
<feed xmlns="http://www.w3.org/2005/Atom">
  <id>tag:google.com,2016:video-release-notes</id>
  <title>Video Intelligence API - Release notes</title>
  <link rel="self" href="https://docs.cloud.google.com/feeds/video-release-notes.xml"/>
  <author>
    <name>Google Cloud Platform</name>
  </author>
  <updated>2021-11-01T00:00:00-07:00</updated>

  <entry>
    <title>November 01, 2021</title>
    <id>tag:google.com,2016:video-release-notes#November_01_2021</id>
    <updated>2021-11-01T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#November_01_2021"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>AutoML Action Recognition</strong>: The Streaming API is a Beta feature of the Video Intelligence API that provides real-time versions of several capabilities, such as object tracking and label detection. This launch adds streaming support for AutoML Action Recognition models. Customers can now specify their own custom AutoML model when performing <a href="https://docs.cloud.google.com/video-intelligence/docs/streaming/action-recognition">action recognition</a> on a stream.</p>
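<p>As an illustration, the streaming configuration references the custom model by its full resource name. The dict below sketches the JSON shape of the v1p3beta1 <code>StreamingVideoConfig</code> message; the project, location, and model IDs are placeholders:</p>

```python
import json

# Sketch of a streaming config selecting a custom AutoML Action
# Recognition model. All resource IDs below are placeholders.
streaming_config = {
    "feature": "STREAMING_AUTOML_ACTION_RECOGNITION",
    "automlActionRecognitionConfig": {
        "modelName": "projects/PROJECT_ID/locations/us-central1/models/MODEL_ID"
    },
}
print(json.dumps(streaming_config, indent=2))
```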
]]>
    </content>
  </entry>

  <entry>
    <title>October 08, 2021</title>
    <id>tag:google.com,2016:video-release-notes#October_08_2021</id>
    <updated>2021-10-08T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#October_08_2021"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>The SHOT_CHANGE_DETECTION model will be upgraded to a newer version over the next 90 days. The API interface and client library will remain the same as the previous version.</p>
<p>Note that you have 30 days from today to test the new model by specifying "builtin/latest" in the model field of the config object for shot change detection. At the end of this 30-day period, the new model will be promoted to the default model, accessible as "builtin/stable". After that, the original model, currently accessible by default or as "builtin/stable", will remain available for another 60 days using "builtin/legacy".</p>
<p>Until this 30-day period ends, the model formerly accessible as "builtin/latest" will be available as "builtin/legacy". Thank you for your feedback on that model. The new model launched today as "builtin/latest" improves on both that model and the current default "builtin/stable" model.</p>
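<p>For example, a <code>videos:annotate</code> request body that opts in to the new model during the test window might look like the following sketch (the input URI is a placeholder):</p>

```python
import json

# Sketch of an annotate request that tests the new shot change model.
request_body = {
    "inputUri": "gs://YOUR_BUCKET/YOUR_VIDEO.mp4",  # placeholder
    "features": ["SHOT_CHANGE_DETECTION"],
    "videoContext": {
        # Use "builtin/stable" (or omit the field) for the default model.
        "shotChangeDetectionConfig": {"model": "builtin/latest"}
    },
}
print(json.dumps(request_body, indent=2))
```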
<p>If you encounter problems with this upgrade, contact the Video Intelligence API engineering team by submitting a ticket in the <a href="https://issuetracker.google.com/issues/new?component=190865&amp;template=1161103&amp;pli=1">private issue tracker</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>September 20, 2021</title>
    <id>tag:google.com,2016:video-release-notes#September_20_2021</id>
    <updated>2021-09-20T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#September_20_2021"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>The CELEBRITY_RECOGNITION model will be upgraded to a newer version over the next 90 days. The API interface and client library will remain the same as the previous version, and the API follows the same Service Level Agreement (SLA).</p>
<p>You have 30 days from this release date to test the new model. To do so, specify "builtin/latest" in the model field of the Feature object when requesting video annotation. After the end of this 30-day period, the new version will be promoted to the default model and accessible as "builtin/stable". The original model will then remain available for another 60 days using "builtin/legacy".</p>
<p>If you encounter problems with this upgrade, contact the Video Intelligence API engineering team by submitting a ticket in the private issue tracker.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>May 04, 2021</title>
    <id>tag:google.com,2016:video-release-notes#May_04_2021</id>
    <updated>2021-05-04T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#May_04_2021"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>The following features are available in the Video Intelligence API version v1:</p>
<p><strong>Face detection</strong>: Locate faces within a video, and identify attributes such as whether glasses are being worn. <a href="https://docs.cloud.google.com/video-intelligence/docs/face-detection">Learn more</a></p>
<p><strong>Person detection</strong>: Locate people in a video, and identify attributes and 2D landmarks. <a href="https://docs.cloud.google.com/video-intelligence/docs/people-detection">Learn more</a></p>
<p>This GA launch brings significant quality improvement to both features.</p>
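<p>As a sketch, a v1 <code>videos:annotate</code> request body enabling both features with their per-feature config options might look like the following (the input URI is a placeholder; check field availability against the current API reference):</p>

```python
import json

# Sketch of a request enabling face and person detection with attributes.
request_body = {
    "inputUri": "gs://YOUR_BUCKET/YOUR_VIDEO.mp4",  # placeholder
    "features": ["FACE_DETECTION", "PERSON_DETECTION"],
    "videoContext": {
        "faceDetectionConfig": {
            "includeBoundingBoxes": True,
            "includeAttributes": True,   # e.g. glasses
        },
        "personDetectionConfig": {
            "includeBoundingBoxes": True,
            "includePoseLandmarks": True,  # 2D landmarks
            "includeAttributes": True,
        },
    },
}
print(json.dumps(request_body, indent=2))
```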
]]>
    </content>
  </entry>

  <entry>
    <title>September 14, 2020</title>
    <id>tag:google.com,2016:video-release-notes#September_14_2020</id>
    <updated>2020-09-14T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#September_14_2020"/>
    <content type="html"><![CDATA[<h3>Issue</h3>
<p>Bug fix for shot change detection API: Tuned internal model parameters to reduce false positives under certain scenarios.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>May 21, 2020</title>
    <id>tag:google.com,2016:video-release-notes#May_21_2020</id>
    <updated>2020-05-21T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#May_21_2020"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>The following features are available in the Video Intelligence API version v1p3beta1:</p>
<p><strong>Face detection</strong>: Locate faces within a video, and identify attributes such as whether glasses are being worn. <a href="https://docs.cloud.google.com/video-intelligence/docs/face-detection">Learn more</a></p>
<p><strong>Person detection</strong>: Locate people in a video, and identify attributes and 2D landmarks. <a href="https://docs.cloud.google.com/video-intelligence/docs/people-detection">Learn more</a></p>
]]>
    </content>
  </entry>

  <entry>
    <title>March 31, 2020</title>
    <id>tag:google.com,2016:video-release-notes#March_31_2020</id>
    <updated>2020-03-31T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#March_31_2020"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>The following GA feature is available in the Video Intelligence API version v1:</p>
<p><strong>Logo recognition</strong>: Detect, track, and recognize the presence of over 100,000 brands and logos in video content. <a href="https://docs.cloud.google.com/video-intelligence/docs/logo-recognition">Learn more</a></p>
]]>
    </content>
  </entry>

  <entry>
    <title>October 30, 2019</title>
    <id>tag:google.com,2016:video-release-notes#October_30_2019</id>
    <updated>2019-10-30T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#October_30_2019"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>Cloud Video Intelligence now offers
 <a href="https://docs.cloud.google.com/video-intelligence/docs/celebrity-recognition">celebrity recognition</a>
to select media &amp; entertainment companies and their designated partners. With celebrity recognition, you can inspect your video content to detect and track human faces that appear in the input video or video segment. The Video Intelligence API then compares the faces against a database of celebrities. This feature is in beta; access to the feature is restricted.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>October 04, 2019</title>
    <id>tag:google.com,2016:video-release-notes#October_04_2019</id>
    <updated>2019-10-04T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#October_04_2019"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>Logo recognition is now available as a beta feature. <a href="https://docs.cloud.google.com/video-intelligence/docs/logo-recognition">Learn more</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>September 17, 2019</title>
    <id>tag:google.com,2016:video-release-notes#September_17_2019</id>
    <updated>2019-09-17T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#September_17_2019"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>You can now specify which model you want to use with <a href="https://docs.cloud.google.com/video-intelligence/docs/analyze-labels"><code>LABEL_DETECTION</code></a> and
<a href="https://docs.cloud.google.com/video-intelligence/docs/analyze-shots"><code>SHOT_CHANGE_DETECTION</code></a>.
To specify a model using the v1 version of the service, set the <code>model</code> field of the
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos/annotate#labeldetectionconfig"><code>LabelDetectionConfig</code></a>
or
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos/annotate#shotchangedetectionconfig"><code>ShotChangeDetectionConfig</code></a>
to either <code>builtin/stable</code> or <code>builtin/latest</code>.</p>
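<p>As a concrete sketch, a v1 request that pins each feature to a model version could look like this (the input URI is a placeholder):</p>

```python
import json

# Sketch: select model versions per feature via the videoContext configs.
request_body = {
    "inputUri": "gs://YOUR_BUCKET/YOUR_VIDEO.mp4",  # placeholder
    "features": ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"],
    "videoContext": {
        "labelDetectionConfig": {"model": "builtin/latest"},
        "shotChangeDetectionConfig": {"model": "builtin/stable"},
    },
}
print(json.dumps(request_body, indent=2))
```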
<h3>Feature</h3>
<p>You can now specify your own custom AutoML model when performing
<a href="https://docs.cloud.google.com/video-intelligence/docs/streaming/label-analysis">label detection</a> or
<a href="https://docs.cloud.google.com/video-intelligence/docs/streaming/object-tracking">object tracking</a>
on a stream. This feature is in beta.</p>
<h3>Feature</h3>
<p>Cloud Video Intelligence rolled out improved models for video annotation using the <a href="https://docs.cloud.google.com/video-intelligence/docs/object-tracking"><code>OBJECT_TRACKING</code></a> and
<a href="https://docs.cloud.google.com/video-intelligence/docs/text-detection"><code>TEXT_DETECTION</code></a> features with the v1 version of the service.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>June 26, 2019</title>
    <id>tag:google.com,2016:video-release-notes#June_26_2019</id>
    <updated>2019-06-26T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#June_26_2019"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Results returned from asynchronous annotation now provide resource names in the following format: <code>projects/PROJECT_NAME/locations/us-west1/operations/OPERATION_ID</code>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>April 10, 2019</title>
    <id>tag:google.com,2016:video-release-notes#April_10_2019</id>
    <updated>2019-04-10T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#April_10_2019"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>Live streaming video annotation is available as a beta feature. <a href="https://docs.cloud.google.com/video-intelligence/docs/streaming/live-streaming-overview">Learn more</a>.</p>
<h3>Feature</h3>
<p>Object tracking is generally available for use. <a href="https://docs.cloud.google.com/video-intelligence/docs/object-tracking">Learn more</a>.</p>
<h3>Feature</h3>
<p>Streaming from a file is available as a beta feature. <a href="https://docs.cloud.google.com/video-intelligence/docs/streaming/streaming">Learn more</a>.</p>
<h3>Feature</h3>
<p>Text detection (OCR) is generally available for use. <a href="https://docs.cloud.google.com/video-intelligence/docs/text-detection">Learn more</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>December 07, 2018</title>
    <id>tag:google.com,2016:video-release-notes#December_07_2018</id>
    <updated>2018-12-07T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#December_07_2018"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>The Video Intelligence API enables you to transcribe text from speech in the audio of a video. Speech transcription can recognize multiple speakers, filter out profanity, add punctuation to the transcribed text, and more. For more information, see <a href="https://docs.cloud.google.com/video-intelligence/docs/transcription">Speech Transcription</a>. This feature is generally available.</p>
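<p>A sketch of a request body exercising these options (the input URI is a placeholder, and only <code>languageCode</code> is required):</p>

```python
import json

# Sketch of a speech transcription request with common options enabled.
request_body = {
    "inputUri": "gs://YOUR_BUCKET/YOUR_VIDEO.mp4",  # placeholder
    "features": ["SPEECH_TRANSCRIPTION"],
    "videoContext": {
        "speechTranscriptionConfig": {
            "languageCode": "en-US",
            "enableAutomaticPunctuation": True,
            "filterProfanity": True,
            "enableSpeakerDiarization": True,
        }
    },
}
print(json.dumps(request_body, indent=2))
```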
]]>
    </content>
  </entry>

  <entry>
    <title>October 26, 2018</title>
    <id>tag:google.com,2016:video-release-notes#October_26_2018</id>
    <updated>2018-10-26T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#October_26_2018"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>Cloud Video Intelligence allows you to <a href="https://docs.cloud.google.com/video-intelligence/docs/object-tracking">track an object</a> from one moment to the next in a video. This feature is in beta.</p>
<h3>Feature</h3>
<p>You can use the Video Intelligence API to <a href="https://docs.cloud.google.com/video-intelligence/docs/text-detection">detect text (OCR)</a> in a video. This feature is in beta.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>August 08, 2018</title>
    <id>tag:google.com,2016:video-release-notes#August_08_2018</id>
    <updated>2018-08-08T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#August_08_2018"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>Speech transcription is available as a beta feature. <a href="https://docs.cloud.google.com/video-intelligence/docs/transcription">Learn more</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>June 27, 2018</title>
    <id>tag:google.com,2016:video-release-notes#June_27_2018</id>
    <updated>2018-06-27T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#June_27_2018"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>An improved label detection model is now available. The new model:</p>
<ul>
<li>Leverages audio content in videos to improve label detection.</li>
<li>Is trained using more features and better calibration ground truth.</li>
</ul>
<p>To instruct the Cloud Video Intelligence API to use the new label detection model when servicing your annotation request, set the <code>model</code> field of your <a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos/annotate#labeldetectionconfig"><code>LabelDetectionConfig</code></a>
to <code>builtin/latest</code>. </p>
<p>We'll support both the current model and the new model for the next 90 days. After 90 days, the current label detection model will be deprecated, and only the new label detection model will be used for all label detection requests.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>November 30, 2017</title>
    <id>tag:google.com,2016:video-release-notes#November_30_2017</id>
    <updated>2017-11-30T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#November_30_2017"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>Video Intelligence API is GA</strong>: The Video Intelligence API has graduated out of beta and has reached v1. All API endpoints are updated to use <code>https://videointelligence.googleapis.com/v1/</code>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>September 21, 2017</title>
    <id>tag:google.com,2016:video-release-notes#September_21_2017</id>
    <updated>2017-09-21T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#September_21_2017"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p><strong>Explicit content detection</strong>: SafeSearch has been renamed to Explicit Content Detection. Explicit Content Detection inspects an input video for frame-level imagery that could be considered adult content.</p>
<p>Explicit Content Detection is performed by using the
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos/annotate">annotate</a>
method and specifying an
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos#Feature">EXPLICIT_CONTENT_DETECTION</a>
request.</p>
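<p>A minimal request sketch (the input URI is a placeholder); each annotated frame in the response reports a likelihood that the content is explicit:</p>

```python
import json

# Sketch of a minimal explicit content detection request.
request_body = {
    "inputUri": "gs://YOUR_BUCKET/YOUR_VIDEO.mp4",  # placeholder
    "features": ["EXPLICIT_CONTENT_DETECTION"],
}
print(json.dumps(request_body, indent=2))
```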
<h3>Change</h3>
<p>The
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/Shared.Types/AnnotateVideoResponse#ExplicitContentAnnotation"><code>ExplicitContentAnnotation</code></a>
and
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/Shared.Types/AnnotateVideoResponse#ExplicitContentFrame"><code>ExplicitContentFrame</code></a>
types replace the <code>SafeSearchAnnotation</code> type. The <code>adult</code> field has been renamed to <code>pornographyLikelihood</code> and the <code>time</code> field has been renamed to <code>timeOffset</code>. The <code>spoof</code>, <code>medical</code>, <code>violent</code>, and <code>racy</code> fields have also been removed. The <code>timeOffset</code> field returns a value of type <code>Duration</code> instead of type <code>int64</code>.</p>
<h3>Change</h3>
<p>The <code>labelAnnotations</code> field returned in the response for a video annotation request has been replaced with the <code>segmentLabelAnnotations</code>, <code>shotLabelAnnotations</code>, and <code>frameLabelAnnotations</code> fields. This provides specific label annotations for each level of the video. The <code>LabelLevel</code> enum has been removed.</p>
<p>All annotations have been updated to return an array of
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/Shared.Types/AnnotateVideoResponse#labelframe"><code>LabelFrame</code></a>
types, an array of
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/Shared.Types/AnnotateVideoResponse#LabelSegment"><code>LabelSegment</code></a>
types, a list of entities, and a list of entity categories. Each <code>LabelFrame</code> and <code>LabelSegment</code> includes a confidence value.</p>
<p>The <code>description</code> field has been replaced with an
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/Shared.Types/AnnotateVideoResponse#entity"><code>Entity</code></a>
type, which includes both a <code>description</code> and an <code>entity_id</code> field. You can use the entity ID to find more information about some entities in the <a href="https://developers.google.com/knowledge-graph/">Google Knowledge Graph Search API</a>.</p>
<p>The
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos#LabelDetectionMode"><code>LabelDetectionMode</code></a>
enum remains unchanged and can be set as the <code>label_detection_mode</code> field of the
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos#LabelDetectionConfig"><code>LabelDetectionConfig</code></a>
for your request, along with the <code>stationary_camera</code> field. This configuration applies only to labels. Other Video Intelligence features do not yet have any feature-specific configuration options.</p>
<h3>Change</h3>
<p>The 
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/Shared.Types/AnnotateVideoResponse#VideoSegment"><code>VideoSegment</code></a>
type is now used only in the context configuration
(<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos#VideoContext"><code>VideoContext</code></a>) of the 
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rpc/google.cloud.videointelligence.v1#google.cloud.videointelligence.v1.AnnotateVideoRequest"><code>AnnotateVideoRequest</code></a> type to allow you to pass multiple video segments in a request.</p>
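<p>For example, a request sketch restricting analysis to two segments of the input (the URI and time offsets are placeholders):</p>

```python
import json

# Sketch: pass multiple VideoSegments via the VideoContext.
request_body = {
    "inputUri": "gs://YOUR_BUCKET/YOUR_VIDEO.mp4",  # placeholder
    "features": ["LABEL_DETECTION"],
    "videoContext": {
        "segments": [
            {"startTimeOffset": "0s", "endTimeOffset": "30s"},
            {"startTimeOffset": "60s", "endTimeOffset": "90s"},
        ]
    },
}
print(json.dumps(request_body, indent=2))
```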
]]>
    </content>
  </entry>

  <entry>
    <title>June 26, 2017</title>
    <id>tag:google.com,2016:video-release-notes#June_26_2017</id>
    <updated>2017-06-26T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#June_26_2017"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>SafeSearch detection</strong>: SafeSearch inspects an input video for frame-level imagery that could be considered adult content.</p>
<p>SafeSearch is performed through the
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos/annotate"><code>annotate</code></a>
method using a
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos#Feature"><code>SAFE_SEARCH_DETECTION</code></a> request.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>May 18, 2017</title>
    <id>tag:google.com,2016:video-release-notes#May_18_2017</id>
    <updated>2017-05-18T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/video-intelligence/docs/release-notes#May_18_2017"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>The Cloud Video Intelligence API is available in beta.</p>
<h3>Feature</h3>
<p><strong>Label detection</strong>: Label detection inspects an input video and detects entities that occur throughout the length of the video or video segment. Label detection is performed through the <a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos/annotate"><code>annotate</code></a> method using a <a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos#Feature"><code>LABEL_DETECTION</code></a> request.</p>
<h3>Feature</h3>
<p><strong>Shot change detection</strong>: Shot detection inspects an input video, and detects changes in shots (scenes) that occur throughout the length of the video or video segment. Shot detection is performed through the
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos/annotate"><code>annotate</code></a>
method using a 
<a href="https://docs.cloud.google.com/video-intelligence/docs/reference/rest/v1/videos#Feature"><code>SHOT_CHANGE_DETECTION</code></a>
request.</p>
]]>
    </content>
  </entry>

</feed>
