<?xml version="1.0" encoding="UTF-8"?>
<!-- AUTOGENERATED FILE. DO NOT EDIT. -->
<feed xmlns="http://www.w3.org/2005/Atom">
  <id>tag:google.com,2016:tpu-release-notes</id>
  <title>Cloud TPU - Release notes</title>
  <link rel="self" href="https://docs.cloud.google.com/feeds/tpu-release-notes.xml"/>
  <author>
    <name>Google Cloud Platform</name>
  </author>
  <updated>2026-03-31T00:00:00-07:00</updated>

  <entry>
    <title>March 31, 2026</title>
    <id>tag:google.com,2016:tpu-release-notes#March_31_2026</id>
    <updated>2026-03-31T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#March_31_2026"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>Generally available</strong>: TPU7x is generally available (GA). TPU7x is the first
release within the Ironwood family, Google Cloud's seventh-generation TPU. TPU7x
supports large-scale AI training and inference, providing performance and
cost-effectiveness for demanding workloads such as large language models (LLMs),
mixture of experts (MoEs), and diffusion models. For more information, see the
<a href="https://docs.cloud.google.com/tpu/docs/tpu7x">TPU7x (Ironwood) documentation</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>November 24, 2025</title>
    <id>tag:google.com,2016:tpu-release-notes#November_24_2025</id>
    <updated>2025-11-24T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#November_24_2025"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>Preview</strong>: TPU7x is available in Preview. TPU7x is the first release within
the Ironwood family, Google Cloud's seventh-generation TPU. TPU7x supports
large-scale AI training and inference, providing performance and
cost-effectiveness for demanding workloads such as large language models (LLMs),
mixture of experts (MoEs), and diffusion models. For more information, see the
<a href="https://docs.cloud.google.com/tpu/docs/tpu7x">TPU7x (Ironwood) documentation</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>May 22, 2025</title>
    <id>tag:google.com,2016:tpu-release-notes#May_22_2025</id>
    <updated>2025-05-22T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#May_22_2025"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><strong>Public preview</strong>: You can request Cloud TPUs using future reservations in calendar mode. This mode, powered by the <a href="https://cloud.google.com/blog/products/compute/introducing-dynamic-workload-scheduler">Dynamic Workload Scheduler</a>, lets you check TPU availability up to 120 days in advance and request capacity based on your schedule. You can use calendar mode to reserve TPUs for 1 to 90 days. Requesting a short-term reservation with calendar mode is a good fit for training and experimentation workloads that require precise start times and have a defined duration. For more information, see <a href="https://docs.cloud.google.com/tpu/docs/calendar-mode-reservation">Request a short-term reservation using calendar mode</a>.</p>
<h3>Feature</h3>
<p><strong>Public preview</strong>: You can enable reservation sharing for Cloud TPU. This feature lets you share a reservation across multiple projects. You can also share a reservation with Vertex AI for training or serving workloads. For more information, see <a href="https://docs.cloud.google.com/tpu/docs/share-reservation">Share a Cloud TPU reservation</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>March 31, 2025</title>
    <id>tag:google.com,2016:tpu-release-notes#March_31_2025</id>
    <updated>2025-03-31T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#March_31_2025"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p><a href="https://docs.cloud.google.com/tpu/docs/request-using-flex-start">Flex-start for Cloud TPU</a>, powered by <a href="https://cloud.google.com/blog/products/compute/introducing-dynamic-workload-scheduler">Dynamic Workload Scheduler</a>, is available in <a href="https://cloud.google.com/products?e=48754805&amp;hl=en#product-launch-stages">Preview</a>. Flex-start is a flexible and cost-effective consumption option for AI workloads. Flex-start enables you to dynamically provision TPUs for up to 7 days using the queued resources API, without long-term reservations. This option is ideal for quick experimentation, small-scale testing, dynamic inference provisioning, and model fine-tuning. For more information about Flex-start for Cloud TPU, see <a href="https://docs.cloud.google.com/tpu/docs/request-using-flex-start">Request Cloud TPUs using Flex-start</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>December 16, 2024</title>
    <id>tag:google.com,2016:tpu-release-notes#December_16_2024</id>
    <updated>2024-12-16T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#December_16_2024"/>
    <content type="html"><![CDATA[<h3>Announcement</h3>
<p>Trillium (v6e), Google Cloud's sixth-generation and latest TPU, is now generally available (GA). Trillium is fully integrated with the AI Hypercomputer architecture to deliver compelling value to Google Cloud AI customers.</p>
<p>Google used Trillium TPUs to train Gemini 2.0, Google's most capable AI model yet, and now enterprises and startups alike can take advantage of the same powerful, efficient, and sustainable infrastructure. Trillium is generally available to Google Cloud customers, and the first large tranches of Trillium capacity are being delivered to some of the biggest Google Cloud customers this week.</p>
<p>Here are some of the key improvements that Trillium delivers over the prior generations, v5e and v5p:</p>
<ul>
<li><p>Over 4x improvement in training performance.</p></li>
<li><p>Up to 3x increase in inference throughput.</p></li>
<li><p>A 67% increase in energy efficiency.</p></li>
<li><p>An impressive 4.7x increase in peak compute performance per chip.</p></li>
<li><p>Double the High Bandwidth Memory (HBM) capacity.</p></li>
<li><p>Double the Interchip Interconnect (ICI) bandwidth.</p></li>
<li><p>100,000 Trillium chips per Jupiter network fabric with 13 Petabits/sec of bisection bandwidth, capable of scaling a single distributed training job to hundreds of thousands of accelerators. </p></li>
<li><p>Trillium provides up to 2.1x increase in performance per dollar over Cloud TPU v5e and up to 2.5x increase in performance per dollar over Cloud TPU v5p in training dense LLMs like Llama2-70b and Llama3.1-405b.</p></li>
<li><p>GKE integration enables seamless AI workload orchestration using Google Compute Engine managed instance groups (MIGs), including XPK for faster iterative development.</p></li>
<li><p>Multislice training with Trillium scales from one to hundreds of thousands of chips across pods using DCN.</p></li>
<li><p>Training and serving fungibility enables use of the same Cloud TPU quota for both training and inference.</p></li>
<li><p>Support for collection scheduling, with collection SLOs defended.</p></li>
<li><p>Full-host VM support to enable inference support for larger models (70B+ parameters).</p></li>
<li><p>Official libtpu releases that guarantee stability across all three frameworks (JAX, PyTorch/XLA, and TensorFlow).</p></li>
</ul>
<p>These enhancements enable Trillium to excel across a wide range of AI workloads, including:</p>
<ul>
<li><p>Scaling AI training workloads like LLMs including dense and Mixture of Experts (MoE) models</p></li>
<li><p>Inference performance and collection scheduling</p></li>
<li><p>Embedding-intensive models acceleration</p></li>
<li><p>Delivering training and inference price-performance</p></li>
</ul>
]]>
    </content>
  </entry>

  <entry>
    <title>November 01, 2024</title>
    <id>tag:google.com,2016:tpu-release-notes#November_01_2024</id>
    <updated>2024-11-01T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#November_01_2024"/>
    <content type="html"><![CDATA[<h3>Feature</h3>
<p>You can now request Cloud TPUs as queued resources in the Google Cloud Console. Queuing your request for TPU resources can help alleviate stockout issues. If the resources you request are not immediately available, your request is added to a queue until the request succeeds or you delete it. You can also specify a time range in which you want to fulfill the resource request. For more information, see <a href="https://docs.cloud.google.com/tpu/docs/queued-resources">Manage queued resources</a>.</p>
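<p>As a hedged sketch of the command-line path described above, the following uses the <code>gcloud compute tpus queued-resources create</code> command from the queued resources documentation; the resource names, zone, and runtime version are placeholders chosen for illustration:</p>

```shell
# Sketch: request a v4-8 TPU VM as a queued resource, keeping the request
# in the queue for at most 6 hours if capacity is not immediately available.
# "my-queued-resource", "my-tpu-node", and the zone are placeholder values.
gcloud compute tpus queued-resources create my-queued-resource \
  --node-id=my-tpu-node \
  --zone=us-central2-b \
  --accelerator-type=v4-8 \
  --runtime-version=tpu-ubuntu2204-base \
  --valid-until-duration=6h
```

If the request cannot be fulfilled within the window given by <code>--valid-until-duration</code>, it leaves the queue without provisioning resources.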
<h3>Feature</h3>
<p>Creating a Multislice TPU environment is now available in the Google Cloud Console. You can use Multislice to run training jobs using multiple TPU slices within a single Pod or on slices in multiple Pods. You must use a queued resource request to create a Multislice environment. For more information, see <a href="https://docs.cloud.google.com/tpu/docs/multislice-introduction">Cloud TPU Multislice overview</a>.</p>
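<p>As an illustrative sketch of the queued resource request for a Multislice environment (resource names and zone are placeholders; the <code>--node-count</code> and <code>--node-prefix</code> flags are taken from the Multislice documentation, so verify them against the current CLI reference):</p>

```shell
# Sketch: request a Multislice environment of four v4-16 slices as a single
# queued resource. "my-multislice-qr" and "my-slice" are placeholder names;
# the created slices are named my-slice-0 through my-slice-3.
gcloud compute tpus queued-resources create my-multislice-qr \
  --node-count=4 \
  --node-prefix=my-slice \
  --zone=us-central2-b \
  --accelerator-type=v4-16 \
  --runtime-version=tpu-ubuntu2204-base
```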
]]>
    </content>
  </entry>

  <entry>
    <title>March 11, 2024</title>
    <id>tag:google.com,2016:tpu-release-notes#March_11_2024</id>
    <updated>2024-03-11T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#March_11_2024"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.16.1. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.16.1">TensorFlow 2.16.1 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>December 04, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#December_04_2023</id>
    <updated>2023-12-04T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#December_04_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.14.1. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.14.1">TensorFlow 2.14.1 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>November 13, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#November_13_2023</id>
    <updated>2023-11-13T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#November_13_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.15.0, which adds support for PJRT. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.15.0">TensorFlow 2.15.0 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>October 05, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#October_05_2023</id>
    <updated>2023-10-05T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#October_05_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.13.1. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.13.1">TensorFlow 2.13.1 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>September 27, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#September_27_2023</id>
    <updated>2023-09-27T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#September_27_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.14.0. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.14.0">TensorFlow 2.14.0 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>August 29, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#August_29_2023</id>
    <updated>2023-08-29T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#August_29_2023"/>
    <content type="html"><![CDATA[<h3>Announcement</h3>
<p>You can now create Cloud Tensor Processing Unit (TPU) nodes in Google Kubernetes Engine (GKE) to run AI workloads, from training to inference. GKE manages your cluster by automating TPU resource provisioning, scaling, scheduling, repairing, and upgrading. GKE provides TPU infrastructure metrics in Cloud Monitoring, TPU logs, and error reports for better visibility and monitoring of TPU node pools in GKE clusters. TPUs are available with GKE Standard clusters. GKE supports TPU v4 in version 1.26.1-gke.1500 and later, and TPU v5e in version 1.27.2-gke.1500 and later. To learn more, see <a href="https://docs.cloud.google.com/tpu/docs/tpus-in-gke">TPUs in GKE introduction</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>July 21, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#July_21_2023</id>
    <updated>2023-07-21T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#July_21_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.12.1. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.12.1">TensorFlow 2.12.1 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>July 10, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#July_10_2023</id>
    <updated>2023-07-10T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#July_10_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.13.0. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.13.0">TensorFlow 2.13.0 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>June 07, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#June_07_2023</id>
    <updated>2023-06-07T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#June_07_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>You can now view historical logs of maintenance events on your TPU in <a href="https://cloud.google.com/tpu/docs/audit-logs#audited_operations">system
event audit logs</a>. For more information, see the <a href="https://docs.cloud.google.com/tpu/docs/maintenance-events">maintenance events documentation</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>March 31, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#March_31_2023</id>
    <updated>2023-03-31T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#March_31_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.11.1. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.11.1">TensorFlow 2.11.1 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>March 27, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#March_27_2023</id>
    <updated>2023-03-27T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#March_27_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.12.0. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.12.0">TensorFlow 2.12 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>March 24, 2023</title>
    <id>tag:google.com,2016:tpu-release-notes#March_24_2023</id>
    <updated>2023-03-24T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#March_24_2023"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPUs now support the <a href="https://github.com/pytorch/pytorch/releases">PyTorch 2.0 release</a> via the PyTorch/XLA integration. On top of the underlying improvements and bug fixes in the PyTorch 2.0 release, this release introduces several features and PyTorch/XLA-specific bug fixes.</p>
<h3 id="beta_features"><strong>Beta Features</strong></h3>
<h4 id="pjrt_runtime">PJRT runtime</h4>
<ul>
<li>Check out our newest <a href="https://github.com/pytorch/xla/blob/r2.0/docs/pjrt.md">document</a>; PJRT is the default runtime in 2.0.</li>
<li>New implementation of <code>xm.rendezvous</code> with XLA collective communication, which scales better (<a href="https://github.com/pytorch/xla/pull/4181">#4181</a>)</li>
<li>New PJRT TPU backend through the C-API (<a href="https://github.com/pytorch/xla/pull/4077">#4077</a>)</li>
<li>Use PJRT by default if no runtime is configured (<a href="https://github.com/pytorch/xla/pull/4599">#4599</a>)</li>
<li>Experimental support for torch.distributed and DDP on TPU v2 and v3 (<a href="https://github.com/pytorch/xla/pull/4520">#4520</a>)</li>
</ul>
<h4 id="fsdp">FSDP</h4>
<ul>
<li>Add <code>auto_wrap_policy</code> into XLA FSDP for automatic wrapping (<a href="https://github.com/pytorch/xla/pull/4318">#4318</a>)</li>
</ul>
<h3 id="stable_features"><strong>Stable Features</strong></h3>
<h4 id="lazy_tensor_core_migration">Lazy Tensor Core Migration</h4>
<ul>
<li>Migration is complete; check out this <a href="https://dev-discuss.pytorch.org/t/pytorch-xla-2022-q4-dev-update/961">dev discussion</a> for more details.</li>
<li>Naively inherits LazyTensor (<a href="https://github.com/pytorch/xla/pull/4271">#4271</a>)</li>
<li>Adopt even more LazyTensor interfaces (<a href="https://github.com/pytorch/xla/pull/4317">#4317</a>)</li>
<li>Introduce XLAGraphExecutor (<a href="https://github.com/pytorch/xla/pull/4270">#4270</a>)</li>
<li>Inherits LazyGraphExecutor (<a href="https://github.com/pytorch/xla/pull/4296">#4296</a>)</li>
<li>Adopt more LazyGraphExecutor virtual interfaces (<a href="https://github.com/pytorch/xla/pull/4314">#4314</a>)</li>
<li>Rollback to use <code>xla::Shape</code> instead of <code>torch::lazy::Shape</code> (<a href="https://github.com/pytorch/xla/pull/4111">#4111</a>)</li>
<li>Use TORCH_LAZY_COUNTER/METRIC (<a href="https://github.com/pytorch/xla/pull/4208">#4208</a>)</li>
</ul>
<h4 id="improvements_additions">Improvements &amp; Additions</h4>
<ul>
<li>Add an option to increase the worker thread efficiency for data loading (<a href="https://github.com/pytorch/xla/pull/4727">#4727</a>)</li>
<li>Improve numerical stability of torch.sigmoid (<a href="https://github.com/pytorch/xla/pull/4311">#4311</a>)</li>
<li>Add an API to clear counters and metrics (<a href="https://github.com/pytorch/xla/pull/4109">#4109</a>)</li>
<li>Add <code>met.short_metrics_report</code> to display more concise metrics report (<a href="https://github.com/pytorch/xla/pull/4148">#4148</a>)</li>
<li>Document environment variables (<a href="https://github.com/pytorch/xla/pull/4273">#4273</a>)</li>
<li>Op Lowering
<ul>
<li><code>_linalg_svd</code> (<a href="https://github.com/pytorch/xla/pull/4537">#4537</a>)</li>
<li><code>Upsample_bilinear2d</code> with scale (<a href="https://github.com/pytorch/xla/pull/4464">#4464</a>)</li>
</ul></li>
</ul>
<h3 id="experimental_features"><strong>Experimental Features</strong></h3>
<h4 id="torchdynamo_torchcompile_support">TorchDynamo (<code>torch.compile</code>) support</h4>
<ul>
<li>Check out our newest <a href="https://github.com/pytorch/xla/blob/r2.0/docs/dynamo.md">doc</a>.</li>
<li>Dynamo bridge python binding (<a href="https://github.com/pytorch/xla/pull/4119">#4119</a>)</li>
<li>Dynamo bridge backend implementation (<a href="https://github.com/pytorch/xla/pull/4523">#4523</a>)</li>
<li>Training optimization: make execution async (<a href="https://github.com/pytorch/xla/pull/4425">#4425</a>)</li>
<li>Training optimization: reduce graph execution per step (<a href="https://github.com/pytorch/xla/pull/4523">#4523</a>)</li>
</ul>
<h4 id="pytorchxla_gspmd_on_single_host">PyTorch/XLA GSPMD on single host</h4>
<ul>
<li>Preserve parameter sharding with sharded data placeholder (<a href="https://github.com/pytorch/xla/pull/4721">#4721</a>)</li>
<li>Transfer shards from server to host (<a href="https://github.com/pytorch/xla/pull/4508">#4508</a>)</li>
<li>Store the sharding annotation within XLATensor (<a href="https://github.com/pytorch/xla/pull/4390">#4390</a>)</li>
<li>Use d2d replication for more efficient input sharding (<a href="https://github.com/pytorch/xla/pull/4336">#4336</a>)</li>
<li>Mesh to support custom device order (<a href="https://github.com/pytorch/xla/pull/4162">#4162</a>)</li>
<li>Introduce virtual SPMD device to avoid unpartitioned data transfer (<a href="https://github.com/pytorch/xla/pull/4091">#4091</a>)</li>
</ul>
<h3 id="ongoing_development"><strong>Ongoing development</strong></h3>
<ul>
<li>Ongoing Dynamic Shape implementation
<ul>
<li>Implement missing <code>XLASymNodeImpl::Sub</code> (<a href="https://github.com/pytorch/xla/pull/4551">#4551</a>)</li>
<li>Make <code>empty_symint</code> support dynamism. (<a href="https://github.com/pytorch/xla/pull/4550">#4550</a>)</li>
<li>Add dynamic shape support to <code>SigmoidBackward</code> (<a href="https://github.com/pytorch/xla/pull/4322">#4322</a>)</li>
<li>Add a forward pass NN model with dynamism test (<a href="https://github.com/pytorch/xla/pull/4256">#4256</a>) </li>
</ul></li>
<li>Ongoing SPMD multi host execution (<a href="https://github.com/pytorch/xla/pull/4573">#4573</a>)</li>
</ul>
<h3 id="bug_fixes_improvements"><strong>Bug fixes &amp; improvements</strong></h3>
<ul>
<li>Support int as index type (<a href="https://github.com/pytorch/xla/pull/4602">#4602</a>)</li>
<li>Only alias inputs and outputs when <code>force_ltc_sync == True</code> (<a href="https://github.com/pytorch/xla/pull/4575">#4575</a>)</li>
<li>Fix race condition between execution and buffer tear down on GPU when using <code>bfc_allocator</code> (<code><a href="https://github.com/pytorch/xla/pull/4542">#4542</a></code>)</li>
<li>Release the GIL during TransferFromServer (<a href="https://github.com/pytorch/xla/pull/4504">#4504</a>)</li>
<li>Fix type annotations in FSDP (<a href="https://github.com/pytorch/xla/pull/4371">#4371</a>)</li>
</ul>
]]>
    </content>
  </entry>

  <entry>
    <title>December 19, 2022</title>
    <id>tag:google.com,2016:tpu-release-notes#December_19_2022</id>
    <updated>2022-12-19T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#December_19_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow patches: <a href="https://pypi.org/project/tensorflow/2.8.4/">2.8.4</a>, <a href="https://pypi.org/project/tensorflow/2.9.3/">2.9.3</a>, and <a href="https://pypi.org/project/tensorflow/2.10.1/">2.10.1</a>. See the TensorFlow release notes for details:</p>
<ul>
<li><a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.8.4">2.8.4 release notes</a></li>
<li><a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.9.3">2.9.3 release notes</a></li>
<li><a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.10.1">2.10.1 release notes</a></li>
</ul>
]]>
    </content>
  </entry>

  <entry>
    <title>December 01, 2022</title>
    <id>tag:google.com,2016:tpu-release-notes#December_01_2022</id>
    <updated>2022-12-01T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#December_01_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.11.0. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.11.0">TensorFlow 2.11 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>September 13, 2022</title>
    <id>tag:google.com,2016:tpu-release-notes#September_13_2022</id>
    <updated>2022-09-13T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#September_13_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.10.0. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.10.0">TensorFlow 2.10 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>June 24, 2022</title>
    <id>tag:google.com,2016:tpu-release-notes#June_24_2022</id>
    <updated>2022-06-24T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#June_24_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports <a href="https://pypi.org/project/tensorflow/2.6.5/">TensorFlow 2.6.5</a> and <a href="https://pypi.org/project/tensorflow/2.7.3/">TensorFlow 2.7.3</a>.</p>
<p>For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.6.5">TensorFlow 2.6.5</a> and <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.7.3">TensorFlow 2.7.3</a> release notes.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>May 27, 2022</title>
    <id>tag:google.com,2016:tpu-release-notes#May_27_2022</id>
    <updated>2022-05-27T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#May_27_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.8.2 and 2.9.1. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.8.2">TensorFlow 2.8.2 release notes</a> and <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.9.1">TensorFlow 2.9.1 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>March 18, 2022</title>
    <id>tag:google.com,2016:tpu-release-notes#March_18_2022</id>
    <updated>2022-03-18T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#March_18_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.6.3. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.6.3">TensorFlow 2.6.3 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>March 09, 2022</title>
    <id>tag:google.com,2016:tpu-release-notes#March_09_2022</id>
    <updated>2022-03-09T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#March_09_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.5.3 and 2.7.1. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.5.3">TensorFlow 2.5.3 release notes</a> and <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.7.1">TensorFlow 2.7.1 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>February 03, 2022</title>
    <id>tag:google.com,2016:tpu-release-notes#February_03_2022</id>
    <updated>2022-02-03T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#February_03_2022"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.8.0. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.8.0">TensorFlow 2.8.0 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>December 02, 2021</title>
    <id>tag:google.com,2016:tpu-release-notes#December_02_2021</id>
    <updated>2021-12-02T00:00:00-08:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#December_02_2021"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.4.4, 2.5.2, and 2.6.2. See the TensorFlow release notes for details:</p>
<ul>
<li><a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.4.4">TF 2.4.4</a></li>
<li><a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.5.2">TF 2.5.2</a></li>
<li><a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.6.2">TF 2.6.2</a></li>
</ul>
]]>
    </content>
  </entry>

  <entry>
    <title>November 05, 2021</title>
    <id>tag:google.com,2016:tpu-release-notes#November_05_2021</id>
    <updated>2021-11-05T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#November_05_2021"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.7.0. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.7.0">TensorFlow 2.7.0 release notes</a>.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>August 24, 2021</title>
    <id>tag:google.com,2016:tpu-release-notes#August_24_2021</id>
    <updated>2021-08-24T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#August_24_2021"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.3.4, 2.4.3, and 2.5.1. See the TensorFlow release notes for details:</p>
<ul>
<li><p><a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.3.4">TensorFlow 2.3.4 release notes</a></p></li>
<li><p><a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.4.3">TensorFlow 2.4.3 release notes</a></p></li>
<li><p><a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.5.1">TensorFlow 2.5.1 release notes</a></p></li>
</ul>
]]>
    </content>
  </entry>

  <entry>
    <title>August 12, 2021</title>
    <id>tag:google.com,2016:tpu-release-notes#August_12_2021</id>
    <updated>2021-08-12T00:00:00-07:00</updated>
    <link rel="alternate" href="https://docs.cloud.google.com/tpu/docs/release-notes#August_12_2021"/>
    <content type="html"><![CDATA[<h3>Change</h3>
<p>Cloud TPU now supports TensorFlow 2.6.0. For more information, see the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.6.0">TensorFlow 2.6.0 release notes</a>.</p>
<p>In TF 2.6.0, TensorFlow introduced a new version of the TF/XLA bridge that uses the MLIR compiler infrastructure. The MLIR bridge is enabled by default. To explicitly disable it at runtime, add the following code snippet to your model's code:</p>
<pre><code>tf.config.experimental.disable_mlir_bridge()</code></pre>
]]>
    </content>
  </entry>

</feed>
