Google Cloud wants to make it a lot easier to run massive ML workloads


Google Cloud has announced the general availability of its TPU virtual machines.

Tensor Processing Units (TPUs) are application-specific integrated circuits (ASICs) developed by Google which are used to accelerate machine learning workloads.

Cloud TPU enables you to run your machine learning workloads on the cloud hosting giant’s TPU accelerator hardware using the open source machine learning platform TensorFlow.
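As a rough sketch of what this looks like from TensorFlow (the helper name here is our own, and the fallback behavior is an assumption, not Google's recommended pattern), code running on a TPU VM typically resolves the locally attached TPU and wraps training in a distribution strategy:

```python
# Hedged sketch: pick a TPU-backed distribution strategy when running on a
# Cloud TPU VM, and fall back to the default (CPU/GPU) strategy elsewhere.
import tensorflow as tf

def get_strategy():
    """Return a TPUStrategy on a Cloud TPU VM, else the default strategy."""
    try:
        # On a TPU VM the accelerator is local to the host, hence tpu="local".
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
        tf.config.experimental_connect_to_cluster(resolver)
        tf.tpu.experimental.initialize_tpu_system(resolver)
        return tf.distribute.TPUStrategy(resolver)
    except (ValueError, RuntimeError, tf.errors.NotFoundError,
            tf.errors.InvalidArgumentError):
        # No TPU attached: use the default single-device strategy instead.
        return tf.distribute.get_strategy()

strategy = get_strategy()
print("replicas in sync:", strategy.num_replicas_in_sync)
```

Model building and training would then happen inside `strategy.scope()`, the same way it does for other TensorFlow distribution strategies.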

What can TPU VMs do for users?

Google says its user community has adopted TPU VMs because they provide a better debugging experience and enable certain training setups, such as distributed reinforcement learning, which it says were not feasible with the existing TPU Node (network-accessed) architecture.

Cloud TPUs are optimized for large-scale ranking and recommendation workloads, according to Google, which cited Snap as an early adopter of this capability.

In addition, with the TPU VMs GA release, Google is introducing a new TPU Embedding API, which it says can accelerate ML-based ranking and recommendation workloads.

Google highlighted how many modern businesses rely on ranking and recommendation use cases, such as audio and video recommendations, product recommendations, and ad ranking.

The tech giant said that TPUs can help businesses implement a deep neural network-based approach to these use cases, which it says can be expensive and data-intensive to train.

Google also says its TPU VMs offer several capabilities beyond the existing TPU Node architecture thanks to their local execution setup: the input data pipeline can execute directly on the TPU hosts, saving organizations compute resources.

The TPU VM GA release also supports other major ML frameworks, such as PyTorch and JAX.

Interested in deploying a virtual TPU? You can follow one of Google’s quick starts or tutorials.

