We are living in an era where technology is the keystone driving the entire civilization. Yet despite all the brilliant inventions and technological advancements, the world is more inclined toward speed and agility today than ever before. We have moved from traditional wired dial-up internet connections to the fourth generation of wireless networks. The widespread deployment of optical fiber has made it possible to connect to the internet and access data at lightning-fast speeds. Similarly, when it comes to processors and GPUs, we have moved from the 8-bit 8080 microprocessor, which consisted of merely 6,000 transistors, to state-of-the-art octa-core processors with clock speeds of up to 1.7 GHz. All of this has definitely raised the standard for the technologies to come.
Google, one of the world’s leading technology companies, has raised the bar quite high with the launch of its high-speed custom machine-learning chips called Tensor Processing Units (TPUs). These chips were first introduced at the company’s I/O developer conference back in May 2016, but for obvious reasons Google revealed little about them at the time. Recently, however, the company released a paper containing an in-depth analysis of the TPUs; you can read the paper for the full details. In this blog, we bring you the key highlights of the chips as revealed by Google.
What Are TPUs?
Tensor Processing Units (TPUs) are custom machine-learning chips that Google designed to execute its regular machine-learning workloads. Instead of relying on CPUs, GPUs, or a combination of the two, Google is now deploying these TPUs, which it says are 15 to 30 times faster than contemporary CPUs and GPUs. When it comes to power efficiency, the chips also deliver 30 to 80 times more tera-operations per second per watt (TeraOps/Watt).
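To make the workload concrete, here is a minimal sketch in plain NumPy (the layer sizes and network depth are hypothetical, not taken from Google's paper) of the dense matrix multiplications that dominate neural-network inference. It is exactly this kind of repetitive tensor arithmetic that the TPU's hardware is built to run faster and more efficiently than a general-purpose CPU or GPU.

```python
# Illustrative sketch only: the matrix multiplies below are the kind of
# tensor operation that dominates DNN inference, the workload the TPU
# accelerates. Shapes and layer count here are hypothetical examples.
import numpy as np

def dense_layer(x, weights, bias):
    """One fully connected layer: a matrix multiply, a bias add, and ReLU."""
    return np.maximum(x @ weights + bias, 0.0)

rng = np.random.default_rng(0)

# A toy 3-layer network applied to a batch of 8 input vectors.
x = rng.standard_normal((8, 256))
layers = [(rng.standard_normal((256, 256)) * 0.05, np.zeros(256))
          for _ in range(3)]

for w, b in layers:
    x = dense_layer(x, w, b)  # each step is dominated by the matmul

print(x.shape)  # (8, 256)
```

On a CPU or GPU these multiplies are executed by general-purpose arithmetic units; the TPU instead dedicates its silicon to this one pattern, which is where the reported speed and TeraOps/Watt advantages come from.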
History
Google reveals that it initially had no idea its spare hardware resources could be turned into something as useful and powerful as the TPUs. Back in 2006, the company started looking for ways to make effective use of its excess hardware, including GPUs, FPGA chips, and ASICs, and numerous experiments were conducted in Google's datacenters over the next several years. The major transition came in 2013, when deep neural networks (DNNs) were growing more and more popular and were expected to get even bigger in the coming years. Google projected that if that happened, the company's available hardware resources would be insufficient to meet the increased computational requirements. That is when the company started a high-priority project to design a series of custom ASIC chips capable of handling more tasks with less power consumption and at blistering speed. These custom ASIC chips are what Google calls 'Tensor Processing Units.'
TPU chips are meant to be used by Google for handling its internal operations, improving its cloud platform for users by means of advanced machine-learning algorithms. Although Google is not likely to offer the TPUs outside its own cloud platform for now, it has definitely shown the world a path and paved the way for new inventions.
Read more at: https://goo.gl/ca2biJ