May 8, 2024 | by: admin | Categories: Uncategorized

Google Joins Amazon, Microsoft With New Arm-based Data Center CPU

Google has joined Amazon and Microsoft in announcing custom silicon for its data centers.

Google’s Axion line of processors represents its first Arm-based CPUs designed for the data center. According to Google, its Axion processors combine the company’s silicon expertise with Arm’s highest-performing CPU cores “to deliver instances with up to 30% better performance than the fastest general-purpose Arm-based instances available in the cloud today and up to 50% better performance and up to 60% better energy-efficiency than comparable current-generation x86-based instances”.

By designing its own CPU, Google can tailor its hardware for greater performance and efficiency, industry experts say.

The use of Arm processors in the data center is unwelcome news for Intel, which has historically dominated the data center market with its x86 processors.

This announcement signals an accelerating transition away from x86 architectures and toward Arm in the server market, which is “the ultimate prize” for chip companies.

Axion is another example of major players such as Apple and Tesla investing in their own chip designs.

With this announcement, Google is putting its substantial financial and technical weight behind a broader market trend: semiconductors such as CPUs and accelerators being designed for the specific workloads they will run.

Google also announced the general availability of Cloud TPU v5p, the company’s most powerful and scalable Tensor Processing Unit to date.

The accelerator is built to train some of the largest and most demanding generative AI models, the company explained in a blog post. A single TPU v5p pod contains 8,960 chips running in unison, more than twice as many as a TPU v4 pod, and delivers over 2x higher FLOPS and 3x more high-bandwidth memory on a per-chip basis.
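As a rough illustration of what those figures imply at pod scale, here is a minimal Python sketch. Only the chip counts and the “over 2x FLOPS / 3x HBM per chip” ratios come from the announcement; the per-chip baseline values are normalized placeholders, not Google’s published specifications.

```python
# Rough illustration of TPU pod scaling.
# Only the chip counts and the ">2x FLOPS, 3x HBM per chip" ratios come from
# the announcement; the v4 per-chip baselines are normalized placeholders.

V4_CHIPS_PER_POD = 4096    # TPU v4 pod size (publicly documented)
V5P_CHIPS_PER_POD = 8960   # TPU v5p pod size from the announcement

v4_flops_per_chip = 1.0    # normalized baseline (arbitrary units)
v4_hbm_per_chip = 1.0      # normalized baseline (arbitrary units)

# Announced per-chip improvements relative to v4: >2x FLOPS, 3x HBM.
v5p_flops_per_chip = 2.0 * v4_flops_per_chip
v5p_hbm_per_chip = 3.0 * v4_hbm_per_chip

# Pod-level gains compound the larger pod with the faster chips.
v4_pod_flops = V4_CHIPS_PER_POD * v4_flops_per_chip
v5p_pod_flops = V5P_CHIPS_PER_POD * v5p_flops_per_chip

print(f"Chips per pod: {V5P_CHIPS_PER_POD / V4_CHIPS_PER_POD:.2f}x")  # ~2.19x
print(f"Pod-level FLOPS gain: {v5p_pod_flops / v4_pod_flops:.1f}x")   # ~4.4x
```

Because the pod has more than twice the chips and each chip delivers more than twice the FLOPS, the gains multiply at the pod level rather than simply adding up.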

Google’s TPU adds another lower-cost inference option to its cloud.

Inference costs are what users pay to run their machine-learning models in the cloud, and they can account for as much as 90% of the total cost of running ML infrastructure.
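A back-of-the-envelope sketch of what that split looks like; the dollar figure is a hypothetical example, and only the 90% inference share comes from the article.

```python
# Hypothetical budget illustrating the cost split; only the 90% inference
# share comes from the article, the total spend is an arbitrary example.
total_ml_infra_cost = 1_000_000   # example annual spend in dollars
inference_share = 0.90            # inference can be up to 90% of total

inference_cost = total_ml_infra_cost * inference_share
training_and_other = total_ml_infra_cost - inference_cost

print(f"Inference:        ${inference_cost:,.0f}")      # $900,000
print(f"Training & other: ${training_and_other:,.0f}")  # $100,000
```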

In addition to improved performance, Google noted that its new Axion chips will contribute to its sustainability goals. Customers want to operate more efficiently and meet their own sustainability targets, and with Axion processors they can optimize for even greater energy efficiency.