Last week, Google and Meta made landmark announcements about their in-house silicon efforts, underscoring the cloud computing and AI market's growing shift away from off-the-shelf chips toward custom designs. Google unveiled Axion, its first custom Arm-based CPU for the data center, while Meta revealed details of its second-generation Meta Training and Inference Accelerator (MTIA).
Why Custom Chips Matter
The increasing adoption of custom silicon by tech giants like Google, Meta, and Amazon Web Services (AWS) represents a strategic response to the unique demands of AI and cloud workloads.
By designing their own chips, these three companies gain the ability to optimize performance for specific workloads while minimizing power consumption. Control over the entire hardware stack also allows the companies to implement new features and optimizations at their own pace rather than being constrained by the traditional release cycles of merchant silicon vendors such as Nvidia, Intel, and AMD.
Google’s Arm-based Thrust with Axion
Built on Arm’s latest Neoverse V2 technology, the Google Axion processor is designed for general-purpose workloads including web and app servers, containerized microservices, open-source databases, data analytics engines, media processing, and CPU-based AI training and inferencing.
Google claims that Axion-based instances deliver up to 30% better performance than the fastest general-purpose Arm-based instances currently available in the cloud, as well as up to 50% better performance and up to 60% better energy efficiency than comparable current-generation x86-based instances.
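Headline figures like these are easier to interpret with a quick back-of-the-envelope calculation. The sketch below assumes "energy efficiency" means performance per watt, which the announcement does not spell out; under that assumption, the two claims taken together imply an Axion instance doing 50% more work while drawing slightly less power than the x86 baseline.

```python
# Back-of-the-envelope sketch of what the headline claims imply,
# assuming "energy efficiency" means performance per watt (an
# assumption; the announcement does not define the metric).
x86_perf = 1.0   # normalized performance of a comparable x86 instance
x86_power = 1.0  # normalized power draw of that instance

axion_perf = 1.5 * x86_perf                         # "up to 50% better performance"
axion_perf_per_watt = 1.6 * (x86_perf / x86_power)  # "up to 60% better energy efficiency"
axion_power = axion_perf / axion_perf_per_watt      # implied relative power draw

print(f"Implied Axion power draw: {axion_power:.2f}x the x86 baseline")  # ~0.94x
```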
The company plans to make the Axion processor available to Google Cloud services such as Google Compute Engine, Google Kubernetes Engine, Dataproc, Dataflow, and Cloud Batch. Google also says its data centers are 1.5X more efficient than the industry average and, compared with five years ago, deliver 3X more computing power with the same amount of electrical power.
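For readers who want to see which machine series a given region actually offers once Axion-based instances roll out, a minimal sketch using the google-cloud-compute Python client is shown below. The project ID and zone are placeholders, and the announcement does not specify the Axion machine-series name, so the listing simply enumerates whatever the zone provides.

```python
# Minimal sketch: list the machine types offered in one Compute Engine
# zone with the google-cloud-compute client library
# (pip install google-cloud-compute; credentials must be configured).
# "my-project" and the zone below are placeholders.
from google.cloud import compute_v1

def list_machine_types(project_id: str, zone: str) -> None:
    client = compute_v1.MachineTypesClient()
    for mt in client.list(project=project_id, zone=zone):
        print(f"{mt.name}: {mt.guest_cpus} vCPUs, {mt.memory_mb} MB RAM")

list_machine_types("my-project", "us-central1-a")
```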
Meta’s Evolving Custom Chip Strategy
The Meta Training and Inference Accelerator (MTIA) is part of the company's growing custom-chip development program, which aims to power its AI services and reduce its reliance on external suppliers. Meta claims that MTIA v2 delivers significant improvements in performance, on-chip memory, and bandwidth compared with its predecessor, sharpening the company's ability to optimize the recommendation and ranking models that power its core advertising business.
The Future: Competition and a Diversifying Landscape
With continuing shortages of high-end AI server GPUs, growing concerns about data center energy consumption, and the rising cost of delivering cloud-based AI services, it is little surprise that Google, Meta, and AWS are accelerating their custom chip development. Beyond reducing their dependence on Nvidia, Intel, and AMD, designing and deploying home-grown chips will let these companies cut costs, curb energy consumption, and further boost the efficiency of their data center operations.