
Google Announces Axion Arm Chip

Google's first Arm chip for servers promises a 30% performance boost compared with Amazon and Microsoft Arm chips.
By Ryan Whitwam
Axion server chip. Credit: Google

Google has unveiled a new custom Arm processor, but unlike the Tensor family, this one is decidedly not meant for phones. Google Axion is the company's first Arm-based CPU for data centers, built on Arm's newest and most capable server cores, the Neoverse V2. While Axion is designed for general-purpose workloads, Google says it will still boost machine learning in the cloud by keeping the CPU from becoming a bottleneck.

You may not think of Google as a big player in custom silicon, but the Axion is far from its first chip. Google's first foray into chip design was in 2015 when it released the first Tensor Processing Units. In 2018, it created a video encoding chip, and in 2021, it released the first Tensor mobile system-on-a-chip (SoC) for Pixel devices. The new Arm chip for servers is a different animal, though.

Axion's Neoverse CPU cores are much more powerful than the mobile Arm cores with which we are most familiar. Google claims Axion also beats comparable x86 parts, with 50% better performance and 60% better energy efficiency. The company says the chip fares well against Amazon's and Microsoft's Arm chips, too, offering 30% more speed. However, Google has opted not to provide any technical details or benchmarks to back up those figures.

Google is focused heavily on AI now, and it's far from alone. Every technology firm is suddenly looking to build a network of AI accelerators, like Google TPUs or Nvidia Blackwell. However, general-purpose computing can't get lost in the shuffle, says Google. It cites Amdahl’s Law, which holds that the performance improvement from more powerful components is limited by the fraction of time they are actually used. Put differently, the fastest GPUs won't matter if your data center CPUs can't keep up.
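Amdahl's Law can be stated as a simple formula: if a fraction p of a workload runs on a component that is s times faster, the overall speedup is 1 / ((1 - p) + p / s). A minimal sketch (the function name and example numbers are illustrative, not Google's figures) shows why CPU speed still matters:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the workload
    runs on a component that is s times faster."""
    return 1.0 / ((1.0 - p) + p / s)

# Suppose accelerators handle 80% of the work. Even a 10x-faster
# accelerator yields only about a 3.6x overall speedup...
print(round(amdahl_speedup(0.8, 10), 2))    # 3.57

# ...and no accelerator, however fast, can push past the ceiling
# set by the remaining 20% running on the CPU: 1 / (1 - 0.8) = 5x.
print(round(amdahl_speedup(0.8, 1000), 2))  # 4.98
```

In other words, the serial (CPU-bound) fraction of the work caps the benefit of faster accelerators, which is Google's argument for investing in general-purpose server CPUs alongside TPUs.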

Google TPU-based AI servers. Credit: Google

That's not to say AI accelerators don't need to get faster. Google also had an update on that front. After unveiling the Cloud TPU v5p in December, Google has announced that the chip is now available to developers. The new TPU is almost three times faster than the TPU v4, making it well suited to training the latest large language models, such as Google's Gemini 1.5 with its context window of up to 10 million tokens.

Google will begin transitioning services like Google Earth Engine and BigQuery to Axion later this year. When the chips roll out, developers should have the option to build on Axion for Google Compute Engine, Dataflow, Cloud Batch, and more. However, Google Cloud hasn't announced pricing yet.
