
Nvidia to Go After Custom AI Chip Market

The company will seek to neuter its biggest threat while it's still gathering steam.
By Josh Norem
Nvidia H100 (Credit: Nvidia)

Thanks to the recent surge in interest in artificial intelligence, Nvidia has seen its fortunes explode over the past year. It suddenly found itself holding a golden ticket with its industry-leading AI accelerators. Although both Intel and AMD have touted their own products as being competitive with Nvidia's, it's likely that the biggest threat it faces in this market is companies pursuing their own custom silicon solutions. Now, according to a new report, Nvidia is looking to nip that uprising in the bud by forming a new division to help companies design custom Nvidia chips.

Reuters reports that Nvidia has formed a new business unit to help cloud computing and other hyperscale companies develop custom silicon. The report cites nine sources, who said Nvidia is already talking to Meta, Google, Amazon, and OpenAI about building custom chips for AI and other computing tasks. The piece doesn't say whether Nvidia has signed any contracts with these companies yet, but it notes the company has already lost custom chip business to competitors such as Marvell and Broadcom.

It's an intelligent move by Nvidia if the report is accurate, as its H100 and A100 accelerators are too general-purpose to match every company's demands perfectly. In the same way that GPUs are better than CPUs for some tasks, bespoke silicon can beat an off-the-shelf H100 because it's tailored to a single company's requirements, which usually makes it more efficient overall. Custom solutions have existed for years with chips like Amazon's Graviton and Google's Tensor Processing Units (TPUs), but this movement is beginning to accelerate thanks to the AI craze and the resulting shortage (and soaring price) of Nvidia accelerators.

Just last week, Reuters reported that Meta was planning to deploy a custom chip in its data centers with two goals: to further its AI aspirations and to reduce its reliance on Nvidia's GPUs. In August 2023, Google announced it was deploying its newest custom chip, the TPU v5e, for generative AI and large language models. Finally, in November, Microsoft announced it had built a custom chip to train AI workloads and avoid buying expensive chips from Nvidia, according to The Verge.

Reuters estimates the custom silicon market represents a $30 billion opportunity for Nvidia. The person reportedly in charge of the new unit also scrubbed any mention of it from their LinkedIn profile after Reuters contacted Nvidia for comment, which suggests the reporting is on target.

