
Nvidia to Unveil Next-Gen B100 AI Accelerator This Week

The H100 is about to be yesterday's news.
By Ryan Whitwam
Nvidia chip Blackwell
Credit: Nvidia

Nvidia CEO Jensen Huang promises a "transformative moment in AI" this coming week when he unveils a new GPU optimized for machine learning. The B100 is based on the same Blackwell architecture as the company's forthcoming RTX 50 Series gaming cards. However, the B100 will use its massively parallel computing capabilities to run the next generation of artificial intelligence models.

There's little information on the B100, but Nvidia did offer up a vague graph last year comparing the B100 to the older H100 and A100, as well as the new H200. The unlabeled bar (see below) doesn't tell us how much faster Nvidia expects the B100 to chew through data, but it vanishes into the distance to suggest a significant improvement.

From what we know of Blackwell, the B100 will be built on TSMC's 3nm process node, the most advanced currently available. This will allow Nvidia to boost the number of transistors for better performance. Rumors also suggest the B100 will utilize a chiplet-based multi-chip module (MCM) instead of the monolithic design of past GPUs. Splitting the chip's functions into discrete elements makes for a more efficient and flexible design, in part because smaller dies are cheaper to manufacture and yield better. AMD started using chiplets with its Ryzen CPUs several years ago and recently released the first chiplet GPUs with the Radeon RX 7000 series. Nvidia has yet to release any chiplet designs.

The company expects 16,000 people to attend its GPU Technology Conference (GTC), roughly matching its attendance in 2019, before the pandemic shut down in-person events for several years. The excitement among developers is understandable—Nvidia has been cranking out AI-optimized chips for several years, and sales are booming. Nvidia's market cap crept over $2 trillion last month, making it the third most valuable company behind Apple and Microsoft. The chipmaker is projected to increase revenue by 81% this year, according to Reuters.

Nvidia B100 performance graph
That's an awfully vague bar, Nvidia. Credit: Nvidia

In 2022, Nvidia used GTC to announce the H100 "Hopper" GPU. This AI accelerator, built on the Hopper architecture that succeeded Ampere, has been selling for $30,000 to $40,000 for a single card, and AI projects often need multiple servers stuffed full of accelerators. Although it is possible to use gaming GPUs for lighter AI work, cards designed for AI come with much more memory to accommodate large models. The B100 is expected to have even more VRAM than the last-gen chips.

Assuming technology firms don't suddenly sour on AI, the Nvidia B100 is all but guaranteed to be a hit. The company will probably sell every card it can produce, and the reseller markups could make the H100 look like a bargain. Nvidia might even find itself closing in on Apple and Microsoft in market cap before long. GTC runs from March 17 to 21, and Huang's keynote will take place on March 18 at 1 p.m. PDT.
