Microsoft looks to free itself from GPU shackles by designing custom AI chips

Image Credits: Fink Avenue / Getty Images

Most companies developing AI models, particularly generative AI models like ChatGPT, GPT-4 Turbo and Stable Diffusion, rely heavily on GPUs. GPUs’ ability to perform many computations in parallel makes them well-suited to training — and running — today’s most capable AI.

But there simply aren’t enough GPUs to go around.

Nvidia’s best-performing AI cards are reportedly sold out until 2024. The CEO of chipmaker TSMC was less optimistic recently, suggesting that the shortage of AI GPUs from Nvidia — as well as chips from Nvidia’s rivals — could extend into 2025.

So Microsoft’s going its own way.

Today at its 2023 Ignite conference, Microsoft unveiled two custom, in-house chips destined for its data centers: the Azure Maia 100 AI Accelerator and the Azure Cobalt 100 CPU. Maia 100 can be used to train and run AI models, while Cobalt 100 is designed to run general-purpose workloads.

Image Credits: Microsoft

“Microsoft is building the infrastructure to support AI innovation, and we are reimagining every aspect of our data centers to meet the needs of our customers,” Scott Guthrie, Microsoft cloud and AI group EVP, was quoted as saying in a press release provided to TechCrunch earlier this week. “At the scale we operate, it’s important for us to optimize and integrate every layer of the infrastructure stack to maximize performance, diversify our supply chain and give customers infrastructure choice.”

Both Maia 100 and Cobalt 100 will start to roll out early next year to Azure data centers, Microsoft says — initially powering Microsoft AI services like Copilot, the company’s family of generative AI products, and Azure OpenAI Service, its fully managed offering for OpenAI models. It may be early days, but Microsoft insists the chips aren’t one-offs: second-generation Maia and Cobalt hardware is already in the works.
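For context, “fully managed” here means developers call Azure-hosted endpoints rather than OpenAI’s own API, addressing a named deployment of a model inside their Azure resource. A minimal sketch of how such a request is shaped — the resource and deployment names below are hypothetical placeholders, and a real call would also attach an `api-key` header:

```python
import json


def build_azure_openai_request(resource: str, deployment: str,
                               api_version: str, messages: list) -> tuple:
    """Build the URL and JSON body for an Azure OpenAI chat completion call.

    Unlike OpenAI's own API, Azure routes the request to a named *deployment*
    of a model inside your resource, not to a bare model name.
    """
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    body = json.dumps({"messages": messages})
    return url, body


# Example with placeholder names (send with an `api-key` header attached):
url, body = build_azure_openai_request(
    "my-resource", "gpt-4-turbo-deployment", "2023-12-01-preview",
    [{"role": "user", "content": "Hello"}],
)
```

The deployment-based routing is what lets Azure decide, behind the scenes, which hardware actually serves the request — which is exactly where chips like Maia 100 would slot in.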

Built from the ground up

That Microsoft created custom AI chips doesn’t come as a surprise, exactly. The wheels were set in motion some time ago — and publicized.

In April, The Information reported that Microsoft had been working on AI chips in secret since 2019 as part of a project code-named Athena. And further back, in 2020, Bloomberg revealed that Microsoft had designed a range of chips based on the Arm architecture for data centers and other devices, including consumer hardware (think the Surface Pro).

But the announcement at Ignite gives the most thorough look yet at Microsoft’s semiconductor efforts.

First up is Maia 100.

Microsoft says that Maia 100 — a 5-nanometer chip containing 105 billion transistors — was engineered “specifically for the Azure hardware stack” and to “achieve the absolute maximum utilization of the hardware.” The company promises that Maia 100 will “power some of the largest internal AI [and generative AI] workloads running on Microsoft Azure,” inclusive of workloads for Bing, Microsoft 365 and Azure OpenAI Service (but not public cloud customers — yet).

Maia 100
Image Credits: Microsoft

That’s a lot of jargon, though. What’s it all mean? Well, to be quite honest, it’s not totally obvious to this reporter — at least not from the details Microsoft’s provided in its press materials. In fact, it’s not even clear what sort of chip Maia 100 is; Microsoft’s chosen to keep the architecture under wraps, at least for the time being.

In another disappointing development, Microsoft didn’t submit Maia 100 to public benchmark suites like MLCommons’ MLPerf, so there’s no comparing the chip’s performance to that of other AI training chips out there, such as Google’s TPU, Amazon’s Trainium and Meta’s MTIA. Now that the cat’s out of the bag, here’s hoping that’ll change in short order.

One interesting detail that Microsoft was willing to disclose is that its close AI partner and investment target, OpenAI, provided feedback on Maia 100’s design.

It’s an evolution of the two companies’ compute infrastructure tie-ups.

In 2020, OpenAI worked with Microsoft to co-design an Azure-hosted “AI supercomputer” — a cluster containing over 285,000 processor cores and 10,000 graphics cards. Subsequently, OpenAI and Microsoft built multiple supercomputing systems powered by Azure — which OpenAI exclusively uses for its research, API and products — to train OpenAI’s models.

“Since first partnering with Microsoft, we’ve collaborated to co-design Azure’s AI infrastructure at every layer for our models and unprecedented training needs,” OpenAI CEO Sam Altman said in a canned statement. “We were excited when Microsoft first shared their designs for the Maia chip, and we’ve worked together to refine and test it with our models. Azure’s end-to-end AI architecture, now optimized down to the silicon with Maia, paves the way for training more capable models and making those models cheaper for our customers.”

I asked Microsoft for clarification, and a spokesperson had this to say: “As OpenAI’s exclusive cloud provider, we work closely together to ensure our infrastructure meets their requirements today and in the future. They have provided valuable testing and feedback on Maia, and we will continue to consult their roadmap in the development of our Microsoft first-party AI silicon generations.”

We also know that Maia 100’s physical package is larger than a typical GPU’s.

Microsoft says it had to build the data center server racks that house Maia 100 chips from scratch, with the goal of accommodating both the chips and the necessary power and networking cables. Maia 100 also required a unique liquid-based cooling solution, since the chips consume a higher-than-average amount of power and Microsoft’s data centers weren’t designed for large liquid chillers.

Image Credits: Microsoft

“Cold liquid flows from [a ‘sidekick’] to cold plates that are attached to the surface of Maia 100 chips,” explains a Microsoft-authored post. “Each plate has channels through which liquid is circulated to absorb and transport heat. That flows to the sidekick, which removes heat from the liquid and sends it back to the rack to absorb more heat, and so on.”

As with Maia 100, Microsoft kept most of Cobalt 100’s technical details vague in its Ignite unveiling, save that Cobalt 100’s an energy-efficient, 128-core chip built on an Arm Neoverse CSS architecture and “optimized to deliver greater efficiency and performance in cloud native offerings.”

Cobalt 100
Image Credits: Microsoft

Arm-based data center chips are something of a trend — a trend that Microsoft’s now perpetuating. Amazon’s latest data center chip, Graviton3E (which complements Inferentia, the company’s dedicated AI inference chip), is built on an Arm architecture. Google is reportedly preparing custom Arm server chips of its own, meanwhile.

“The architecture and implementation is designed with power efficiency in mind,” Wes McCullough, CVP of hardware product development, said of Cobalt in a statement. “We’re making the most efficient use of the transistors on the silicon. Multiply those efficiency gains in servers across all our datacenters, it adds up to a pretty big number.”

A Microsoft spokesperson said that Cobalt 100 will power new virtual machines for customers in the coming year.

But why?

So Microsoft’s made AI chips. But why? What’s the motivation?

Well, there’s the company line — “optimizing every layer of [the Azure] technology stack,” one of the Microsoft blog posts published today reads. But the subtext is that Microsoft’s vying to remain competitive — and cost-conscious — in the relentless race for AI dominance.

The scarcity and indispensability of GPUs have left companies in the AI space large and small, including Microsoft, beholden to chip vendors. In May, Nvidia reached a market value of more than $1 trillion on the strength of AI chip and related revenue ($13.5 billion in its most recent fiscal quarter), becoming only the sixth tech company in history to do so. Even with a fraction of Nvidia’s installed base, its chief rival, AMD, expects its data center GPU revenue alone to eclipse $2 billion in 2024.

Microsoft is no doubt dissatisfied with this arrangement. OpenAI certainly is — and it’s OpenAI’s tech that drives many of Microsoft’s flagship AI products, apps and services today.

In a private meeting with developers this summer, Altman admitted that GPU shortages and costs were hindering OpenAI’s progress; the company just this week was forced to pause sign-ups for ChatGPT due to capacity issues. Underlining the point, Altman said in an interview this week with the Financial Times that he “hoped” Microsoft, which has invested over $10 billion in OpenAI over the past four years, would increase its investment to help pay for “huge” imminent model training costs.

Microsoft itself warned shareholders earlier this year of potential Azure AI service disruptions if it can’t get enough chips for its data centers. The company’s been forced to take drastic measures in the interim, like incentivizing Azure customers with unused GPU reservations to give up those reservations in exchange for refunds, and pledging billions of dollars to third-party cloud GPU providers like CoreWeave.

Should OpenAI design its own AI chips as rumored, it could put the two parties at odds. But Microsoft likely sees the potential cost savings arising from in-house hardware — and competitiveness in the cloud market — as worth the risk of preempting its ally.

One of Microsoft’s premier AI products, the code-generating GitHub Copilot, has reportedly been costing the company up to $80 per user per month, partly due to model inferencing costs. If the situation doesn’t turn around, investment firm UBS sees Microsoft struggling to generate AI revenue streams next year.

Of course, hardware is hard, and there’s no guarantee that Microsoft will succeed in launching AI chips where others failed.

Meta’s early custom AI chip efforts were beset with problems, leading the company to scrap some of its experimental hardware. Elsewhere, Google hasn’t been able to keep pace with demand for its TPUs, Wired reports — and ran into design issues with its newest generation of the chip.

Microsoft’s giving it the old college try, though. And it’s oozing with confidence.

“Microsoft innovation is going further down in the stack with this silicon work to ensure the future of our customers’ workloads on Azure, prioritizing performance, power efficiency and cost,” Pat Stemen, a partner program manager on Microsoft’s Azure hardware systems and infrastructure team, said in a blog post today. “We chose this innovation intentionally so that our customers are going to get the best experience they can have with Azure today and in the future … We’re trying to provide the best set of options for [customers], whether it’s for performance or cost or any other dimension they care about.”
