Nvidia announces the GB200 Blackwell AI chip, launching later this year

Norman Ray

International Courant

Nvidia CEO Jensen Huang delivers a keynote speech during the Nvidia GTC Artificial Intelligence Conference at SAP Center on March 18, 2024 in San Jose, California.

Justin Sullivan | Getty Images

Nvidia on Monday announced a new generation of artificial intelligence chips and software for running artificial intelligence models. The announcement, made during Nvidia's developer conference in San Jose, comes as the chipmaker looks to solidify its position as the go-to supplier for AI companies.


Nvidia's share price has increased fivefold and total revenue has more than tripled since OpenAI's ChatGPT kicked off the AI boom in late 2022. Nvidia's high-end server GPUs are essential for training and deploying large AI models. Companies like Microsoft and Meta have spent billions of dollars buying the chips.

The new generation of AI graphics processors is called Blackwell. The first Blackwell chip is called the GB200 and will launch later this year. Nvidia is enticing its customers with more powerful chips to spur new orders, even as companies and software makers are still scrambling to get their hands on the current generation of "Hopper" H100s and similar chips.

"Hopper is great, but we need bigger GPUs," Nvidia CEO Jensen Huang said Monday at the company's developer conference in California.

The company also introduced revenue-generating software called NIM that will make it easier to deploy AI, giving customers another reason to stick with Nvidia chips over a growing field of competitors.

Nvidia executives say the company is becoming less of a chip supplier for hire and more of a platform provider, like Microsoft or Apple, on which other companies can build software.


"Blackwell is not a chip, it's the name of a platform," Huang said.

"The marketable commercial product was the GPU, and the software was all there to help people use the GPU in different ways," Manuvir Das, Nvidia's vice president of enterprise computing, said in an interview. "Of course we still do that. But what has really changed is that we now really have a commercial software company."

Das said Nvidia's new software will make it easier to run programs on all Nvidia GPUs, even older ones that may be better suited to deploying AI than building it.


"If you're a developer, you have an interesting model you want people to adopt. If you put it in a NIM, we make sure it runs on all our GPUs, so you reach a lot of people," Das said.

Meet Blackwell, Hopper’s successor

Nvidia’s GB200 Grace Blackwell Superchip, with two B200 graphics processors and one Arm-based central processor.

Every two years, Nvidia updates its GPU architecture, bringing a big jump in performance. Many of the AI models released over the past year were trained on the company's Hopper architecture, used by chips like the H100, which was announced in 2022.

Nvidia says Blackwell-based processors like the GB200 offer a big performance upgrade for AI companies, with 20 petaflops of AI performance versus 4 petaflops for the H100. The extra processing power will let AI companies train bigger and more complicated models, Nvidia said.

The chip includes what Nvidia calls a "transformer engine" built specifically to run transformer-based AI, one of the core technologies underlying ChatGPT.

The Blackwell GPU is large and combines two separately manufactured dies into one chip, manufactured by TSMC. It will also be available as an entire server called the GB200 NVLink 2, which combines 72 Blackwell GPUs and other Nvidia parts designed to train AI models.

Nvidia CEO Jensen Huang compares the size of the new "Blackwell" chip with that of the current "Hopper" H100 chip at the company's developer conference in San Jose, California.


Amazon, Google, Microsoft and Oracle will sell access to the GB200 through cloud services. The GB200 pairs two B200 Blackwell GPUs with one Arm-based Grace CPU. Nvidia said Amazon Web Services would build a server cluster with 20,000 GB200 chips.

Nvidia said the system can deploy a model with 27 trillion parameters. That is much larger than even the biggest models, such as GPT-4, which reportedly has 1.7 trillion parameters. Many artificial intelligence researchers believe that larger models with more parameters and data could unlock new capabilities.

Nvidia has not provided pricing for the new GB200 or for the systems it will be used in. Nvidia's Hopper-based H100 costs between $25,000 and $40,000 per chip, while whole systems cost as much as $200,000, according to analyst estimates.

Nvidia will also sell B200 graphics processors as part of a complete system that takes up an entire server rack.


Nvidia also announced that it is adding a new product called NIM to its Nvidia Enterprise software subscription.

NIM makes it easier to use older Nvidia GPUs for inference, the process of running AI software, and will let companies keep using the hundreds of millions of Nvidia GPUs they already own. Inference requires less computing power than the initial training of a new AI model. NIM enables companies that want to run their own AI models, rather than buying access to AI results as a service from companies like OpenAI.

The strategy is to get customers who buy Nvidia-based servers to sign up for Nvidia Enterprise, which costs $4,500 per GPU per year to license.

Nvidia will work with AI companies such as Microsoft and Hugging Face to ensure their AI models are tuned to run on all compatible Nvidia chips. Then, using a NIM, developers can efficiently run the model on their own servers or on cloud-based Nvidia servers without a lengthy configuration process.

"In my code, where I call OpenAI, I will replace one line of code to point it to this NIM that I got from Nvidia," Das said.
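A rough sketch of the one-line swap Das describes: NIM microservices expose an OpenAI-compatible HTTP API, so in principle only the endpoint URL (and model name) an application points at needs to change. The local URL and model name below are assumed examples for illustration, not documented values.

```python
import json
from urllib import request

# Endpoints: swapping providers changes only the URL the request targets.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"
NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local NIM endpoint

def build_chat_request(prompt: str, url: str = NIM_URL) -> request.Request:
    """Build an OpenAI-style chat completion request aimed at the given endpoint."""
    body = json.dumps({
        "model": "meta/llama3-8b-instruct",  # hypothetical example model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

# The call site is otherwise untouched; only the target URL differs.
req = build_chat_request("Hello", url=NIM_URL)
```

The request body follows the OpenAI chat-completions shape, which is why the rest of an existing integration can stay unchanged when the endpoint is redirected.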

Nvidia says the software will also help AI run on GPU-equipped laptops, rather than on servers in the cloud.


