Nvidia Unveils H200: A Powerhouse for AI Model Training

Nvidia plans to offer the H200 in both four-GPU and eight-GPU server configurations through its HGX complete systems.

Nvidia is unveiling its H200 in four and eight-GPU configurations.
Credit: NurPhoto/Getty Images

SANTA CLARA, CA – Nvidia, a front-runner in AI technology, has recently introduced its latest high-end chip, the H200. This graphics processing unit (GPU) is specifically crafted for the intricate task of training and deploying artificial intelligence models, the very engines propelling the generative AI surge.

A Leap Beyond: H200 vs. H100

The H200 marks a significant leap from its predecessor, the H100, renowned for being the chip behind OpenAI's GPT-4 language model. Demand for these chips has skyrocketed, with major players in various sectors vying for a limited supply.

According to estimates from Raymond James, the H100 chips, vital for the training process, cost between $25,000 and $40,000 each. Building a large AI model, a process known as "training," requires thousands of these chips working in concert.
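Those per-chip figures add up quickly at training scale. The sketch below is purely illustrative back-of-the-envelope arithmetic: the 10,000-GPU cluster size is a hypothetical example, not a number from the article, and only the per-chip price range comes from the Raymond James estimate above.

```python
# Illustrative cluster-cost arithmetic (not Nvidia or Raymond James figures,
# beyond the quoted per-chip price range).

PRICE_LOW = 25_000   # low-end H100 price estimate (USD), per Raymond James
PRICE_HIGH = 40_000  # high-end H100 price estimate (USD)

def cluster_cost(num_gpus: int) -> tuple[int, int]:
    """Return the (low, high) total hardware cost in USD for num_gpus chips."""
    return num_gpus * PRICE_LOW, num_gpus * PRICE_HIGH

# A hypothetical 10,000-GPU cluster, a scale often discussed for large models:
low, high = cluster_cost(10_000)
print(f"${low:,} - ${high:,}")  # $250,000,000 - $400,000,000
```

Even at the low end of the estimate, chip spending alone for a cluster of that hypothetical size runs into the hundreds of millions of dollars.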

Excitement over Nvidia's AI GPUs has pushed the company's stock up more than 230% in 2023. Nvidia expects around $16 billion in revenue for its fiscal third quarter, up roughly 170% from a year earlier.

H200's Key Advancements

The H200's key upgrade is 141GB of next-generation "HBM3e" memory, which speeds up "inference." In simpler terms, this means the chip can more rapidly generate text, images, or predictions with a model that has already completed its training phase.
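The training/inference split the article describes can be illustrated with a deliberately tiny sketch. The bigram "model" below is a toy stand-in, not anything resembling a real LLM: the training step fits parameters (here, character-follow counts) from data, and the inference step uses the frozen model to generate new output per request.

```python
# Toy illustration of the training vs. inference phases.
# "Training" fits model parameters from data; "inference" uses the frozen
# model to generate output. This character-bigram counter is purely a sketch.

from collections import Counter, defaultdict
import random

def train(text: str) -> dict:
    """Training phase: count which character follows which (the 'parameters')."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def infer(model: dict, start: str, length: int, seed: int = 0) -> str:
    """Inference phase: sample new text from the frozen model; no learning."""
    rng = random.Random(seed)
    out = start
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

model = train("the cat sat on the mat")  # training: slow, data-heavy, done once
print(infer(model, "th", 10))            # inference: fast, repeated per request
```

The economics follow the same shape at real scale: training is a one-off, massively parallel job, while inference cost recurs with every user request, which is why faster inference memory matters.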

Nvidia says the H200 nearly doubles the H100's output-generation speed, a claim the company backs with its own tests, including one using Meta's Llama 2 large language model.
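A doubling of inference throughput roughly halves the serving cost per token, all else being equal. The numbers below are entirely hypothetical (the chip-hour price and token rates are placeholders, not figures from Nvidia or the article); only the ~2x ratio reflects the claim above.

```python
# Hypothetical illustration of what "nearly 2x inference speed" implies for
# serving cost: double the throughput, roughly halve the cost per token,
# assuming the same chip-hour price (an assumption, not a quoted figure).

def cost_per_million_tokens(chip_hour_cost: float, tokens_per_sec: float) -> float:
    """Cost in USD to generate one million tokens at a given throughput."""
    return chip_hour_cost / (tokens_per_sec * 3600) * 1_000_000

h100 = cost_per_million_tokens(2.0, 1000)  # placeholder: $2/hr, 1000 tok/s
h200 = cost_per_million_tokens(2.0, 2000)  # ~2x throughput per Nvidia's claim
print(round(h100, 3), round(h200, 3))
```

For companies serving models at scale, this halving of per-token cost, rather than raw benchmark numbers, is the practical meaning of the speed claim.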

Set to hit the market in the second quarter of 2024, the H200 will compete with AMD's MI300X GPU. AMD's chip likewise carries more memory than its predecessors, which helps large models fit on the chip for inference.

Crucially, Nvidia says the H200 is compatible with the H100, so companies already training with its predecessor won't need to change their server systems or software to adopt the new chip.
