Nvidia Developing AI GPU with 144GB HBM3E Memory

Nvidia is gearing up for the release of its next-generation B100 and B200 GPUs, which are built on the Blackwell architecture, as reported by TrendForce. These advanced GPUs are slated to launch in the latter half of this year and will cater primarily to Cloud Service Providers (CSPs) for their cloud computing needs. Additionally, Nvidia plans to introduce a simplified version, the B200A, aimed at OEM enterprise customers with edge AI requirements.

Tight Packaging Capacity

TSMC's CoWoS-L packaging capacity, used by the B200 series, remains tight. The B200A will therefore use the simpler, more mature CoWoS-S packaging technology, freeing the constrained CoWoS-L supply for the B200 parts bound for Cloud Service Providers.

B200A Technical Specifications

Although the full technical specifications of the B200A have not yet been detailed, it is confirmed that its HBM3E capacity will be reduced from 192GB to 144GB. The number of HBM3E stacks will be cut from eight to four, while the capacity of each individual stack will increase from 24GB to 36GB, so the total drops from 8 × 24GB = 192GB to 4 × 36GB = 144GB.
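
As a quick sanity check on those figures, the totals follow directly from the stack count and per-stack capacity. The short Python sketch below is purely illustrative; the function and variable names are our own, not Nvidia's.

```python
# Illustrative check of the reported HBM3E totals (figures from the TrendForce
# report; the function and names below are for illustration only).

def total_hbm_gb(stacks: int, gb_per_stack: int) -> int:
    """Total HBM capacity in GB, given the stack count and per-stack size."""
    return stacks * gb_per_stack

b200_gb = total_hbm_gb(stacks=8, gb_per_stack=24)   # 8 x 24 GB = 192 GB
b200a_gb = total_hbm_gb(stacks=4, gb_per_stack=36)  # 4 x 36 GB = 144 GB

print(f"B200:  {b200_gb} GB HBM3E")   # B200:  192 GB HBM3E
print(f"B200A: {b200a_gb} GB HBM3E")  # B200A: 144 GB HBM3E
```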


Power Consumption and Cooling

The B200A will consume less power than the B200 and will not require liquid cooling, so it can be installed with a standard air cooling setup. The new GPU is expected to be available to OEMs by the second quarter of next year.

Supply chain surveys indicate that Nvidia's primary high-end GPU shipments in 2024 will be based on the Hopper platform, with the H100 and H200 targeting the North American market and the H20 serving the Chinese market. Since the B200A will not be released until around the second quarter of 2025, it is not expected to overlap with the H200, which is scheduled to arrive in the third quarter or later.

(Source: TrendForce)
