NVIDIA Blackwell Ultra AI GPUs renamed to B300 Series
NVIDIA pivots to AI models for bigger profits in 2025

Nvidia’s next-gen Blackwell Ultra GPUs, designed for AI computing, have been rebranded as the B300 series and are expected to feature 12-Hi HBM3e memory and TSMC’s CoWoS-L packaging.
According to TrendForce, Nvidia’s Blackwell Ultra AI GPU lineup has been rebranded as the B300 series. Under the new naming convention, the B200 Ultra AI accelerator and the GB200 Ultra AI server will be referred to as the B300 and GB300, respectively. The brand’s “A” series AI products are also being renamed, with the B200A Ultra and GB200A Ultra becoming the B300A and GB300A. While the exact motive is unclear, Nvidia is believed to be changing the naming scheme to reflect the architectural changes planned for the B300 series.
NVIDIA originally intended to launch the B200A series
The original B200A series, intended for server OEMs, is no longer expected to reach the market. Nvidia shifted to the B300A during the design process, and the B200A plans will now be rolled into the launch of the B300 lineup. Nvidia believes this change will help attract enterprise clients that are not looking to invest heavily in building their own AI computing capacity.
B300 series to use HBM3e 12-Hi
TrendForce also provided a forecast of key specifications for the Nvidia B300 Blackwell Ultra lineup, which is expected to span HGX, MGX, and NVL configurations, with NVL-36 and NVL-72 serving as the powerhouses for servers and data centers. These chips will carry up to 288 GB of HBM3e memory, with 144 GB packages also available. For comparison, AMD’s MI325X AI accelerator currently offers 256 GB, and the company plans to release a 288 GB variant, the MI350X, later in 2025.
Additionally, all B300 models will include HBM3e 12-Hi memory and CoWoS-L packaging. Production is scheduled to begin between the fourth quarter of 2024 and the first quarter of 2025. However, as this marks Nvidia’s first large-scale production of a 12-Hi stack product, suppliers are expected to need at least two quarters to fine-tune their processes and stabilize production yields.
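The headline capacity follows directly from the stack arithmetic. As a rough back-of-the-envelope sketch, assuming 24 Gb (3 GB) HBM3e dies and eight stacks per GPU (neither figure is stated in the TrendForce forecast), a 12-Hi configuration lands at 288 GB per GPU and roughly 20 TB of HBM3e across an NVL-72 rack:

```python
# Back-of-the-envelope check of the reported HBM3e capacities.
# Assumptions (not from the article): 24 Gb (3 GB) HBM3e dies and
# 8 HBM sites per GPU package.

GB_PER_DIE = 3        # 24 Gb HBM3e die = 3 GB (assumed)
DIES_PER_STACK = 12   # "12-Hi" = 12 dies stacked per HBM site
STACKS_PER_GPU = 8    # assumed number of HBM sites on a B300 package

stack_capacity = GB_PER_DIE * DIES_PER_STACK    # 36 GB per stack
gpu_capacity = stack_capacity * STACKS_PER_GPU  # 288 GB per GPU

print(f"Per-stack capacity: {stack_capacity} GB")
print(f"Per-GPU capacity:   {gpu_capacity} GB")

# Rack-level view for the NVL-72 configuration (72 GPUs per rack, per its name):
print(f"NVL-72 HBM3e total: {gpu_capacity * 72 / 1024:.1f} TB")
```

The 144 GB package mentioned in the forecast would correspond to half that memory configuration, depending on how the SKU is cut down.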
NVIDIA’s latest product strategy is heading towards AI models
Nvidia has been chasing the top spot in market capitalization for some time, recently surpassing Microsoft to become the world’s second most valuable company, and demand for its Blackwell lineup is expected to push the numbers further. Shipment trends indicate that Nvidia’s high-end GPU offerings are projected to reach around 50% of overall shipment share in 2024, an increase of over 20 percentage points from the previous year. On top of that, the Blackwell platform is anticipated to lift this figure to 65% in 2025.