Micron ships HBM3E and SOCAMM products for AI servers

19 March 2025


Memory wizardry is the secret sauce behind AI

Memory outfit Micron is shipping both HBM3E and SOCAMM products for AI servers, and claims its chips will be the secret sauce behind the AI boom.

The troubled memory chip industry, still reeling from past supply chain woes, now views AI as its golden ticket, and Micron wants to be at the forefront.

Nvidia has signed up for Micron's SOCAMM, a modular LPDDR5X memory solution supposedly fine-tuned for the Grace Blackwell Ultra Superchip. According to Nvidia and Micron, AI workloads will run more smoothly, faster, and with lower power consumption.

At GTC 2025, Micron plans to showcase its full AI memory and storage kit, aiming to impress the discerning investors of Wall Street.

The lineup includes HBM3E in 8H and 12H variants, as well as LPDDR5X SOCAMMs, GDDR7, DDR5 RDIMMs, and MRDIMMs. For those needing storage, Micron is pushing its high-capacity SSDs and automotive-grade memory, aiming to shove its silicon into everything from data centres to self-driving bangers.

The marketing spiel insists that Micron’s SOCAMM is the world’s fastest, smallest, and lowest-power modular memory solution because, of course, it is. It supposedly delivers 2.5 times the bandwidth of RDIMMs at one-third the size, and sips power like a tea-drinking pensioner.

If the claims hold up, AI server farms might start looking at these as a way to cram more computing power into smaller spaces without overloading the power grid.

Micron's HBM solutions aren’t just trying to keep up with AI's hunger for speed; they want to lead the pack. The firm boasts that its HBM3E 12H 36GB offers a 50 per cent capacity boost over the HBM3E 8H 24GB, while allegedly slashing power usage by a fifth. Micron is already promising HBM4, which it claims will deliver over 50 per cent more performance than today’s best efforts.

Micron is investing heavily in SSDs for AI workloads, with its 61.44TB ION NVMe SSD promising to store more data per rack than ever before. This means AI training farms will soon be running on its chips from start to finish, and its storage solutions will be feeding those models like a buffet in Vegas.

All of this, of course, comes wrapped in the usual PR fluff about ecosystem collaboration, innovation, and power efficiency.

Micron’s Compute and Networking Business Unit senior vice president and general manager, Raj Narasimhan, said: “AI is driving a paradigm shift in computing, and memory is at the heart of this evolution. Micron’s contributions to the NVIDIA Grace Blackwell platform yield significant performance and power-saving benefits for AI training and inference applications. HBM and LP memory solutions help unlock improved computational capabilities for GPUs.”
