HBM4 memory is coming: Samsung, SK Hynix, and Micron lead the charge for AI
Seoul, Thursday, 20 March 2025.
The next generation of high-bandwidth memory is on the horizon. Samsung, SK Hynix, and Micron are developing HBM4 memory, expected in 2025, with HBM4e to follow in 2027. SK Hynix has already delivered samples of its 12-layer HBM4, boasting a 36GB capacity and 2TB/s of bandwidth. This technology promises to boost AI GPU performance, underpinning Nvidia's next-generation GPUs and driving demand for advanced manufacturing from ASML and TSMC.
SK Hynix leads with HBM4 samples
SK Hynix has taken the lead by providing samples of its 12-layer (12Hi) HBM4 memory to key clients [2][3][4]. This memory uses 24Gb DRAM chips and advanced MR-MUF bonding technology [2][4]. A single package boasts a 36GB capacity and a 2TB/s bandwidth [2][4]. SK Hynix anticipates mass production readiness in the second half of 2025, strengthening its position in the next-generation AI memory market [2][4]. The company aims to meet customer demands and overcome technological hurdles to lead AI ecosystem innovation [3].
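The headline figures are internally consistent: 12 layers of 24Gb dies give 36GB per stack, and 2TB/s follows from HBM4's widely reported 2,048-bit per-stack interface (an assumption here, not stated in the article) at 8 Gbps per pin. A minimal sanity check:

```python
# Sanity-check the reported HBM4 per-stack figures.
# Assumption: a 2048-bit per-stack interface, widely reported for HBM4
# but not confirmed in this article.

def stack_capacity_gb(die_density_gbit: float, layers: int) -> float:
    """Per-stack capacity in gigabytes from per-die density in gigabits."""
    return die_density_gbit * layers / 8  # 8 bits per byte

def stack_bandwidth_tbs(pin_speed_gbps: float, bus_width_bits: int = 2048) -> float:
    """Per-stack bandwidth in TB/s from per-pin speed and bus width."""
    return pin_speed_gbps * bus_width_bits / 8 / 1000

# SK Hynix 12-Hi HBM4: 24Gb dies x 12 layers
print(stack_capacity_gb(24, 12))  # -> 36.0 GB
# 8 Gbps per pin on an assumed 2048-bit bus
print(stack_bandwidth_tbs(8.0))   # -> 2.048 TB/s
```

The 2TB/s figure thus implies roughly 8 Gbps per pin if the 2,048-bit interface assumption holds.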
Samsung's HBM roadmap
Samsung’s roadmap reveals ambitions for 48GB and 64GB per stack, with a density of 32Gb per layer [1]. These HBM4 modules will target speeds of 9.2 Gbps to 10 Gbps, available in 8, 12, and 16-high stacks [1]. While Samsung is working to catch up, recent reports indicate their HBM3E is undergoing Nvidia’s quality certification, with expected approval by early June [4]. Securing Nvidia’s validation is crucial for Samsung to remain competitive in the high-bandwidth memory market [4].
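Samsung's stated targets line up the same way: 32Gb per layer yields 48GB at 12-high and 64GB at 16-high, and the 9.2-10 Gbps pin speeds would translate to roughly 2.36-2.56 TB/s per stack, again assuming the commonly reported 2,048-bit HBM4 interface (an assumption, not from this article):

```python
# Capacity and bandwidth implied by Samsung's stated HBM4 targets.
# Assumption: a 2048-bit per-stack interface, as widely reported for HBM4.

for layers in (8, 12, 16):
    gb = 32 * layers / 8  # 32Gb per layer, 8 bits per byte
    print(f"{layers}-high stack: {gb:.0f} GB")  # 32, 48, 64 GB

for gbps in (9.2, 10.0):
    tbs = gbps * 2048 / 8 / 1000
    print(f"{gbps} Gbps per pin: {tbs:.2f} TB/s")  # ~2.36 and ~2.56 TB/s
```

At the top of that range, a single Samsung stack would comfortably exceed the 2TB/s SK Hynix is sampling today.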
Nvidia's future GPU architecture
Nvidia's next-generation Rubin architecture will use HBM4 memory, while Rubin Ultra will move to HBM4e [1]. The Rubin Ultra NVL576, set to launch in the second half of 2027, will feature 1TB of HBM4e memory per GPU [1]. This configuration spans 576 nodes with 2304 Rubin GPUs and 576 Vera CPUs, underscoring the escalating demand for memory bandwidth in AI applications [7]. Nvidia's roadmap also includes the 'Feynman' architecture in 2028, potentially using HBM5 memory [7].
Implications for ASML and TSMC
The push for HBM4 and HBM4e will likely increase demand for advanced manufacturing technologies [1]. ASML’s lithography systems and TSMC’s fabrication capabilities are essential for producing these high-density memory chips [GPT]. Investors should monitor these companies, as their technologies are vital for enabling the performance gains promised by HBM4 [GPT]. The intricate packaging and high-density requirements of HBM technology will further solidify ASML and TSMC’s critical roles in the semiconductor supply chain [GPT].
Market outlook and investor considerations
The transition to HBM4 represents a significant opportunity for memory manufacturers [1]. SK Hynix’s early sampling and Samsung’s efforts to gain Nvidia’s approval highlight the competitive landscape [4]. Micron’s role in this market should also be closely monitored as they ramp up their HBM4 production [1]. Investors should watch for announcements regarding production yields, design wins, and partnerships with major GPU vendors [GPT]. Success in the HBM4 market will likely translate to increased stock value for the winning companies [GPT].
Sources
1. videocardz.com
2. www.ithome.com
3. www.sohu.com
4. m.mydrivers.com
5. m.weibo.cn
6. digi.ithome.com
7. news.mydrivers.com
8. www.360doc.com