What do you get when you combine some of the world’s leading technology analysts with incredibly smart subject matter experts? Answer: the Six Five Media video podcast. It’s must-view content for anyone who wants to understand exactly how AI technologies are evolving.
At Marvell’s recent Investor Analysts Day, company leaders were happy to chat with Patrick Moorhead, CEO and Chief Analyst at Moor Insights & Strategy, and Daniel Newman, CEO and Chief Analyst at The Futurum Group. The resulting conversations (captured on video) were enlightening:
How Custom HBM is Shaping AI Chip Technology
Fresh off Marvell’s announcement of a partnership with SK Hynix, Micron Technology and Samsung Semiconductor, Patrick and Daniel dove into the details with leaders from those organizations. The partnership centers on custom high bandwidth memory (HBM), which sits inside AI accelerators to store data close to the processors.
Custom designs alleviate the physical and thermal constraints that chip designers traditionally face by dramatically reducing the size and power consumption of the HBM interface and base die. Marvell estimates that up to 25% of the real estate inside the chip package can be recovered through customization.
Will Chu, SVP and GM of Custom Compute and Storage at Marvell, says the company estimates that the total addressable market (TAM) for data centers will reach $75B in three to four years, up from $21B last year. Of that, Marvell estimates $40-43B will be for custom accelerators.
Custom HBM is a key part of that opportunity, alleviating memory bottlenecks for AI workloads. In Dong Kim, VP of Product Planning at Samsung Semiconductor, said, “Custom HBM will be the majority portion of the market towards the 2027-28 timeframe.” As Patrick Moorhead said, “The rate of change is phenomenal.”
The Value of Custom Silicon in the AI Era
In this video, Will Chu continues to share insights, joined by Sandeep Bharathi, Chief Development Officer at Marvell. Will Townsend discusses custom silicon for AI with the two leaders. Customization is essential in the data center because each high-performance workload brings its own applications and nuances.
“There’s quite a large opportunity and there’s the customer need to continue driving higher levels of performance and better TCO, and we have the technology platform that Sandeep’s team builds to deliver all of that,” said Will Chu. “We’ve really focused the company to be able to serve our customers and be successful in enabling that next-generation infrastructure.”
He also highlights that building the infrastructure for AI is a once-in-a-lifetime opportunity, and that Marvell is well positioned as a leader in this space.
Sandeep rounds out the conversation with the following thoughts:
“Hardware is cool again. Software needs to run on something that can make all the magic possible. We are here to really push innovation to the bleeding edge and really make a difference in custom silicon, whether it is from a connectivity, compute, or holistic platform perspective.”
Tags: custom computing, ASIC, AI, AI infrastructure