By Chander Chadha, Director of Marketing, Flash Storage Products, Marvell
AI is all about dichotomies. Distinct computing architectures and processors have been developed for training and inference workloads. In the past two years, scale-up and scale-out networks have emerged.
Soon, the same will happen in storage.
The demands of AI infrastructure are prompting storage companies to develop SSDs, controllers, NAND and other technologies fine-tuned to support GPUs, with an emphasis on higher IOPS (input/output operations per second) for AI inference. These will be fundamentally different from CPU-connected drives, where latency and capacity are the bigger focus points. This drive bifurcation also likely won’t be the last: expect to see drives optimized specifically for training or for inference as well.
As in other technology markets, the changes are being driven by the rapid growth of AI and the equally rapid growth in the need to boost the performance, efficiency and TCO of AI infrastructure. The total amount of SSD capacity inside data centers is expected to double to approximately 2 zettabytes by 2028, with the growth primarily fueled by AI.1 By that year, SSDs will account for 41% of the installed base of data center drives, up from 25% in 2023.1
Greater storage capacity, however, also potentially means more storage network complexity, latency and storage management overhead. It also potentially means more power. In 2023, SSDs accounted for 4 terawatt hours (TWh) of data center power, or around 25% of the 16 TWh consumed by storage. By 2028, SSDs are slated to account for 11 TWh, or 50%, of storage’s expected total for the year.1 While storage represents less than five percent of total data center power consumption, the total remains large and provides real incentive for savings: reducing storage power by even 1 TWh, or less than 10%, would save enough electricity to power 90,000 US homes for a year.2 Finding the right balance between capacity, speed, power and cost will be critical for both AI data center operators and their customers. Creating distinct categories of storage technologies is the first step toward optimizing products in a way that will scale.
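The power figures above are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below does so in Python; the ~10,800 kWh/year average US household consumption is an assumption (roughly in line with published EIA estimates), not a figure from the article.

```python
# Back-of-the-envelope check of the storage-power figures cited above.

SSD_POWER_2023_TWH = 4       # SSD share of data center storage power, 2023
STORAGE_POWER_2023_TWH = 16  # total data center storage power, 2023
SSD_POWER_2028_TWH = 11      # projected SSD power, 2028 (50% of storage total)

# SSDs' share of storage power in 2023: 4 / 16 = 25%, as stated.
ssd_share_2023 = SSD_POWER_2023_TWH / STORAGE_POWER_2023_TWH
print(f"SSD share of storage power, 2023: {ssd_share_2023:.0%}")

# If 11 TWh is 50% of the 2028 storage total, that total is 22 TWh.
storage_power_2028 = SSD_POWER_2028_TWH / 0.50
print(f"Implied total storage power, 2028: {storage_power_2028:.0f} TWh")

# Homes powered for a year by 1 TWh, assuming ~10,800 kWh/home/year.
HOME_KWH_PER_YEAR = 10_800   # assumed average; not from the article
homes = 1e9 / HOME_KWH_PER_YEAR  # 1 TWh = 1e9 kWh
print(f"Homes powered by 1 TWh for a year: {homes:,.0f}")
```

Under that assumed household average, 1 TWh works out to roughly 92,600 homes for a year, consistent with the article's figure of 90,000.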
By Kishore Atreya, Director of Product Management, Marvell
Recently the Linux Foundation hosted its annual ONE Summit for open networking, edge projects and solutions. For the first time, this year’s event included a “mini-summit” for SONiC, an open source networking operating system targeted for data center applications that’s been widely adopted by cloud customers. A variety of industry members gave presentations, including Marvell’s very own Vijay Vyas Mohan, who presented on the topic of Extensible Platform Serdes Libraries. In addition, the SONiC mini-summit included a hackathon to motivate users and developers to innovate new ways to solve customer problems.
So, what could we hack?
At Marvell, we believe that SONiC has utility not only for the data center, but also for solutions that span from edge to cloud. Because it is a data center NOS, however, SONiC is not optimized for edge use cases. It requires an expensive bill of materials (BOM) to run, including a powerful CPU, a minimum of 8 to 16 GB of DDR memory, and an SSD. In the data center environment, these hardware resources contribute less to the BOM cost than the optics and switch ASIC do. For edge use cases with 1G to 10G interfaces, however, the cost of the processor complex, driven primarily by the NOS requirements, can be a much more significant contributor to overall system cost. For edge disaggregation with SONiC to be viable, the hardware cost needs to be comparable to that of a typical OEM-based solution. Today, that’s not possible.
By Kristin Hehir, Senior Manager, PR and Marketing, Marvell
Flash Memory Summit (FMS), the industry’s largest conference featuring data storage and memory technology solutions, presented its 2022 Best of Show Awards at a ceremony held in conjunction with this week’s event. Marvell was named a winner alongside Exascend for the collaboration of Marvell’s edge and client SSD controller with Exascend’s high-performance memory card.
Honored as the “Most Innovative Flash Memory Consumer Application,” the Exascend Nitro CFexpress card powered by Marvell’s PCIe® Gen 4, 4-NAND channel 88SS1321 SSD controller enables digital storage of Ultra HD video and photos in extreme temperature environments where ruggedness, endurance and reliability are critical. The Nitro CFexpress card is unique in its controller, hardware and firmware architecture, combining Marvell’s 12nm process node, low-power, compact form factor SSD controller with Exascend’s innovative hardware design and Adaptive Thermal Control™ technology.
The Nitro card is the highest-capacity VPG400 CFexpress card on the market, offering up to 1 TB of storage, and is certified by the CompactFlash® Association under its stringent Video Performance Guarantee Profile 4 (VPG400) qualification. Marvell’s 88SS1321 controller helps drive the Nitro card’s 1,850 MB/s sustained read and 1,700 MB/s sustained write speeds for ultimate performance.
“Consumer applications, such as high-definition photography and video capture using professional photography and cinema cameras, require the highest performance from their storage solution. They also require the reliability to address the dynamics of extreme environmental conditions, both indoors and outdoors,” said Jay Kramer, Chairman of the Awards Program and President of Network Storage Advisors Inc. “We are proud to recognize the collaboration of Marvell’s SSD controllers with Exascend’s memory cards, delivering 1,850 MB/s of sustained read and 1,700 MB/s sustained write for ultimate performance addressing the most extreme consumer workloads. Additionally, Exascend’s Adaptive Thermal Control™ technology provides an IP67 certified environmental hardening that is dustproof, water resistant and tackles the issue of overheating and thermal throttling.”
More information on the 2022 Flash Memory Summit Best of Show Award Winners can be found here.
Copyright © 2025 Marvell, All rights reserved.