We’re Building the Future of Data Infrastructure

Archive for the 'Cloud' Category

  • June 18, 2024

    Custom Compute in the AI Era

    This article is the final installment in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024.

    AI demands are pushing the limits of semiconductor technology, and hyperscale operators are at the forefront of adoption—they develop and deploy leading-edge technology that increases compute capacity. These large operators seek to optimize performance while simultaneously lowering total cost of ownership (TCO). With billions of dollars on the line, many have turned to custom silicon to meet their TCO and compute performance objectives.

    But building a custom compute solution is no small matter. Doing so requires a large IP portfolio, significant R&D scale and decades of experience to create the mix of ingredients that make up custom AI silicon. Today, Marvell is partnering with hyperscale operators to deliver custom compute silicon that’s enabling their AI growth trajectories.

    Why are hyperscale operators turning to custom compute?

    Hyperscale operators have always been focused on maximizing both performance and efficiency, but new demands from AI applications have amplified the pressure. According to Raghib Hussain, president of products and technologies at Marvell, “Every hyperscaler is focused on optimizing every aspect of their platform because the order of magnitude of impact is much, much higher than before. They are not only achieving the highest performance, but also saving billions of dollars.”

    With multiple business models in the cloud, including internal apps, infrastructure-as-a-service (IaaS), and software-as-a-service (SaaS)—the latter of which is the fastest-growing market thanks to generative AI—hyperscale operators are constantly seeking ways to improve their total cost of ownership. Custom compute allows them to do just that. Operators are first adopting custom compute platforms for their mass-scale internal applications, such as search and their own SaaS applications. Next up for greater custom adoption will be third-party SaaS and IaaS, where the operator offers their own custom compute as an alternative to merchant options.

    Progression of custom silicon adoption in hyperscale data centers.

  • June 11, 2024

    How AI Will Drive Cloud Switch Innovation

    This article is part five in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024. 

    AI has fundamentally changed the network switching landscape. AI requirements are driving foundational shifts in the industry roadmap, expanding the use cases for cloud switching semiconductors and creating opportunities to redefine the terrain.

    Here’s how AI will drive cloud switching innovation.

    A changing network requires a change in scale

    In a modern cloud data center, the compute servers are connected to one another and to the internet through a network of high-bandwidth switches. The approach is like that of the internet itself, allowing operators to build a network of any size while mixing and matching products from various vendors to create a network architecture specific to their needs.
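
    To make that scaling property concrete, here is a rough, hypothetical sizing of a two-tier leaf-spine fabric built from identical switches. The port count and oversubscription ratio below are illustrative assumptions, not Marvell product figures.

    ```python
    # Hypothetical back-of-the-envelope sizing of a two-tier leaf-spine fabric.
    # All inputs are illustrative assumptions, not vendor specifications.

    def leaf_spine_capacity(radix: int, oversubscription: float = 1.0) -> dict:
        """Estimate how many servers a two-tier leaf-spine fabric can attach.

        radix: ports per switch (the same switch model is used for leaf and spine)
        oversubscription: ratio of server-facing to spine-facing bandwidth at each leaf
        """
        # Split each leaf's ports between server-facing downlinks and spine-facing uplinks.
        uplinks = int(radix / (1 + oversubscription))
        downlinks = radix - uplinks
        # Each spine port connects to one leaf, so the spine radix caps the leaf count.
        max_leaves = radix
        return {"leaves": max_leaves, "spines": uplinks, "servers": max_leaves * downlinks}

    # Example: 64-port switches with 3:1 oversubscription at the leaf.
    print(leaf_spine_capacity(64, oversubscription=3.0))
    ```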

    Such a high-bandwidth switching network is critical for AI applications, and a higher-performing network can lead to a more profitable deployment.

    However, expanding and extending the general-purpose cloud network to AI isn’t quite as simple as just adding more building blocks. In the world of general-purpose computing, one or more workloads can fit on a single server CPU. In contrast, AI’s large datasets don’t fit on a single processor, whether it’s a CPU, GPU or other accelerated compute device (XPU), making it necessary to distribute the workload across multiple processors. These accelerated processors must then function as a single computing element.
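
    A quick, hypothetical calculation shows why a single device isn’t enough. The parameter count, precision, memory capacity, and overhead factor below are assumptions chosen only to illustrate the scale.

    ```python
    # Illustrative sketch of why AI workloads must be split across many processors.
    # Parameter count, precision, memory capacity and overhead are assumptions only.
    import math

    def devices_needed(params_billions: float, bytes_per_param: int,
                       hbm_gib_per_device: int, overhead: float = 1.5) -> int:
        """Rough count of XPUs needed just to hold the model state in memory.

        overhead: multiplier covering optimizer state, activations and framework overhead.
        """
        model_bytes = params_billions * 1e9 * bytes_per_param * overhead
        device_bytes = hbm_gib_per_device * 2**30
        return math.ceil(model_bytes / device_bytes)

    # Example: a hypothetical 1-trillion-parameter model in 16-bit precision on
    # devices with 80 GiB of high-bandwidth memory each.
    print(devices_needed(1000, 2, 80))  # dozens of devices before any parallel speedup
    ```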

    AI calls for enhanced cloud switch architecture

    AI requires accelerated infrastructure to split workloads across many processors.

  • March 23, 2023

    How Secure is Your 5G Network?

    By Bill Hagerstrand, Security Solutions BU, Marvell

    New Challenges and Solutions in an Open, Disaggregated Cloud-Native World

    Time to grab a cup of coffee, as I describe how the transition towards open, disaggregated, and virtualized networks – also known as cloud-native 5G – has created new challenges in an already-heightened 4G-5G security environment.

    5G networks move, process and store an ever-increasing amount of sensitive data as a result of faster connection speeds, the mission-critical nature of new enterprise, industrial and edge computing/AI applications, and the proliferation of 5G-connected IoT devices and data centers. At the same time, evolving architectures are creating new security threat vectors. The opening of the 5G network edge is driven by O-RAN standards, which disaggregate the radio units (RU), front-haul, mid-haul, and distributed units (DU). Virtualization of the 5G network further disaggregates hardware and software and introduces commodity servers with open-source software running in virtual machines (VMs) or containers from the DU to the core network.

    These factors have necessitated improvements in 5G security standards that include additional protocols and new security features. But these measures alone are not enough to secure the 5G network in the cloud-native and quantum computing era. This blog details the growing need for cloud-optimized HSMs (hardware security modules) and their many critical 5G use cases from the device to the core network.
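
    To make the HSM’s role more concrete, here is a minimal sketch of the key-wrapping pattern an HSM enables, written against the widely used Python cryptography package. The software-generated root key is only a stand-in for illustration; in a real deployment the root key never leaves the HSM and is reached through an interface such as PKCS#11.

    ```python
    # Conceptual sketch of HSM-style key wrapping in a 5G core.
    # A software key stands in for the HSM-resident root key purely for illustration;
    # in production the root key stays inside the HSM and is accessed via an API
    # such as PKCS#11 rather than being held in host memory.
    import os
    from cryptography.fernet import Fernet

    root_key = Fernet.generate_key()       # would live inside the HSM
    root = Fernet(root_key)

    # A per-session key used to protect subscriber traffic or control-plane data.
    session_key = os.urandom(32)

    # Wrap (encrypt) the session key under the root key before it is stored on
    # commodity servers, VMs or containers at the disaggregated network edge.
    wrapped = root.encrypt(session_key)

    # Later, the wrapped key is passed back to the HSM boundary to be unwrapped.
    assert root.decrypt(wrapped) == session_key
    ```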

  • January 04, 2023

    Software-Defined Networking for the Software-Defined Vehicle

    By Amir Bar-Niv, VP of Marketing, Automotive Business Unit, Marvell; John Heinlein, Chief Marketing Officer, Sonatus; and Simon Edelhaus, VP SW, Automotive Business Unit, Marvell

    The software-defined vehicle (SDV) is one of the newest and most interesting megatrends in the automotive industry. As we discussed in a previous blog, this new architectural—and business—model will be successful because of the advantages it offers to all stakeholders:

    • The OEMs (car manufacturers) will gain new revenue streams from aftermarket services and new applications;
    • The car owners will easily upgrade their vehicle features and functions; and
    • The mobile operators will profit from increased vehicle data consumption driven by new applications.

    What is a software-defined vehicle? While there is no official definition, the term reflects the change in the way software is being used in vehicle design to enable flexibility and extensibility. To better understand the software-defined vehicle, it helps to first examine the current approach.

    Today’s embedded control units (ECUs) that manage car functions do include software; however, the software in each ECU is often incompatible with, and isolated from, other modules. When updates are required, the vehicle owner must visit the dealer service center, which inconveniences the owner and is costly for the manufacturer.

  • November 28, 2022

    A Marvell-ous Hack Indeed – Winning the Hearts of SONiC Users

    By Kishore Atreya, Director of Product Management, Marvell

    Recently the Linux Foundation hosted its annual ONE Summit for open networking, edge projects and solutions. For the first time, this year’s event included a “mini-summit” for SONiC, an open-source network operating system targeted at data center applications that has been widely adopted by cloud customers. A variety of industry members gave presentations, including Marvell’s very own Vijay Vyas Mohan, who presented on the topic of Extensible Platform Serdes Libraries. In addition, the SONiC mini-summit included a hackathon to motivate users and developers to innovate new ways to solve customer problems.

    So, what could we hack?

    At Marvell, we believe that SONiC has utility not only for the data center, but also for solutions that span from edge to cloud. Because it is a data center NOS, however, SONiC is not optimized for edge use cases. It requires an expensive bill of materials (BOM) to run, including a powerful CPU, a minimum of 8 to 16 GB of DDR memory, and an SSD. In the data center environment, these hardware resources contribute less to the BOM cost than do the optics and switch ASIC. However, for edge use cases with 1G to 10G interfaces, the cost of the processor complex, driven primarily by the NOS requirements, can be a much more significant contributor to overall system cost. For edge disaggregation with SONiC to be viable, the hardware cost needs to be comparable to that of a typical OEM-based solution. Today, that’s not possible.
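
    The rough sketch below illustrates that cost shift. Every dollar figure is a hypothetical placeholder, not actual product pricing; the point is only that the same processor complex becomes a much larger slice of a smaller pie at the edge.

    ```python
    # Rough illustration of why the processor complex dominates edge BOM cost.
    # All dollar figures are hypothetical placeholders, not real product pricing.

    def processor_share(cpu_complex: float, switch_asic: float,
                        optics: float, other: float) -> float:
        """Fraction of the system BOM attributable to the CPU/DRAM/SSD complex."""
        total = cpu_complex + switch_asic + optics + other
        return cpu_complex / total

    # Hypothetical data center ToR switch: a high-end ASIC and optics dwarf the
    # processor complex needed to run the NOS.
    print(f"data center: {processor_share(300, 2000, 3000, 700):.0%}")

    # Hypothetical 1G-10G edge box: a cheaper ASIC and no expensive optics, so the
    # same processor complex becomes a large share of total system cost.
    print(f"edge:        {processor_share(300, 200, 50, 150):.0%}")
    ```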

  • October 26, 2022

    The Tasting Notes for 64G Fibre Channel

    By Nishant Lodha, Director of Product Marketing – Emerging Technologies, Marvell

    Age is just a number, and so is a new speed for Fibre Channel (FC); the number itself matters less than the maturity behind it – kind of like a bottle of wine! So today, as we toast the data center and pop open (announce) the Marvell® QLogic® 2870 Series 64G Fibre Channel HBAs, take a glass and sip into its maturity to find notes of trust and reliability alongside operational simplicity, in-depth visibility, and consistent performance.

    Big words on the label? I will let you be the sommelier as you work through your glass and my writings.

    Marvell QLogic 2870 series 64GFC HBAs

  • October 12, 2022

    The Evolution of Cloud Storage and Memory

    By Gary Kotzur, CTO, Storage Products Group, Marvell and Jon Haswell, SVP, Firmware, Marvell

    The nature of storage is changing much more rapidly than it ever has historically. This evolution is being driven by expanding amounts of enterprise data and the inexorable need for greater flexibility and scale to meet ever-higher performance demands.

    If you look back 10 or 20 years, there used to be a one-size-fits-all approach to storage. Today, however, there is the public cloud, the private cloud, and the hybrid cloud, which is a combination of both. All these clouds have different storage and infrastructure requirements. What’s more, the data center infrastructure of every hyperscaler and cloud provider is architecturally different and is moving towards a more composable architecture. All of this is driving the need for highly customized cloud storage solutions, as well as for comparable solutions in the memory domain.

  • October 05, 2022

    Designing energy efficient chips

    By Rebecca O'Neill, Global Head of ESG, Marvell

    Today is Energy Efficiency Day. Energy, specifically the electricity consumption required to power our chips, is top of mind here at Marvell. Our goal is to reduce the power consumption of our products with each generation for a given set of capabilities.

    Our products play an essential role in powering data infrastructure spanning cloud and enterprise data centers, 5G carrier infrastructure, automotive vehicles, and industrial and enterprise networking. When we design our products, we focus on innovative features that deliver new capabilities while also improving performance, capacity and security to ultimately improve energy efficiency during product use.

    These innovations help make the world’s data infrastructure more efficient and, by extension, reduce our collective impact on climate change. The use of our products by our customers contributes to Marvell’s Scope 3 greenhouse gas emissions, which is our biggest category of emissions.
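
    As a rough sketch of how product use drives those Scope 3 emissions, the calculation below multiplies a device’s energy use over its service life by a grid carbon-intensity factor. The power draw, lifetime, and grid intensity are hypothetical values for illustration only.

    ```python
    # Hypothetical estimate of use-phase (Scope 3) emissions for one deployed device.
    # Power draw, service life and grid carbon intensity are illustrative assumptions.

    def use_phase_emissions_kg(avg_power_w: float, lifetime_hours: float,
                               grid_kg_co2e_per_kwh: float) -> float:
        """Lifetime use-phase emissions, in kg CO2e, for a single device."""
        energy_kwh = avg_power_w * lifetime_hours / 1000.0
        return energy_kwh * grid_kg_co2e_per_kwh

    # Example: a 20 W device running continuously for 5 years on a grid emitting
    # 0.4 kg CO2e per kWh.
    lifetime_hours = 5 * 365 * 24
    print(f"{use_phase_emissions_kg(20, lifetime_hours, 0.4):.0f} kg CO2e")
    ```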

  • September 26, 2022

    SONiC: It’s Not Just for Switches Anymore

    By Amit Sanyal, Senior Director, Product Marketing, Marvell

    SONiC (Software for Open Networking in the Cloud) has steadily gained momentum as a cloud-scale network operating system (NOS) by offering a community-driven approach to NOS innovation. In fact, 650 Group predicts that revenue for SONiC hardware, controllers and OSs will grow from around US$2 billion today to around US$4.5 billion by 2025. 

    Those using it know that the SONiC open-source framework shortens software development cycles, and that SONiC’s Switch Abstraction Interface (SAI) provides ease of porting and a homogeneous edge-to-cloud experience for data center operators. It also speeds time-to-market for OEMs bringing new systems to market.
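
    The portability benefit of SAI comes from coding the NOS against a common interface while each ASIC supplies its own implementation. SAI itself is a C API; the Python sketch below only mimics that pattern, and the class and method names are illustrative, not actual SAI calls.

    ```python
    # Conceptual illustration of the abstraction idea behind SAI: the NOS targets a
    # common interface, and each switch ASIC vendor supplies its own implementation.
    # (SAI itself is a C API; these class and method names are illustrative only.)
    from abc import ABC, abstractmethod

    class SwitchAbstraction(ABC):
        @abstractmethod
        def create_vlan(self, vlan_id: int) -> None: ...

        @abstractmethod
        def add_route(self, prefix: str, next_hop: str) -> None: ...

    class VendorASwitch(SwitchAbstraction):
        """One vendor's ASIC backend; another vendor would supply its own class."""
        def create_vlan(self, vlan_id: int) -> None:
            print(f"vendor-A SDK: program VLAN {vlan_id}")

        def add_route(self, prefix: str, next_hop: str) -> None:
            print(f"vendor-A SDK: install route {prefix} via {next_hop}")

    def provision(switch: SwitchAbstraction) -> None:
        # The NOS-level logic stays identical regardless of the ASIC underneath.
        switch.create_vlan(100)
        switch.add_route("10.0.0.0/24", "192.168.1.1")

    provision(VendorASwitch())
    ```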

    The bottom line: more choice is good when it comes to building disaggregated networking hardware optimized for the cloud. Over recent years, cloud customers using SONiC have benefited from a consistent user experience, unified automation, and software portability across switch platforms, at scale.

    As the utility of SONiC has become evident, other applications are lining up to benefit from this open-source ecosystem.

    A SONiC Buffet: Extending SONiC to Storage

    SONiC capabilities in Marvell’s cloud-optimized switch silicon include high availability (HA) features, RDMA over Converged Ethernet (RoCE), low latency, and advanced telemetry. All these features are required to run robust storage networks.

    Here’s one use case: EBOF. The capabilities above form the foundation of Marvell’s Ethernet-Bunch-of-Flash (EBOF) storage architecture. The new EBOF architecture addresses the non-storage bottlenecks that constrain the performance of the traditional Just-a-Bunch-of-Flash (JBOF) architecture it replaces by disaggregating storage from compute.

    EBOF architecture replaces the bottleneck components found in JBOF - CPUs, DRAM and SmartNICs - with an Ethernet switch, and it’s here that SONiC is added to the plate. Marvell has, for the first time, applied SONiC to storage, specifically for services enablement, including the NVMe-oF™ (NVM Express over Fabrics) discovery controller, and out-of-band management for EBOF using Redfish® management. This implementation is in production today on the Ingrasys ES2000 EBOF storage solution. (For more on this topic, check out this, this, and this.)
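
    For a sense of what out-of-band Redfish management looks like in practice, here is a minimal sketch that walks a Redfish service root over HTTP. The /redfish/v1/ path is the standard DMTF Redfish service root; the management address and credentials are placeholders, and this is not Ingrasys- or Marvell-specific code.

    ```python
    # Minimal sketch of out-of-band management via Redfish, the kind of interface
    # used to manage an EBOF enclosure. The address and credentials are placeholders;
    # /redfish/v1/ is the standard service root defined by the DMTF Redfish spec.
    import requests

    BMC = "https://192.0.2.10"            # placeholder management-controller address
    session = requests.Session()
    session.auth = ("admin", "password")  # placeholder credentials
    session.verify = False                # lab-only: skip TLS certificate checks

    # Fetch the service root, then walk to the chassis collection.
    root = session.get(f"{BMC}/redfish/v1/").json()
    chassis_path = root["Chassis"]["@odata.id"]
    for member in session.get(f"{BMC}{chassis_path}").json()["Members"]:
        print("chassis:", member["@odata.id"])
    ```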

    Marvell has now extended SONiC NOS to enable storage services, thus bringing the benefits of disaggregated open networking to the storage domain.

    OK, tasty enough, but what about compute?

    How Would You Like Your Arm Prepared?

    I prefer Arm for my control plane processing, you say. Why can’t I manage those switch-based processors using SONiC, too, you ask? You’re in luck. For the first time, SONiC is the OS for Arm-based, embedded control plane processors, specifically the control plane processors found on Marvell® Prestera® switches. SONiC-enabled Arm processing allows SONiC to run on lower-cost 1G systems, reducing the bill-of-materials, power, and total cost of ownership for both management and access switches.

    In addition to embedded processors, Marvell offers a smorgasbord of Arm-based processors in the OCTEON® family, including data processing units (DPUs) and SmartNICs. These can be paired with Marvell switches to bring the benefits of the Arm ecosystem to networking.

    By combining SONiC with Arm processors, we’re setting the table for the broad Arm software ecosystem - which will develop applications for SONiC that can benefit both cloud and enterprise customers.

    The Third Course

    So, you’ve made it through the SONiC-enabled switching and on-chip control processing courses, but there’s something more you need to round out the meal. Something to reap the full benefit of your SONiC experience. PHY, of course. Whether your taste runs to copper or optical media, PAM or coherent modulation, Marvell provides a complete SONiC-enabled portfolio by offering SONiC with our (not baked) Alaska® Ethernet PHYs and optical modules built using Marvell DSPs.

    Room for Dessert?

    Finally, by enabling SONiC across the data center and enterprise switch portfolio we’re able to bring operators the enhanced telemetry and visibility capabilities that are so critical to effective service-level validation and troubleshooting. For more information on Marvell telemetry capabilities, check out this short video:

    The Drive Home

    Disaggregation has lowered the barrier-to-entry for market participants - unleashing new innovations from myriad hardware and software suppliers. By making use of SONiC, network designers can readily design and build disaggregated data center and enterprise networks.

    For its part, Marvell’s goal is simple: help realize the vision of an open-source standardized network operating system and accelerate its adoption.
