By Rohan Gandhi, Director of Product Management for Switching Products, Marvell
Power and space are two of the most critical resources in building AI infrastructure. That’s why Marvell is working with cabling partners and other industry experts to build a framework that enables data center operators to integrate co-packaged copper (CPC) interconnects into scale-up networks.
Unlike traditional printed circuit board (PCB) traces, CPCs aren’t embedded in circuit boards. Instead, they consist of discrete ribbons or bundles of twinax cable that run alongside the board. By taking the connection out of the board, CPCs extend the reach of copper links without the need for additional components such as equalizers or amplifiers, while also reducing interference, improving signal integrity, and lowering the power budget of AI networks.
Being completely passive, CPCs can’t match the reach of active electrical cables (AECs) or optical transceivers. They do, however, extend farther than traditional direct attach copper (DAC) cables, making them an optimal solution for XPU-to-XPU connections within a tray or for connecting XPUs in a tray to the backplane. Typical 800G CPC connections between processors within the same tray span a few hundred millimeters, while XPU-to-backplane connections can reach 1.5 meters. Looking ahead, 1.6T CPCs based around 200G lanes are expected within the next two years, followed by 3.2T solutions.
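To make those reach tiers concrete, the short sketch below maps a target link length to an interconnect class using only the distances cited in this post: a few hundred millimeters for in-tray XPU-to-XPU CPC links, roughly 1.5 meters for XPU-to-backplane CPC links, and active or optical cables beyond that. The function and thresholds are illustrative approximations, not a Marvell design rule.

```python
# Illustrative only: choose an interconnect class from a target link length,
# using the approximate reach tiers described in the post.
def pick_interconnect(link_length_m: float) -> str:
    if link_length_m <= 0.3:      # "a few hundred millimeters": XPU-to-XPU in a tray
        return "CPC (in-tray, XPU-to-XPU)"
    if link_length_m <= 1.5:      # XPU-to-backplane reach cited in the post
        return "CPC (XPU-to-backplane)"
    # Beyond passive copper reach, active electrical or optical links take over
    return "AEC / optical transceiver"

for length_m in (0.2, 1.0, 2.5):
    print(f"{length_m:.1f} m -> {pick_interconnect(length_m)}")
```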
While the vision can be straightforward to describe, it involves painstaking engineering and cooperation across different ecosystems. Marvell has been cultivating partnerships to ensure a smooth transition to CPCs as well as create an environment where the technology can evolve and scale rapidly.
By Nicola Bramante, Senior Principal Engineer, Connectivity Marketing, Marvell
The exponential growth in AI workloads drives new requirements for connectivity in terms of data rate, associated bandwidth and distance, especially for scale-up applications. With direct attach copper (DAC) cables reaching their limits in terms of bandwidth and distance, a new class of cables, active copper cables (ACCs), is coming to market for short-reach links within a data center rack and between racks. Designed for connections of up to 2 to 2.5 meters, ACCs can transmit signals farther than traditional passive DAC cables in the 200G/lane fabrics hyperscalers will soon deploy in their rack infrastructures.
At the same time, a 1.6T ACC consumes a relatively minuscule 2.5 watts of power and can be built around fewer and less sophisticated components than longer-reach active electrical cables (AECs) or active optical cables (AOCs). That combination gives ACCs an optimal mix of bandwidth, power, and cost for server-to-server or server-to-switch connections within the same rack.
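To put that 2.5-watt figure in perspective, the quick calculation below converts it into energy per bit for a 1.6T link. Only the numbers stated above are used; the formula itself is standard.

```python
# Energy per bit for a 1.6T ACC drawing 2.5 W, using the figures in the post.
power_w = 2.5             # cable power stated above
data_rate_bps = 1.6e12    # 1.6 Tbps aggregate

energy_per_bit_pj = power_w / data_rate_bps * 1e12
print(f"~{energy_per_bit_pj:.2f} pJ/bit")   # ~1.56 pJ/bit
```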
Marvell announced its first linear equalizers for ACC cables last month.
Inside the Cable
ACCs effectively integrate technology originally developed for the optical realm into copper cables. The idea is to use optical technologies to extend bandwidth, distance and performance while taking advantage of copper’s economics and reliability. Where ACCs differ from other active cables is in the components added to them and in the way they leverage the technological capabilities of the switch or other device to which they are connected.
ACCs include an equalizer that boosts signals received from the opposite end of the connection. As analog devices, ACC equalizers are relatively inexpensive compared to digital alternatives, consume minimal power and add very little latency.
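The equalizer in an ACC is an analog part, but a rough software analogy helps illustrate what boosting the signal means: a short feed-forward filter that re-emphasizes the high-frequency content a copper channel attenuates. The NumPy sketch below is purely conceptual; the toy channel and tap values are invented for illustration and do not describe any Marvell device.

```python
import numpy as np

# Conceptual analogy only: a 3-tap feed-forward equalizer partially undoes the
# low-pass behavior of a copper channel. All values are illustrative.
channel = np.array([0.25, 0.6, 0.25])      # toy channel pulse response (lossy copper)
eq_taps = np.array([-0.25, 1.5, -0.25])    # simple pre/post-cursor boost taps

equalized = np.convolve(channel, eq_taps)  # pulse response after equalization

def isi_ratio(pulse: np.ndarray) -> float:
    """Residual inter-symbol interference relative to the main cursor."""
    main = np.max(np.abs(pulse))
    return (np.sum(np.abs(pulse)) - main) / main

print("ISI/main before EQ:", round(isi_ratio(channel), 2))     # ~0.83
print("ISI/main after  EQ:", round(isi_ratio(equalized), 2))   # ~0.74
```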
By Khurram Malik, Senior Director of Marketing, Custom Cloud Solutions, Marvell
Near-memory compute technologies have always been compelling. They can offload tasks from CPUs to boost utilization and revenue opportunities for cloud providers. They can reduce data movement, one of the primary contributors to power consumption,1 while also increasing memory bandwidth for better performance.
They have also only been deployed sporadically; thermal problems, a lack of standards, cost and other issues have prevented many of these ideas from delivering the Goldilocks combination of features that would jumpstart commercial adoption.2
This picture is now changing with CXL compute accelerators, which leverage open standards, familiar technologies and a broad ecosystem. And, in a demonstration at OCP 2025, Samsung Electronics, software-defined composable solution provider Liqid, and Marvell showed how CXL accelerators can deliver outsized gains in performance.
The Liqid EX5410C is a demonstration of a CXL memory pooling and sharing appliance capable of scaling up to 20TB of additional memory. Five of the 4RU appliances can then be integrated into a pod for a whopping 100TB of memory and 5.1Tbps of additional memory bandwidth. The CXL fabric is managed by Liqid’s Matrix software, which enables real-time and precise memory deployment based on workload requirements.
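A quick back-of-the-envelope check of the pod figures, using only the numbers stated above (the per-appliance bandwidth is derived, not published):

```python
# Pod-level totals from the per-appliance and per-pod figures cited above.
appliances_per_pod = 5
capacity_per_appliance_tb = 20       # EX5410C capacity, as stated
pod_bandwidth_tbps = 5.1             # per pod, as stated

pod_capacity_tb = appliances_per_pod * capacity_per_appliance_tb
implied_bw_per_appliance = pod_bandwidth_tbps / appliances_per_pod

print(f"Pod capacity: {pod_capacity_tb} TB")                          # 100 TB
print(f"Implied bandwidth per appliance: ~{implied_bw_per_appliance:.2f} Tbps")
```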

By Vienna Alexander, Marketing Content Professional, Marvell

Marvell was named the top Connectivity winner in the 2025 LEAP Awards for its 1.6 Tbps LPO Optical Chipset. The judges' remarks noted that “the value case writes itself—less power, reduced complexity but substantial bandwidth increase.” Marvell earned the gold spot, reaffirming the industry-leading connectivity portfolio it continues to build.
The LEAP (Leadership in Engineering Achievement Program) Awards recognize best-in-class product and component designs across 11 categories with the feedback of an independent judging panel of experts. These awards are published by Design World, the trade magazine that covers design engineering topics in detail.
This chipset, combining a 200G/lane TIA (transimpedance amplifier) and laser drivers, enables 800G and 1.6T linear-drive pluggable optics (LPO) modules. LPO modules offer longer reach than passive copper, at low power and low latency, and are designed for scale-up compute-fabric applications.
By Chris McCormick, Product Management Director, Cloud Platform Group, Marvell
Co-packaged optics (CPO) will play a fundamental role in improving the performance, efficiency, and capabilities of networks, especially the scale-up fabrics for AI systems.
Realizing these benefits will also require a fundamental transformation in the way computing and switching assets are designed and deployed in data centers. Marvell is partnering with equipment manufacturers, cable specialists, interconnect companies and others to ensure the infrastructure for delivering CPO will be ready when customers are ready to adopt it.
The Trends Driving CPO
AI’s insatiable appetite for bandwidth and the physical limitations of copper are driving demand for CPO. Network bandwidth doubles every two to three years, and the reach of copper shrinks meaningfully as bandwidth increases. Meanwhile, data center operators are clamoring for better performance per watt and per rack.
CPO ameliorates the problem by moving the electrical-to-optical conversion from an external slot on the faceplate to a position as close to the ASIC as possible. This shortens the copper trace, which may improve the link budget enough to remove digital signal processor (DSP) or retimer functionality, thereby reducing the overall power per bit, a key metric in AI data center management. Achieving commercial viability and scalability, however, has taken years of R&D across the ecosystem, and the benefits will likely depend on the use cases and applications where CPO is deployed.
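Power per bit is simple to compute but central to the argument, so the sketch below shows the metric and how removing a DSP or retimer stage shifts it. The wattage values are placeholders chosen only to illustrate the calculation; they are not measurements of any product.

```python
# Illustrative only: how removing a DSP/retimer stage changes power per bit.
# Wattage values are placeholders, not product measurements.
def pj_per_bit(power_w: float, rate_tbps: float) -> float:
    return power_w / rate_tbps   # watts per Tbps is numerically pJ/bit

rate_tbps = 1.6        # link rate
optics_w = 8.0         # hypothetical optical engine power
dsp_w = 4.0            # hypothetical DSP/retimer power

print(f"With DSP:    {pj_per_bit(optics_w + dsp_w, rate_tbps):.1f} pJ/bit")  # 7.5
print(f"Without DSP: {pj_per_bit(optics_w, rate_tbps):.1f} pJ/bit")          # 5.0
```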
While analyst firms such as LightCounting predict that optical modules will continue to constitute the majority of optical links inside data centers through the decade,1 CPO will likely become a meaningful segment.
The CPO Server Tray
The image below shows a conceptualized AI compute tray with CPO developed with products from SENKO Advanced Components and Marvell. The design has room for four XPUs and up to 102.4 Tbps of bandwidth delivered through 1024 optical fibers, all in a 1U tray. The density and reach enabled by CPO open the door to scale-up domains far beyond what is possible with copper alone.
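The fiber count follows from simple arithmetic, shown below under the assumption, consistent with the 200G/lane parts discussed elsewhere in this roundup, that each lane runs at 200G and uses one transmit and one receive fiber; as noted below, the physical tray carries 1,152 fibers in total, more than this data-path minimum.

```python
# Arithmetic behind the tray's fiber count, assuming 200G lanes with one
# transmit and one receive fiber per lane (an assumption, not a stated spec).
tray_bandwidth_gbps = 102_400   # 102.4 Tbps
lane_rate_gbps = 200
fibers_per_lane = 2

lanes = tray_bandwidth_gbps // lane_rate_gbps   # 512 lanes
data_fibers = lanes * fibers_per_lane           # 1,024 fibers for the data path
print(lanes, data_fibers)                       # 512 1024
```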

When asked at recent trade shows how many fibers the tray contained, most attendees guessed around 250 fibers. The actual number is 1,152 fibers.