By Michael Kanellos, Head of Influencer Relations, Marvell
The semiconductor market is vastly different from what it was a few years ago. Cloud service providers want custom silicon and to collaborate with partners on designs. Chiplets and 3D devices, long discussed in the future tense, are a growing sector of the market. Moore’s Law? It’s still alive, but manufacturers and designers are following it by different means than simply shrinking transistors.
And by sheer coincidence, many of the forces propelling these changes happened in the same year: 2006.
The Magic of Scaling Slows
While Moore’s Law has slowed, it is still alive; semiconductor companies can still shrink transistors at a somewhat predictable cadence.
The benefits, however, changed. With so-called “Dennard Scaling,” chip designers could increase clock speed, reduce power—or both—with transistor shrinks. In practical terms, it meant that PC makers, phone designers and software developers could plan on a steady stream of hardware advances.
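For readers who want the arithmetic behind that benefit, here is the textbook form of Dennard scaling (a standard derivation, not something presented at the symposium): shrink every dimension and the supply voltage by a factor κ > 1 and clock frequency can rise by κ while power density stays flat.

```latex
% Classic Dennard scaling with a shrink factor \kappa > 1:
\[
\begin{aligned}
&L \to L/\kappa, \qquad V \to V/\kappa, \qquad C \to C/\kappa, \qquad f \to \kappa f,\\
&P_{\mathrm{dyn}} = C V^{2} f \;\to\; \frac{C}{\kappa}\cdot\frac{V^{2}}{\kappa^{2}}\cdot\kappa f
  = \frac{C V^{2} f}{\kappa^{2}},\\
&\frac{P_{\mathrm{dyn}}}{\mathrm{Area}} \;\to\;
  \frac{P_{\mathrm{dyn}}/\kappa^{2}}{\mathrm{Area}/\kappa^{2}}
  = \frac{P_{\mathrm{dyn}}}{\mathrm{Area}} \quad \text{(unchanged)}.
\end{aligned}
\]
```

Once supply voltage could no longer scale down with each node, that last line broke: packing more transistors into the same area started to mean more heat in the same area.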
Dennard Scaling effectively stopped in 2006. New technologies for keeping the hamster wheel spinning needed to be found, and fast.
This article is the final installment in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024.
AI demands are pushing the limits of semiconductor technology, and hyperscale operators are at the forefront of adoption—they develop and deploy leading-edge technology that increases compute capacity. These large operators seek to optimize performance while simultaneously lowering total cost of ownership (TCO). With billions of dollars on the line, many have turned to custom silicon to meet their TCO and compute performance objectives.
But building a custom compute solution is no small matter. Doing so requires a large IP portfolio, significant R&D scale and decades of experience to create the mix of ingredients that make up custom AI silicon. Today, Marvell is partnering with hyperscale operators to deliver custom compute silicon that’s enabling their AI growth trajectories.
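To make the TCO argument concrete, here is a deliberately simplified sketch; every number in it is a hypothetical placeholder rather than a Marvell or hyperscaler figure, but it shows how even modest per-unit gains compound into the billions at fleet scale.

```python
# Hypothetical, illustrative TCO comparison of merchant vs. custom accelerators.
# All figures below are made-up placeholders for the arithmetic only.

def tco(unit_cost, watts, units, years, dollars_per_kwh=0.08):
    """Total cost of ownership: purchase cost plus energy cost over the period."""
    capex = unit_cost * units
    energy_kwh = (watts / 1000) * 24 * 365 * years * units
    opex = energy_kwh * dollars_per_kwh
    return capex + opex

merchant = tco(unit_cost=20_000, watts=700, units=100_000, years=4)
custom   = tco(unit_cost=15_000, watts=550, units=100_000, years=4)

print(f"merchant: ${merchant/1e9:.2f}B, custom: ${custom/1e9:.2f}B, "
      f"savings: ${(merchant - custom)/1e9:.2f}B")
```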
Why are hyperscale operators turning to custom compute?
Hyperscale operators have always been focused on maximizing both performance and efficiency, but new demands from AI applications have amplified the pressure. According to Raghib Hussain, president of products and technologies at Marvell, “Every hyperscaler is focused on optimizing every aspect of their platform because the order of magnitude of impact is much, much higher than before. They are not only achieving the highest performance, but also saving billions of dollars.”
With multiple business models in the cloud, including internal apps, infrastructure-as-a-service (IaaS), and software-as-a-service (SaaS)—the latter of which is the fastest-growing market thanks to generative AI—hyperscale operators are constantly seeking ways to improve their total cost of ownership. Custom compute allows them to do just that. Operators are first adopting custom compute platforms for their mass-scale internal applications, such as search and their own SaaS applications. Next up for greater custom adoption will be third-party SaaS and IaaS, where the operator offers their own custom compute as an alternative to merchant options.
Progression of custom silicon adoption in hyperscale data centers.
Aaron Thean points to a slide featuring the downtown skylines of New York, Singapore and San Francisco along with a prototype of a 3D processor and asks, “Which one of these things is not like the other?”
The answer? While most gravitate to the processor, San Francisco is a better answer. With a population well under 1 million, the city’s internal transportation and communications systems don’t come close to the level of complexity, performance and synchronization required by the other three.
With future chips, “we’re talking about trillions of transistors on multiple substrates,” said Thean, the deputy president of the National University of Singapore and the director of SHINE, an initiative to expand Singapore’s role in the development of chiplets, during a one-day summit sponsored by Marvell and the university.
This article is part five in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024.
AI has fundamentally changed the network switching landscape. AI requirements are driving foundational shifts in the industry roadmap, expanding the use cases for cloud switching semiconductors and creating opportunities to redefine the terrain.
Here’s how AI will drive cloud switching innovation.
A changing network requires a change in scale
In a modern cloud data center, the compute servers are connected to one another and to the internet through a network of high-bandwidth switches. The approach is like that of the internet itself, allowing operators to build a network of any size while mixing and matching products from various vendors to create a network architecture specific to their needs.
Such a high-bandwidth switching network is critical for AI applications, and a higher-performing network can lead to a more profitable deployment.
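A back-of-the-envelope sketch of that building-block approach: given a single switch radix (the 64-port figure below is an assumption for illustration, not a specific product), a two-tier leaf-spine fabric scales roughly as follows.

```python
# Sizing a non-blocking two-tier (leaf-spine) fabric from one switch type.
# The 64-port radix is an illustrative assumption, not a product specification.

def leaf_spine_capacity(radix: int) -> dict:
    """Leaves, spines and server ports for a non-blocking two-tier Clos."""
    leaf_down = radix // 2          # server-facing ports per leaf switch
    leaf_up = radix - leaf_down     # uplinks per leaf, one to each spine
    spines = leaf_up                # spine count equals uplinks per leaf
    leaves = radix                  # each spine port terminates one leaf
    return {"leaves": leaves, "spines": spines, "server_ports": leaves * leaf_down}

print(leaf_spine_capacity(64))  # {'leaves': 64, 'spines': 32, 'server_ports': 2048}
```

Doubling the radix of the building block roughly quadruples the number of server ports a two-tier fabric can offer, which is why switch bandwidth is such a powerful lever for cloud operators.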
However, expanding and extending the general-purpose cloud network to AI isn’t quite as simple as just adding more building blocks. In the world of general-purpose computing, one or more workloads can fit on a single server CPU. In contrast, AI’s large datasets don’t fit on a single processor, whether it’s a CPU, GPU or other accelerated compute device (XPU), making it necessary to distribute the workload across multiple processors. These accelerated processors must then function as a single computing element.
AI requires accelerated infrastructure to split workloads across many processors.
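A minimal sketch of why a single device is not enough (the model size and per-XPU memory below are illustrative assumptions, not figures from the talk):

```python
# Why AI workloads must be sharded: the weights of a large model alone can
# exceed one accelerator's memory, forcing the job across many XPUs that the
# network must then stitch into a single computing element.
# All numbers are illustrative assumptions.
import math

params = 1_000_000_000_000          # hypothetical 1-trillion-parameter model
bytes_per_param = 2                 # 16-bit weights
xpu_memory_bytes = 96 * 1024**3     # assumed 96 GB of memory per XPU

weights_bytes = params * bytes_per_param
min_xpus = math.ceil(weights_bytes / xpu_memory_bytes)

print(f"weights alone: {weights_bytes / 1024**4:.1f} TiB, "
      f"minimum XPUs just to hold them: {min_xpus}")
# In practice the count is far higher once activations, optimizer state and
# redundancy are added -- hence the need for a high-bandwidth switch fabric.
```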
This article is part four in a series on talks delivered at Accelerated Infrastructure for the AI Era, a one-day symposium held by Marvell in April 2024.
Silicon photonics—the technology of manufacturing the hundreds of components required for optical communications with CMOS processes—has been employed to produce coherent optical modules for metro and long-distance communications for years. The increasing bandwidth demands brought on by AI are now opening the door for silicon photonics to come inside data centers to enhance their economics and capabilities.
What’s inside an optical module?
As the previous posts in this series noted, critical semiconductors like digital signal processors (DSPs), transimpedance amplifiers (TIAs) and drivers used in optical modules have steadily improved in performance and efficiency with each new generation of chips, thanks to Moore’s Law and other factors.
The same is not true for optics. Modulators, multiplexers, lenses, waveguides and other devices for managing light impulses have historically been delivered as discrete components.
“Optics pretty much uses piece parts,” said Loi Nguyen, executive vice president and general manager of cloud optics at Marvell. “It is very hard to scale.”
Lasers have been particularly challenging, with module developers forced to choose among a wide variety of technologies. Electro-absorption-modulated lasers (EMLs) are currently the only commercially viable option capable of meeting the 200 Gbps speeds necessary to support AI models. Often used for longer links, EML is the laser of choice for 1.6T optical modules. Not only is fab capacity for EML lasers constrained, but they are also incredibly expensive. Together, these factors make it difficult to scale at the rate needed for AI.
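That 200 Gbps figure is a per-lane rate; a quick arithmetic check, assuming the common eight-lane module configuration, shows why the lane speed is the gating item for 1.6T optics.

```python
# Module bandwidth is the per-lane rate times the lane count; eight lanes is
# the commonly assumed configuration for 1.6T modules.
lanes = 8
gbps_per_lane = 200
module_gbps = lanes * gbps_per_lane
print(f"{lanes} lanes x {gbps_per_lane}G per lane = {module_gbps}G, i.e. a 1.6T module")
```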