How Connectors Keep Data Centers Fast, Efficient, and Cool

Mouser Electronics White Papers

Issue link: https://resources.mouser.com/i/1538288


Artificial intelligence (AI) workloads in data centers are projected to grow 350 percent before the end of the decade.[1] That level of growth relies on more than just faster chips. It puts pressure on the entire infrastructure, spurring developers to rethink how computing power and data move through the system so that it can move faster and handle more without creating losses at the connection points.

Data centers can support this growth. Whether owned by a single enterprise or by the leading hyperscale providers, data centers house the extensive computing and storage systems required to power AI. However, their capabilities depend on the physical infrastructure that connects them. This paper focuses on the critical role of connectors in the latest data center applications, highlighting three key aspects of connector design: high-speed data, efficient power delivery, and effective thermal management.

Data Connections

AI is optimized for analyzing large datasets to uncover patterns and generate insights. Even if this were the limit of its capabilities, AI would still demand some of the most significant concentrations of computing power ever assembled. However, the development of AI systems is accelerating their expansion into autonomous decision-making and automation at a scale and speed beyond human capacity. This evolution is driving the rapid growth of the global data center market today.[2]

Modern data centers rely on a combination of components. Graphics processing units (GPUs) handle the massive parallel processing required for deep learning workloads, performing the large numbers of computations needed by deep learning models. These are often paired with AI accelerators, such as application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs), which are designed to perform specific tasks with low latency and high efficiency.
These computing elements depend on high-speed access to data. AI training requires fast-response storage (e.g., solid-state drives) and high-capacity arrays for bulk data handling. Rather than functioning as a monolithic entity, a data center relies on constant coordination among thousands of interconnected systems. The largest hyperscale data centers feature tens of thousands of GPUs[3] and depend on the free flow of information throughout the facility. Keeping data flowing efficiently between processors, memory, and storage without loss depends on connectors designed to meet these requirements.

Meeting the Standards

When it comes to defining high-speed data connections between GPUs, storage devices, and the Ethernet hardware that links them to the outside world, PCI Express® (PCIe) is the industry-wide standard. The PCIe specification governs both how data is configured and the physical layer, including card form factors and the connectors that link them.

Figure 1: PCIe Gen 6 card edge connectors support up to 64 GT/s and are SFF-TA-1016 compliant, with a new inner structure design but the same interface as the PCIe Gen 5 solution. (Source: Amphenol; edited by Mouser Electronics)
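To put the 64 GT/s figure in context, a short calculation shows the raw per-direction bandwidth a PCIe link provides at each generation. This is a minimal sketch: the function name and structure are illustrative, and the figures are raw line rates before protocol overhead (encoding, FLIT framing, and forward error correction all reduce usable throughput somewhat).

```python
# Raw (pre-overhead) PCIe link bandwidth per direction, by generation.
# Per-lane transfer rates in GT/s, as defined by the PCIe specifications;
# Gen 6 doubles Gen 5 to 64 GT/s, matching the card edge connectors above.
TRANSFER_RATE_GT_S = {3: 8, 4: 16, 5: 32, 6: 64}

def raw_bandwidth_gb_s(gen: int, lanes: int = 16) -> float:
    """Raw one-direction bandwidth in GB/s for a link of the given
    generation and lane count (1 GT carries ~1 bit; 8 bits per byte)."""
    return TRANSFER_RATE_GT_S[gen] * lanes / 8

print(raw_bandwidth_gb_s(6))  # Gen 6 x16: 128.0 GB/s per direction
print(raw_bandwidth_gb_s(5))  # Gen 5 x16: 64.0 GB/s per direction
```

The doubling at each generation is why connector design matters: a Gen 6 x16 slot must carry twice the signaling rate of its Gen 5 predecessor through the same mechanical interface.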