Data Center Innovations Reflect and Fuel the Internet's Hypergrowth
Electronics has always been a hotbed of virtuous cycles. In the 1980s, higher-capacity disk drives allowed software companies like Microsoft to create more powerful applications that required more storage, which in turn made even more powerful, more storage-hungry applications possible. As manufacturers like Intel® packed more and more processing power into their microprocessors, calculation-intensive applications such as graphics and video appeared to fill the processing headroom. And of course, there is the internet, which makes all previous virtuous cycles look modest by comparison.
It is not surprising that data centers—and more recently, cloud computing—are both profiting from and driving innovations in hardware technology at all levels within the data center. The arrival of cloud computing made faster processing, faster data retrieval, larger capacity storage, and quicker machine-to-machine communications more critical than ever.
The term “cloud computing” refers to the amorphous “cloud” shown in textbooks and on whiteboards that constitutes the complex computing and communications architecture behind the internet. Cloud computing had a false start in the 1990s when some vendors touted “thin clients,” stripped-down PCs that relied on remote networked services. Unfortunately, the infrastructure of the day was slow and not up to the task. The approach has since taken hold thanks to faster network communications and processing.
While the traditional data center generally kept the server, computing resources, applications, and storage in more or less the same physical location, cloud computing introduced a new paradigm of computing on a virtual machine, with parallelization and redundancy paramount. Different threads of a single application can run on multiple computers around the world, retrieving information from numerous network-attached storage (NAS) locations and sharing geographically diverse server resources as well as communications channels.
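As a purely illustrative sketch of that fan-out pattern, the Python snippet below issues the same read against several replicas in parallel and accepts whichever answer arrives first, masking a slow or failed location. The endpoint names and the fetch_block helper are hypothetical stand-ins, not part of any real cloud API.

```python
# Conceptual sketch only: concurrent reads from redundant, geographically
# distributed storage endpoints (endpoint names are hypothetical).
from concurrent.futures import ThreadPoolExecutor, as_completed

NAS_ENDPOINTS = [
    "nas-us-east.example.net",
    "nas-eu-west.example.net",
    "nas-ap-south.example.net",
]

def fetch_block(endpoint: str, block_id: int) -> bytes:
    """Stand-in for a real NAS read (e.g., an NFS or HTTP request)."""
    # A production client would issue a network request here.
    return f"{endpoint}:{block_id}".encode()

def read_with_redundancy(block_id: int) -> bytes:
    # Ask every replica in parallel and return the first answer that arrives;
    # the remaining results are discarded, hiding a slow or failed location.
    with ThreadPoolExecutor(max_workers=len(NAS_ENDPOINTS)) as pool:
        futures = [pool.submit(fetch_block, ep, block_id) for ep in NAS_ENDPOINTS]
        for future in as_completed(futures):
            try:
                return future.result()
            except OSError:
                continue  # try the next replica
    raise RuntimeError("all replicas failed")

print(read_with_redundancy(42))
```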
The prevalence of long-haul fiber-optic communications, to a large extent, made cloud computing feasible. However, the same throughput bottlenecks that have bothered engineers for decades—subsystem-to-subsystem interconnects and chip-to-chip interfaces—are still with us.
Data Center Technologies
There is plenty of room for hardware innovation in the server part of the data center market. Because even an incremental gain in hardware performance can deliver a significant market advantage, a new, more powerful server family is never long in coming. The rewards for staying in front of the pack are great because vast quantities of servers are deployed in each new data center. To a lesser extent, new storage and communications technologies also feed the virtuous cycle.
While servers may be the core technology of data centers, energy consumption, communications, and seemingly mundane technologies such as air conditioning contribute to the overall market size in terms of dollars. A single data center may consume more than 10MW. According to the United States Data Center Energy Usage Report from the Ernest Orlando Lawrence Berkeley National Laboratory, American data centers account for approximately 2 percent of the country’s total electricity consumption. That percentage is forecast to grow as more data—and in particular, more video—is consumed by individuals and corporations. The amount of energy consumed by data centers is set to keep growing at a rate of 12 percent per year, according to the Data Center Cooling Market - Growth, Trends, and Forecast (2019–2024) report.
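To put the growth figure in perspective, a quick back-of-the-envelope projection (assuming, purely for illustration, a 10MW facility growing at the 12 percent annual rate cited above) shows the load roughly doubling in about six years.

```python
# Simple compound-growth projection using the figures cited above as inputs.
facility_mw = 10.0      # illustrative facility load, MW
annual_growth = 0.12    # 12 percent per year

for year in range(1, 7):
    facility_mw *= 1 + annual_growth
    print(f"Year {year}: {facility_mw:.1f} MW")
# After six years the load is roughly 19.7 MW, nearly double the starting figure.
```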
The perennial need for speed, higher density storage, and a stronger emphasis on energy efficiency are making the design of server motherboards among the most challenging in electronics today. The block diagram shows the main components on a typical server motherboard (Figure 1).
Figure 1: The block diagram depicts a typical server motherboard architecture. (Source: Mouser Electronics)
Microprocessors are the core of server design and typically account for a large percentage of the motherboard’s cost and energy consumption. Doing more with less is the obvious answer. The design challenge continues to be how to implement more efficient multiprocessing and multithreading. For microprocessors, “more efficient” has come to mean not just more MIPS per chip but more MIPS per watt. The most common technique for implementing multiprocessing is to design chips with multiple embedded processing cores. This enables the software multithreading that allows the server to execute more than one stream of code simultaneously. Without it, cloud computing probably would not be a reality. Multithreading also reduces energy consumption.
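The Python sketch below illustrates the principle in software terms: a CPU-bound workload is split into independent chunks and handed to one worker process per core, so each core executes its own stream of work at the same time. The busy_work function is an arbitrary stand-in for real server code.

```python
# Illustrative only: spreading a CPU-bound workload across all available cores.
from multiprocessing import Pool, cpu_count

def busy_work(n: int) -> int:
    """Stand-in for one independent stream of computation."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * cpu_count()           # one chunk of work per core
    with Pool(processes=cpu_count()) as pool:  # one worker process per core
        results = pool.map(busy_work, jobs)    # chunks execute simultaneously
    print(f"{len(results)} chunks completed on {cpu_count()} cores")
```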
Memory and Interconnects
As with any computing system, there is a spectrum of innovations that can be brought to bear on the bottleneck between the processor, memory, and other I/O. Fully buffered DIMMs (FBDIMMs), for example, reduce latency, resulting in shorter read/write access times.
The FBDIMM memory architecture replaces multiple parallel interconnects with a single serial interconnect. The architecture includes an advanced memory buffer (AMB) between the memory controller and the memory module. Instead of writing directly to the memory module (the DIMMs in Figure 1), the controller interfaces with the AMB, which compensates for signal deterioration by buffering and resending the signal. The AMB also executes error correction without imposing any additional overhead on the processor or the system's memory controller.
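A simplified software model can make the division of labor clear: the memory controller talks only to the buffer, and the buffer handles check bits and error detection on its own, keeping that overhead off the processor. Everything in the sketch below (the class name, the single-parity-bit scheme) is a teaching simplification, not a description of how a real AMB is built.

```python
# Conceptual model only: a buffer that sits between controller and DIMM,
# handling check bits so the CPU and memory controller never see them.
class AdvancedMemoryBuffer:
    def __init__(self, dimm: dict):
        self.dimm = dimm  # address -> (value, parity); stand-in for a DIMM

    @staticmethod
    def _parity(value: int) -> int:
        return bin(value).count("1") & 1

    def write(self, addr: int, value: int) -> None:
        # The controller hands the write to the buffer; the buffer stores the
        # data together with a check bit (here, a single parity bit).
        self.dimm[addr] = (value, self._parity(value))

    def read(self, addr: int) -> int:
        value, parity = self.dimm[addr]
        if self._parity(value) != parity:
            # A real AMB would correct the error with ECC and resend the data;
            # this toy model only detects it, to show where the work happens.
            raise ValueError(f"bit error detected at 0x{addr:X}")
        return value

amb = AdvancedMemoryBuffer(dimm={})
amb.write(0x1000, 0xBEEF)
print(hex(amb.read(0x1000)))
```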
In Figure 1, the orange blocks indicate interface standards. PCI Express, HyperTransport, Serial ATA, SAS, and USB are high-speed interfaces. The choice of interface depends on the particular use case on the server motherboard. The chosen interface must be capable of delivering data fast enough to keep pace with the processing power of the server subsystem.
Signal conditioning with re-driver chips plays a vital role in satisfying the processor’s appetite for data. Faster signal frequencies leave less signal margin for designing reliable, high-performance systems, so re-drivers (also known as repeater ICs) mediate the connection between the interface device and the CPU.
Re-drivers regenerate signals to enhance the signal quality using equalization, pre-emphasis, and other signal conditioning technologies. A single re-driver can adjust and correct for known channel losses at the transmitter and restore signal integrity at the receiver. The result is an eye pattern at the receiver with the margins required to deliver reliable communications with low bit-error rates.
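A toy numerical example conveys the idea behind pre-emphasis: boost the signal at every transition so that the channel's low-pass loss is partially cancelled and the received eye opens back up. The channel model and filter taps below are arbitrary values chosen only to make the effect visible, not figures from any re-driver datasheet.

```python
# Toy illustration of transmit pre-emphasis; the tap weights and channel
# model are arbitrary assumptions, not values from a real datasheet.
import numpy as np

bits = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 1], dtype=float)
symbols = 2 * bits - 1                      # NRZ levels: -1 / +1

# Lossy channel approximated as a simple moving-average (low-pass) filter.
channel = np.array([0.5, 0.3, 0.2])

# 2-tap FIR pre-emphasis: full swing on the current symbol, a small negative
# weight on the previous one, which boosts the signal at every transition.
pre_emphasis = np.array([1.0, -0.25])

plain = np.convolve(symbols, channel)[: len(symbols)]
boosted = np.convolve(np.convolve(symbols, pre_emphasis), channel)[: len(symbols)]

# Worst-case eye opening: smallest distance from the decision threshold at 0.
print("eye margin without pre-emphasis:", np.min(np.abs(plain)).round(3))
print("eye margin with pre-emphasis:   ", np.min(np.abs(boosted)).round(3))
```

With these particular values, the unequalized waveform grazes the decision threshold while the pre-emphasized one keeps a visible margin, which is exactly the improvement a re-driver's conditioning is meant to provide.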
Storage
Without readily available data—and lots of it—the cloud does not get very far off the ground. To satisfy that appetite, an evolution toward the globalization and virtualization of storage is well underway. From a hardware perspective, this means the ongoing convergence of storage area networks (SANs) and network-attached storage (NAS). While either technology will work in many situations, their chief difference lies in their protocols: NAS uses TCP/IP and HTTP, while SAN uses SCSI encapsulated over Fibre Channel.
From a hardware perspective, both use a redundant array of independent disks (RAID), a storage technology that distributes data across multiple drives; the RAID level chosen depends on the degree of redundancy and performance required. Controller chips for data-center-class servers must be RAID compliant and are considered sufficiently “advanced” technology to require an export license to ship outside the United States.
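The sketch below illustrates the idea behind parity-based RAID levels (RAID 4/5): data blocks are striped across drives, an XOR parity block is stored alongside them, and any single lost block can be rebuilt from the survivors. It is a conceptual illustration only, not a controller implementation.

```python
# Conceptual illustration of striping with XOR parity (the RAID 4/5 idea):
# any single missing block can be rebuilt from the surviving blocks.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

# One stripe of data split across three data drives, each block 4 bytes.
data_blocks = [b"DATA", b"CENT", b"ER01"]
parity_block = xor_blocks(data_blocks)          # stored on a fourth drive

# Simulate losing drive 1 and rebuilding its block from the survivors.
surviving = [data_blocks[0], data_blocks[2], parity_block]
rebuilt = xor_blocks(surviving)

assert rebuilt == data_blocks[1]
print("rebuilt block:", rebuilt)
```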
Despite their differences, SAN and NAS may be combined into a hybrid system that offers both file-level protocols (NAS) and block-level protocols (SAN). The trend toward storage virtualization, which abstracts logical storage from physical storage, is making the distinction between SAN and NAS less and less critical.
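As a minimal sketch of that abstraction, the snippet below presents a single logical volume to callers while quietly mapping each logical block to whichever (hypothetical) physical device holds it; the device names and round-robin placement policy are assumptions made purely for illustration.

```python
# Minimal sketch of storage virtualization: callers see one logical volume,
# while blocks actually live on different (hypothetical) physical devices.
class LogicalVolume:
    def __init__(self, devices):
        self.devices = devices          # name -> bytearray standing in for a disk
        self.mapping = {}               # logical block -> (device, offset, length)

    def write_block(self, lba: int, data: bytes) -> None:
        # Trivial placement policy: round-robin across devices.
        name = list(self.devices)[lba % len(self.devices)]
        offset = len(self.devices[name])
        self.devices[name].extend(data)
        self.mapping[lba] = (name, offset, len(data))

    def read_block(self, lba: int) -> bytes:
        name, offset, length = self.mapping[lba]
        return bytes(self.devices[name][offset:offset + length])

vol = LogicalVolume({"san-array-0": bytearray(), "nas-share-0": bytearray()})
vol.write_block(0, b"file data")
vol.write_block(1, b"block data")
print(vol.read_block(1))
```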
Conclusion
As cloud computing becomes pervasive, the traditional data center, which located all significant components in the same physical space, has been evolving toward its next generation, in which redundancy and high-speed communications are more critical than ever. Although silicon photonics holds the promise of speed-of-light data transfers between subsystems in the future, today’s design engineers have to optimize every aspect of the server, storage, communications, and computing technology.
Advances continue to be made in multicore processors that implement multithreaded computing architectures. Solving signal-integrity issues remains an ongoing challenge, one that technologies such as re-drivers help address. Meanwhile, cloud computing has driven significant changes in storage technology, both in capacity and in architecture.