
Comparing Power Density and Power Efficiency

Image Source: Olivier Le Moal/Stock.adobe.com

By Mark Patrick for Mouser Electronics

Published June 2, 2023

Often the choice of power supply is made based on a single efficiency figure on the datasheet, and manufacturers are doing all they can to drive this number up, including defining the measurement conditions ever more carefully. Designers are coming up with more sophisticated topologies, such as phase-shifted full bridges (PSFBs) and LLC converters, and at the component level, MOSFETs supplant diodes to reduce losses. Even silicon is being challenged as wide bandgap (WBG) materials such as silicon carbide (SiC) and gallium nitride (GaN) promise enhanced performance, even at high switching speeds.

The exact efficiency figure on a power supply datasheet means relatively little to end users. They care more about system or process efficiency, and about meeting (or exceeding) their environmental obligations and financial goals. There is a growing realisation that, viewed over the lifetime of a system, there is more to supporting the environment (and controlling costs) than the headline efficiency figure. At the same time, because real estate costs money to buy and maintain, end users are very focused on packing as much revenue-generating equipment into their space as possible. So for them, power density is often more valuable than efficiency.

This article looks at power density and efficiency in detail, considering what it costs to drive higher efficiency as well as to buy high-performance power solutions and ultimately dispose of them responsibly. It contrasts this with an approach focused on increasing power density, and shows how that can enhance system-level efficiency. The article also considers whether heat management, rather than overall power conversion efficiency, should be the focus.

The Concept of Efficiency

Efficiency is a concept that is easy to understand: Surely the closer to 100% you get, the better everything is? But it is all about how efficiency is thought of; in an office or a data centre, no useful work (in physics terms) is done—no big machinery is moved—so we could consider these places 0% efficient because all the power used eventually goes into heat in the computers, servers, storage, and power conversion.

If, however, you compared revenue efficiency (that is, dollar revenues compared to the dollar cost of electricity), then the figure could reach 1000%. So, for business performance and success, the goal should be to keep electricity costs as low as possible by reducing the amount of electricity used per unit of output.

Every data centre manager is challenged to increase processing and storage capacity as well as revenue generation and profit. To do this, they must keep electricity costs in check and ensure acquisitions pay back quickly. As servers are added, the electricity costs (as well as the ability to earn revenue) rise, and this ratio of revenue to cost is defined in part by the equipment selection.

In a factory, the only valid reason to add another powerful motor is to produce more saleable output, so the motor drive and associated power supply are simply overhead costs that add no commercial value as such. Therefore, all operating expenses (including electricity) associated with running the motor are seen as a drain on the bottom line. Efficiency is important, but only in the context of doing the necessary work while using as little electricity as possible.

Losses Are Important Everywhere

Electronics design is full of formulae (for example, efficiency equals power out divided by power in, as a percentage, and losses equal power in minus power out). However, context such as power levels and the operating and environmental conditions is needed to make these formulae meaningful. Even with a defined formula, power supply manufacturers can select the best conditions, making the efficiency appear better than it will be in real-world conditions.
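To make those formulae concrete, here is a minimal Python sketch that computes efficiency and losses from a pair of measured powers. The function names and the 1000W/1042W operating point are purely illustrative and are not taken from any particular supply.

```python
def efficiency_pct(p_out_w: float, p_in_w: float) -> float:
    """Efficiency = power out divided by power in, expressed as a percentage."""
    return 100.0 * p_out_w / p_in_w


def losses_w(p_out_w: float, p_in_w: float) -> float:
    """Losses = power in minus power out, in watts."""
    return p_in_w - p_out_w


# Hypothetical measurement: 1000W delivered to the load, 1042W drawn from the mains
p_out, p_in = 1000.0, 1042.0
print(f"Efficiency: {efficiency_pct(p_out, p_in):.1f}%")  # ~96.0%
print(f"Losses: {losses_w(p_out, p_in):.1f}W")            # 42.0W
```

The same calculation applies at any operating point, which is why the conditions under which a datasheet figure was measured matter so much.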

Often, efficiency is specified close to full load, but few systems (especially in redundant applications) run at this level for any length of time and, away from the ‘sweet spot’, efficiency can be much lower. Generally, efficiency falls off significantly towards zero load, and the way this happens is different for each power supply, so the energy consumed when a server is idling can differ by an order of magnitude (or more) depending on the supply chosen.

In Figure 1, at 5% load, the converter represented by the blue line dissipates more than three times as much power as the converter represented by the orange line. Light-load losses should therefore be a focus during selection, as they make a significant difference to the total energy drawn.

Figure 1: Different power supplies will exhibit very different low-load efficiency. (Source: Mouser Electronics)
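The exact curves of Figure 1 are not reproduced here, but the short sketch below shows how a gap in 5%-load efficiency translates into annual energy dissipated by a mostly idle server. The two supplies and their 60% and 85% light-load efficiencies are assumed values for illustration, not data read from the chart.

```python
# Two hypothetical 1kW-rated supplies idling at 5% load (50W); the efficiency
# values below are assumptions for illustration, not data read from Figure 1.
P_LOAD_W = 50.0
HOURS_PER_YEAR = 8760


def annual_loss_kwh(load_w: float, efficiency: float) -> float:
    """Energy dissipated inside the supply over a year at a constant load."""
    loss_w = load_w / efficiency - load_w
    return loss_w * HOURS_PER_YEAR / 1000.0


for name, eff in [("Supply A (poor light-load efficiency, 60%)", 0.60),
                  ("Supply B (good light-load efficiency, 85%)", 0.85)]:
    print(f"{name}: {annual_loss_kwh(P_LOAD_W, eff):.0f} kWh lost per year")
# Supply A: ~292 kWh/year, Supply B: ~77 kWh/year -- almost a 4x difference
```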

Recognising the importance of low-load efficiency, standards such as the ‘80 PLUS’ initiative (Table 1) have been developed to stipulate minimum efficiencies through the load range. 80 PLUS Titanium is the toughest specification, requiring at least 94% efficiency at 50% load and 90% at 10% load (based on a 115V system). For a 230V system, the requirement at 50% load changes to 96% while a 10% load still requires 90%.

Table 1: Summary of 80 PLUS requirements for 115V systems. (Source: Mouser)

80 PLUS Cert.    | 115V Internal Non-redundant (% of rated load) | 115V Industrial (% of rated load)
                 | 10% | 20% | 50% | 100%                        | 10% | 20% | 50% | 100%
80 PLUS          | -   | 80% | 80% | 80%/PFC 0.9                 | -   | -   | -   | -
80 PLUS Bronze   | -   | 82% | 85%/PFC 0.9 | 82%                 | -   | -   | -   | -
80 PLUS Silver   | -   | 85% | 88%/PFC 0.9 | 85%                 | 80% | 85%/PFC 0.9 | 88% | 85%
80 PLUS Gold     | -   | 87% | 90%/PFC 0.9 | 87%                 | 82% | 87%/PFC 0.9 | 90% | 87%
80 PLUS Platinum | -   | 90% | 92%/PFC 0.95 | 89%                | 85% | 90%/PFC 0.95 | 92% | 90%
80 PLUS Titanium | 90% | 92%/PFC 0.95 | 94% | 90%                | -   | -   | -   | -

Meeting the requirements of 80 PLUS is challenging, especially at the higher levels that were introduced after the certification scheme was launched in 2004. The basic level required 80% efficiency at 50% load, while achieving the Titanium level (94% at the same load point) implies reducing losses by about three quarters.

This is an increase of just 14 percentage points, but a 1kW power converter would need to reduce its losses from 250W to 64W. Clearly, tweaking an existing topology or design will not achieve this, and the industry has responded with innovative approaches. For example, diodes have been replaced with synchronously driven MOSFETs. Additionally, PSFB and LLC resonant topologies have been introduced to limit switching losses, and new WBG materials allow for lower losses when raising the switching frequency.
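The loss arithmetic above can be checked with a brief sketch, assuming the converter delivers its full rated 1kW (the same simplification the 250W and 64W figures imply) and that losses are referenced to the output power.

```python
def losses_for_output(p_out_w: float, efficiency: float) -> float:
    """Dissipation in a converter delivering p_out_w at a given efficiency."""
    return p_out_w / efficiency - p_out_w


P_OUT_W = 1000.0  # 1kW converter, assumed fully loaded
loss_basic = losses_for_output(P_OUT_W, 0.80)     # 80 PLUS basic level
loss_titanium = losses_for_output(P_OUT_W, 0.94)  # 80 PLUS Titanium at 50% load

print(f"Losses at 80% efficiency: {loss_basic:.0f}W")     # 250W
print(f"Losses at 94% efficiency: {loss_titanium:.0f}W")  # ~64W
print(f"Loss reduction: {100 * (1 - loss_titanium / loss_basic):.0f}%")  # ~74%
```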

With many converters requiring two-stage conversion (e.g., power factor correction (PFC) followed by DC-DC), the efficiency of each stage must be even higher, because stage efficiencies multiply: an overall target of 96%, for example, requires each of two stages to be around 98% efficient. The input mains bridge rectifier has changed from four diodes into a network of MOSFETs that enhances the efficiency of the PFC stage.

As these technologies are new, they can be expensive, and there is a risk associated with anything that does not (yet) have years of field-proven reliability. Nonetheless, there remains an incessant demand for ever-higher efficiency figures, moving towards 99% and beyond.

1%: A Little or a Lot?

As efficiency gets higher, every small increase becomes correspondingly more difficult. Moving from 97% to 98% requires reducing losses by a third. Tougher still, moving from 98% to 99% implies reducing losses by a further half.

This 50% reduction would almost certainly demand a total redesign based on more complex techniques and high-priced components, with a significant amount of design time and risk. A 1kW supply dissipates 20.4W at 98% efficiency; moving this to 99% reduces the loss to 10.1W (Figure 2). Saving just 10.3W therefore carries very significant cost implications, both in design effort and in the eventual BOM.

Figure 2: Losses versus efficiency in a 1kW power converter. (Source: Mouser Electronics)

One could say that all energy savings are worth having, but this may not be entirely true when you look at the bigger picture. In the U.S., industry pays about $0.165 per kilowatt-hour. Over a five-year lifespan for a 1kW power supply at 100% uptime, the 10.3W reduction in losses saves about $74, while the electricity drawn by the supply and its load over the same period costs over $7,300.
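A minimal sketch of that lifetime calculation is shown below, reusing the loss formula from earlier and the electricity price and 100% uptime assumption quoted above.

```python
def losses_for_output(p_out_w: float, efficiency: float) -> float:
    """Dissipation in a converter delivering p_out_w at a given efficiency."""
    return p_out_w / efficiency - p_out_w


P_OUT_W = 1000.0        # 1kW supply, fully loaded
PRICE_PER_KWH = 0.165   # USD, the figure quoted above
HOURS = 5 * 8760        # five years at 100% uptime

saving_w = losses_for_output(P_OUT_W, 0.98) - losses_for_output(P_OUT_W, 0.99)
saving_usd = saving_w * HOURS / 1000.0 * PRICE_PER_KWH
bill_usd = (P_OUT_W + losses_for_output(P_OUT_W, 0.98)) * HOURS / 1000.0 * PRICE_PER_KWH

print(f"Loss reduction: {saving_w:.1f}W")              # ~10.3W
print(f"Five-year saving: ${saving_usd:.0f}")          # ~$74
print(f"Five-year electricity bill: ${bill_usd:.0f}")  # ~$7,370 at 98% efficiency
```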

There are a lot of management costs in specifying, purchasing, and qualifying a new power supply, in addition to the disposal costs for obsolete equipment. A price must also be put upon the risks associated with making the change. It is highly doubtful that any analysis could show that saving $74 could even begin to cover all of these costs, except (possibly) in installations where many thousands of such power supplies were used. ‘Efficiency for efficiency’s sake’ is rarely a solid business strategy.

Should We Worry about Heat?

The extent to which a business must consider heat from a power supply depends on the source of the electrical power. If the energy consumed by end equipment and HVAC systems is generated from fossil fuels (e.g., coal, gas), then there will be an impact on global warming and pollution. Even ‘clean’ nuclear power plants push heat into the ambient environment, as their thermal efficiency is generally only about 33%.

Enhancing efficiency is clearly a good thing, but even in hot regions of the world, people generate heat in boilers, showers, baths, washing machines, dryers, and more. It seems counterintuitive that designers strive to save a few tens of watts while someone runs a multi-kilowatt clothes dryer for hours in the next building. Addressing this anomaly, cogeneration schemes or Combined Heat and Power (CHP) can harvest and channel waste industrial heat for positive use within local communities.

An early example of this was Thomas Edison’s first Pearl Street Station power plant in 1882. A similar principle is used within the IBM-built data centre at Syracuse University in New York and, while not yet commonplace, the approach could be adopted more widely in industry. As operators tend to migrate data centres to colder climates where ambient air can be used for cooling, the heat (if correctly channelled) can be very useful, especially where electricity is cheap from hydro or geothermal sources (such as in Norway or Iceland).

Heat Impacts Reliability

It is worth reducing power supply losses, as this lowers internal temperatures and improves predicted lifetime and reliability. However, this only holds if the case and cooling remain unchanged. A widely used rule of thumb is that the lifetime of electronics halves for every 10°C rise in ambient temperature. Additionally, many reliability handbooks will tell you that the semiconductor failure rate increases by around 25%, and the capacitor failure rate by about 50%, for the same rise in temperature.
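As a rough illustration, the sketch below applies those rules of thumb naively, assuming the quoted per-10°C factors compound across the whole temperature rise; this is a simplification for illustration, not a statement about any specific component.

```python
def lifetime_factor(delta_t_c: float) -> float:
    """Rule of thumb: lifetime halves for every 10 degC rise in ambient temperature."""
    return 0.5 ** (delta_t_c / 10.0)


def failure_rate_factor(delta_t_c: float, increase_per_10c: float) -> float:
    """Failure-rate multiplier, e.g. +25% per 10 degC for semiconductors."""
    return (1.0 + increase_per_10c) ** (delta_t_c / 10.0)


RISE_C = 20.0  # hypothetical rise above the baseline ambient
print(f"Relative lifetime: {lifetime_factor(RISE_C):.2f}x")                     # 0.25x
print(f"Semiconductor failure rate: {failure_rate_factor(RISE_C, 0.25):.2f}x")  # ~1.56x
print(f"Capacitor failure rate: {failure_rate_factor(RISE_C, 0.50):.2f}x")      # 2.25x
```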

Modern technology is generally very reliable and durable. Even with these figures, reliability remains high, but there is a thermal effect that should be recognised and understood. The industry will generally try to maintain an inlet temperature of around 21°C in data centres, but research by Intel and others has demonstrated that an increase does not have a significant impact on system reliability. A report by APC quoting the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) predicts just a 1.5-times increase in overall equipment failure rate for an inlet air temperature increase from 20°C to 32°C (68°F to 90°F) (Figure 3).

Figure 3: How inlet temperature impacts reliability. (Source: Mouser Electronics)

Each degree Celsius increase in data centre operating temperature is said to reduce the associated cooling costs by about 7%, so allowing equipment to run (slightly) warmer can be a real benefit to operating expenditure.

The newer WBG materials can cope with higher junction temperatures than their silicon counterparts, so these become an enabler for running equipment (especially high-frequency power supplies) at elevated temperatures.

Power Density Is Where It’s At

Efficiency can often be improved by reducing the switching frequency, but this implies larger passive components and bigger power converters. While this will improve reliability as the temperature is lower, it comes at the cost of space, which creates system-level challenges.

Running hotter allows system engineers to pack more functionality into a given cabinet, whether in data centres or in industry, where standard-sized housings are almost always packed with motor drives and PLCs.

New, high-performance power converters with smaller form factors can eliminate the need for an additional cabinet, reducing costs (and space) by using an existing one. As floor space is expensive, there is a tangible gain to be realised by saving space, especially if that space can be used for revenue-generating equipment.

Summary

Power supply selection should not be based solely on datasheet efficiency figures. Factors such as system or process efficiency, environmental obligations, and financial goals are at least as important. While manufacturers strive to improve power supply efficiency through advanced topologies and materials, end users often prioritise power density, as it allows them to maximise revenue-generating equipment in limited space. Low-load efficiency is critical, and industry standards such as the 80 PLUS initiative address this aspect. Achieving ever-higher efficiency levels becomes increasingly challenging and costly, with diminishing returns. The focus on efficiency should therefore be balanced against overall cost, reliability, and environmental impact, considering factors such as acquisition, disposal, and heat management. Power density plays a significant role, allowing more functionality within limited space and reducing costs. Ultimately, a holistic approach that considers all of these factors is necessary to make informed power supply decisions.

About the Author

Part of Mouser's EMEA team in Europe, Mark joined Mouser Electronics in July 2014, having previously held senior marketing roles at RS Components. Prior to RS, Mark spent eight years at Texas Instruments in applications support and technical sales roles and holds a first-class Honours degree in Electronic Engineering from Coventry University.
