The Data Center Temperature Debate

Ariel Liu | Jul 21, 2009

Though never directly articulated by any data center authority, the prevailing practice surrounding these critical facilities has often been “The colder, the better.” However, some leading server manufacturers and data center efficiency experts share the opinion that data centers can run far hotter than they do today without sacrificing uptime, and with huge savings in both cooling-related costs and CO2 emissions. One server manufacturer recently announced that its rack of servers can operate with inlet temperatures of 104 deg F.

Why push the envelope? Because the cooling infrastructure is an energy hog. Operating 24x7x365, it consumes a great deal of electricity to create the optimal computing environment, which may hover anywhere between 55 and 65 deg F. (The current “recommended” range from ASHRAE is 18-27 deg C, or 64.4-80.6 deg F.)

To achieve efficiencies, a number of influential end users are running their data centers warmer and are advising their contemporaries to follow suit. But the process isn’t as simple as raising the thermostat in your home.  Here are some of the key arguments and considerations:

Position:  Raising server inlet temperature will realize significant energy savings.

Arguments for:

·          Sun Microsystems, both a prominent hardware manufacturer and data center operator, estimates a 4% savings in energy costs for every one-degree increase in server inlet temperature (Miller, 2007).

·          A higher temperature setting can mean more hours of “free cooling” through air-side or water-side economizers. This is especially compelling in a location like San Jose, California, where outside air (dry-bulb) temperatures are at or below 70 deg F for 82% of the year. Depending on geography, the annual savings from economization could exceed six figures. (A back-of-the-envelope reading of both figures appears in the sketch after this list.)
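
For a rough sense of what these two figures imply, here is a minimal Python sketch. The 4%-per-degree heuristic is the Sun estimate cited above; the baseline cooling spend, the size of the setpoint increase, and the economizer fraction are hypothetical numbers chosen purely for illustration, and the two results are independent rough estimates rather than additive savings.

```python
# Back-of-the-envelope reading of the two savings figures cited above.
# All dollar amounts and the 8 deg F increase are hypothetical inputs.

SAVINGS_PER_DEG_F = 0.04  # Sun Microsystems estimate: ~4% per 1 deg increase in inlet temperature

def setpoint_savings(annual_cooling_cost, degrees_raised):
    """Rough annual savings from raising the inlet setpoint (linear approximation)."""
    return annual_cooling_cost * SAVINGS_PER_DEG_F * degrees_raised

def economizer_savings_bound(annual_cooling_cost, free_cooling_fraction):
    """Upper bound on economizer savings; ignores residual fan and pump energy."""
    return annual_cooling_cost * free_cooling_fraction

if __name__ == "__main__":
    baseline = 500_000  # hypothetical annual cooling spend, USD
    print(f"Raise setpoint by 8 deg: ~${setpoint_savings(baseline, 8):,.0f}/yr")
    print(f"Economizer for 82% of hours: up to ~${economizer_savings_bound(baseline, 0.82):,.0f}/yr")
```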

Arguments Against:

·          The cooling infrastructure has certain design setpoints. How do we know that raising server inlet temperature won’t result in false economy, causing additional, unnecessary consumption in other components like the server fans, pumps, or compressors?

·          Free-cooling, while great for new data centers, is an expensive proposition for existing ones. The entire cooling infrastructure would require re-engineering and may be cost prohibitive and unnecessarily complex.

·          Costs from thermal-related equipment failures or downtime will offset the savings realized from a higher temperature setpoint.

Position: Raising server inlet temperature complicates reliability, recovery, and equipment warranties.

Arguments for:

·          Inlet air and exhaust air frequently mix in a data center. Temperatures are kept low to offset this mixing and to keep the server inlet temperature within ASHRAE’s recommended range.  Raising the temperature could exacerbate already-existing hotspots.

·          Cool temperatures provide an envelope of cool air in the room, an asset in the case of a cooling system failure.  The staff may have more time to diagnose and repair the problem and, if necessary, shut down equipment gracefully.

·          In the case of the 104 degree F server, what’s the chance every piece of equipment—from storage to networking—would perform reliably? Would all warranties remain valid at 104 deg F?

Arguments Against:

·          Raising the data center temperature should be one part of a broader efficiency program. The temperature increase must follow best practices in airflow management: using blanking panels, sealing cable cutouts, eliminating cable obstructions under the raised floor, and implementing some form of air containment. These measures can effectively reduce the mixing of hot and cold air and allow for a safe, practical temperature increase.

·          The 104 degree F server is an extreme case that encourages thoughtful discussion and critical inquiry among data center operators. After such a study, perhaps a facility that once operated at 62 deg F now operates at 70 deg F. Changes like these can significantly improve energy efficiency without compromising availability or equipment warranties.

Position: Servers are not as fragile and sensitive as one may think. Studies performed in 2008 underscore the resiliency of modern hardware.

Arguments for:

·          Microsoft ran servers in a tent in the damp Pacific Northwest from November 2007 through June 2008. They experienced no failures.

·          Using an air-side economizer, Intel subjected 450 high-density servers to the elements—temperatures as high as 92 deg F and relative humidity ranging from 4 to 90%. The server failure rate during this experiment was only marginally higher than that of Intel’s enterprise facility.

·          Data centers can operate with a temperature in the 80s and still be ASHRAE compliant. The upper limit of their recommended temperature range increased to 80.6 deg F (up from 77 deg F).

Arguments Against:

·          High temperatures, over time, affect server performance. Server fan speed, for instance, will increase in response to higher temperatures. This wear and tear can shorten the device’s life.

·          Studies from data center behemoths like Microsoft and Intel may not be relevant to all businesses:

o   Their enormous data center footprint makes them better able to absorb the occasional server failure that may result from excessive heat.

o   They can leverage their buying power to receive gold-plated warranties that permit higher temperature settings.

o   They are most likely refreshing their hardware at a more rapid pace than other businesses. If that server is completely spent after 3 years, no big deal. A smaller business may need that server to last longer than 3 years.

Position: Higher inlet temperatures may result in uncomfortable working conditions for data center staff and visitors.

Arguments for:

·          Consider the 104 deg F rack. The hot aisle could be anywhere from 130 to 150 deg F. Even the higher end of ASHRAE’s recommended range (80.6 deg F) would put hot-aisle temperatures around 105-110 deg F. Staff servicing these racks would endure very uncomfortable working conditions. (The arithmetic behind these figures is sketched after this list.)

·          In response to higher temperatures, server fans will spin faster to move more air, which raises the noise level in the data center. The noise may approach or exceed OSHA sound limits, requiring occupants to wear hearing protection.
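
The hot-aisle figures above follow from simple arithmetic: hot-aisle temperature is roughly the inlet temperature plus the temperature rise (delta-T) across the servers. A minimal sketch, where the delta-T values are assumptions chosen only to reproduce the numbers quoted above:

```python
# Hot-aisle temperature is roughly inlet temperature plus the rise (delta-T)
# across the servers. The delta-T values below are assumptions; real values
# vary with server load and fan speed.

def hot_aisle_f(inlet_f, delta_t_f):
    """Approximate hot-aisle temperature in deg F."""
    return inlet_f + delta_t_f

print(hot_aisle_f(104.0, 26), "to", hot_aisle_f(104.0, 46))  # roughly 130-150 deg F
print(hot_aisle_f(80.6, 25), "to", hot_aisle_f(80.6, 29))    # roughly 105-110 deg F
```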

Arguments Against:

·          It goes without saying that as the server inlet temperature increases, so does the hot aisle temperature. Businesses must carefully balance worker comfort and energy efficiency efforts in the data center.

·          Not all data center environments have high user volume. Some high performance/supercomputing applications operate in a lights-out environment and contain a homogeneous collection of hardware. These applications are well suited for higher temperature setpoints.

·          The definition of data center is more fluid than ever. The traditional brick and mortar facility can add instantaneous compute power through a data center container without a costly construction project. The container, segregated from the rest of the building, can operate at higher temperatures and achieve greater efficiencies (Some close-coupled cooling products function similarly).

Conclusions

The movement to raise data center temperatures is gaining momentum, but it will face opposition until these concerns are addressed. Reliability and availability are at the top of any IT professional’s performance plan, and for that reason most have so far decided to err on the side of caution: keep it cool at all costs. Yet higher temperatures and reliability are not mutually exclusive. There are ways to safeguard your data center investments and become more energy efficient.

Temperature is inseparable from airflow management; data center professionals must understand how air moves around, into, and through their server racks. Computational fluid dynamics (CFD) analysis can help by modeling and charting projected airflow across the data center floor. But cooling equipment does not always perform to spec, and the model’s inputs can miss key obstructions, so onsite monitoring and adjustment are critical to ensure that the CFD predictions match reality.
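
One practical way to keep the model honest is to compare CFD-predicted rack inlet temperatures against spot measurements and flag the racks where the two diverge. The sketch below illustrates that check; the rack names, temperatures, and tolerance are hypothetical values, not measurements from any real facility.

```python
# Flag racks where measured inlet temperature diverges from the CFD prediction.
# All rack names, temperatures, and the tolerance are hypothetical examples.

TOLERANCE_F = 3.0  # assumed acceptable gap between model and measurement

predicted_f = {"rack-A1": 68.0, "rack-A2": 71.5, "rack-B1": 66.0}  # CFD model output
measured_f  = {"rack-A1": 69.2, "rack-A2": 78.0, "rack-B1": 66.5}  # onsite readings

for rack, predicted in predicted_f.items():
    gap = measured_f[rack] - predicted
    if abs(gap) > TOLERANCE_F:
        print(f"{rack}: measured {measured_f[rack]} vs predicted {predicted} "
              f"({gap:+.1f} deg F) -- re-check obstructions and cooling performance here")
```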

Data centers with excess cooling are prime environments to raise the temperature setpoint. Those with hotspots or insufficient cooling can start with low-cost remedies like blanking panels and grommets.  Close-coupled cooling and containment strategies are especially relevant, as server exhaust air, so often the cause of thermal challenges, is isolated and prohibited from entering the cold aisle.

With airflow addressed, users can focus on finding their “sweet spot”—the ideal temperature setting which aligns with business requirements and improves energy efficiency.  Finding it requires proactive measurement and analysis. But the rewards—smaller energy bills, improved carbon footprints and a message of corporate responsibility—are well worth the effort.
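
In practice that search is incremental: raise the setpoint a step at a time, let the room stabilize, and stop once the hottest measured server inlet runs out of headroom against the chosen limit. The sketch below illustrates the loop under stated assumptions; the limit, step size, headroom, starting point, and the stand-in hotspot model are all hypothetical placeholders rather than recommendations.

```python
# Step the setpoint up while the hottest measured server inlet keeps some
# headroom below the chosen limit. All numbers here are hypothetical.

INLET_LIMIT_F = 80.6   # ASHRAE recommended upper limit cited above
STEP_F = 1.0           # assumed increment per adjustment
HEADROOM_F = 3.0       # assumed safety margin kept below the limit

def hottest_inlet_f(setpoint_f):
    """Stand-in for a monitoring query: assume the worst rack runs a fixed
    6 deg F above the supply setpoint (a hypothetical hotspot offset)."""
    return setpoint_f + 6.0

def find_setpoint(start_f=68.0, max_setpoint_f=78.0):
    """Raise the setpoint one step at a time until inlet headroom runs out."""
    setpoint = start_f
    while setpoint + STEP_F <= max_setpoint_f:
        candidate = setpoint + STEP_F
        # in practice: apply the candidate setpoint and let the room stabilize
        if hottest_inlet_f(candidate) > INLET_LIMIT_F - HEADROOM_F:
            break  # no headroom left; keep the last safe setpoint
        setpoint = candidate
    return setpoint

print(find_setpoint())  # with these assumptions, stops at 71.0 deg F
```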

 

Bibliography

Miller, R. (2007, September 24). Data Center Cooling Set Points Debated. Retrieved February 19, 2009, from Data Center Knowledge: http://www.datacenterknowledge.com/archives/2007/09/24/data-center-cooling-set-points-debated