
Building Manager Green Tip

July 2011


Small Room - Big Consumption

Computing is now woven into everyday life at Harvard, so understanding how to operate data centers and server rooms efficiently is critical for building managers. Advances in building technologies and rapid growth in the demand for computing resources make optimizing energy consumption in these facilities both significant and complex.

While servers of the early 1990s consumed little more than 1 kW per rack, today's high-density computing technology can consume 10-15 kW per rack. It's not uncommon for these servers to require 1 ton (12,000 BTU/h) of cooling each!
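
To put those numbers in context, here is a quick back-of-the-envelope sketch in Python (the rack wattages are illustrative; 1 kW of IT load gives off about 3,412 BTU/h of heat, and 1 ton of cooling removes 12,000 BTU/h):

    # Back-of-the-envelope cooling estimate for a server rack.
    # 1 kW of IT load gives off about 3,412 BTU/h of heat;
    # 1 ton of cooling removes 12,000 BTU/h.
    BTU_PER_HR_PER_KW = 3412
    BTU_PER_HR_PER_TON = 12000

    def cooling_tons(rack_kw):
        """Approximate tons of cooling needed for a rack drawing rack_kw kW."""
        return rack_kw * BTU_PER_HR_PER_KW / BTU_PER_HR_PER_TON

    # An early-1990s 1 kW rack versus a modern 10-15 kW rack:
    for kw in (1, 10, 15):
        print(f"{kw:>2} kW rack -> {cooling_tons(kw):.1f} tons of cooling")

A modern 10-15 kW rack works out to roughly 3-4 tons of cooling, more than ten times what its early-1990s counterpart needed.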

What Can Be Done?

[Image: Basic hot aisle/cold aisle diagram]

While building managers have little ability to control the computing loads themselves, there are strategies that can help ensure the cooling is handled as efficiently as possible. The cooling load created by IT equipment is typically the largest contributor to these facilities' high energy consumption. GBS strongly recommends working with your local IT staff before making any changes to ensure that your server rooms are running at optimum efficiency.

  • Temperature and Humidity Setpoints – In 2008, the ASHRAE technical committee responsible for data center standards widened the recommended ranges of temperature and humidity setpoints to reflect improvements in equipment durability. For buildings with air economizers, this could allow you to run on 'free cooling' for a longer period of time without damaging equipment. Note, however, that setting temperatures too high could have disastrous effects if the equipment is allowed to overheat; a sketch of a simple setpoint check follows this list.
  • Air Flow – Best practice is to control airflow so that warm return air mixes with cool supply air as little as possible. The most common configuration is a 'hot aisle/cold aisle' scheme, where supply air is provided on one side of the rack and return registers are located at the back, with one side usually physically segregated from the other by blanking panels or other air barriers.
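
As a rough illustration of the setpoint check mentioned above, the sketch below assumes the widely cited 2008 recommended dry-bulb range of about 18-27 °C (64.4-80.6 °F); confirm the actual limits for your equipment class with IT staff and the ASHRAE guidelines before changing anything:

    # Illustrative setpoint check. The 18-27 C (64.4-80.6 F) dry-bulb range
    # below is an assumption based on the widely cited 2008 ASHRAE update;
    # verify the limits for your equipment class before acting on the result.
    RECOMMENDED_LOW_C = 18.0
    RECOMMENDED_HIGH_C = 27.0

    def check_supply_setpoint(setpoint_c):
        if setpoint_c < RECOMMENDED_LOW_C:
            return "below range - likely overcooling and wasting energy"
        if setpoint_c > RECOMMENDED_HIGH_C:
            return "above range - risk of overheating equipment"
        return "within the recommended range"

    for temp_c in (13.0, 24.0, 29.0):
        print(f"{temp_c:.0f} C supply air: {check_supply_setpoint(temp_c)}")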

Metrics and Baselines

[Image: Blanking panels]

The most common metric for determining data center efficiency is Power Usage Effectiveness, or PUE. PUE is simply the ratio of total facility consumption (heating, cooling, lighting, and IT loads) to IT equipment consumption alone. A PUE of 1.0 would mean that 100% of the facility's energy goes to IT equipment (i.e., no cooling or lighting is used). A ratio of 2.0 would mean the facility is using twice as much energy overall as is required for the IT equipment alone, and a score of 3.0 would mean the facility is performing poorly.
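
For instance, a room that drew 500,000 kWh in a month, 300,000 kWh of it for IT equipment, would score 500,000 / 300,000, or about 1.67. A minimal sketch of the calculation (the meter readings are hypothetical):

    # PUE = total facility energy / IT equipment energy (dimensionless).
    def pue(total_facility_kwh, it_equipment_kwh):
        if it_equipment_kwh <= 0:
            raise ValueError("IT load must be positive")
        return total_facility_kwh / it_equipment_kwh

    # Hypothetical monthly meter readings for a small server room.
    total_kwh = 500_000  # heating, cooling, lighting, and IT combined
    it_kwh = 300_000     # IT equipment alone

    # Here every kWh of IT load carries about 0.67 kWh of overhead.
    print(f"PUE = {pue(total_kwh, it_kwh):.2f}")  # -> PUE = 1.67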

According to the EPA, "State of the Art" data centers in 2011 operate around a PUE of 1.2, with an average data center operating closer to 1.9. Recent data from the 60 Oxford St. data center shows it typically performs in the 1.3-1.5 range, though its monthly best over the past two years is 1.17.

Green Building Services provides consulting services to ensure that the design, construction and operation of Harvard's built environment have minimal environmental and human health impacts, maximize occupant comfort and generate an awareness of sustainable design and building operations. To learn more about our work and services, visit http://green.harvard.edu/gbs.