Today, the EvoSwitch multi-tenant data centers feature a design PUE (Power Usage Effectiveness) of 1.2. That figure applies to our new data center builds, but experience shows that we can consistently operate below it. The question is: how low can you go when establishing an energy-efficient data center PUE?
Author: Eric Lisica, Operations Director at EvoSwitch
By combining the latest data center energy-efficiency design and infrastructural components with easy-to-calibrate modules, we have driven the PUE in our data centers down almost as far as it will probably go. In our Amsterdam facility, EvoSwitch AMS1 Campus Hall 6 for example, we manage a day-to-day PUE range of 1.1 to 1.15. For data centers of our size, this really is a very low figure. Nevertheless, we’re always on the lookout for infrastructural enhancements.
Campus Expansion Amsterdam
With last month’s announcement of the new 7.5 MW, three-phase build-out of EvoSwitch’s AMS1 Campus in Amsterdam, the Netherlands (AMS1 Campus Hall 7, with Hall 8 and Hall 9 planned for subsequent phases), I think now is a good time to reflect on the relationship between energy efficiency, design, and operations in the data center environment.
First of all, we’ve always been open about the PUE of our data centers. Looking back over the last decade, we’ve seen many colocation providers reluctant to share their PUE figures. That has never been the case with EvoSwitch. Back in 2009, we shared our aim of reaching a PUE of 1.2 for our new data center builds, and we set out the significant energy savings (70-80% on power overhead) this would create. We also offer real-time PUE reporting, using more than 8,000 power monitoring devices, even though this is far from standard in the industry.
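As a back-of-the-envelope illustration of what those monitoring devices feed into (a minimal sketch, not EvoSwitch’s actual metering code), PUE is simply total facility power divided by the power delivered to IT equipment:

```python
def compute_pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power (dimensionless, >= 1.0)."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Example: a facility drawing 1,200 kW in total, of which 1,000 kW
# reaches the IT equipment, runs at the design PUE of 1.2.
print(compute_pue(1200.0, 1000.0))  # 1.2
```

Real-time reporting then amounts to evaluating this ratio continuously from live meter readings rather than from a one-off measurement.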
As noted, our day-to-day PUE is now even below our design PUE of 1.2. How are we beating our own figure? This is partly due to our conservative estimating. In 2011 we were one of the first companies to use Indirect Adiabatic Free Cooling systems, so at the time we could not be sure how effective they would actually be. Since then, the Indirect Adiabatic Free Cooling systems – and the way we have learned to use them – have outperformed our assumptions.
Granular Monitoring & Control
A key factor in reducing data center PUE is optimizing the set points. This requires careful balancing of the inlet and return temperatures for each individual module. For example, where our Service Level Agreement (SLA) specifies a maximum of 24 degrees Celsius at the inlet point, we might measure an exit/return temperature of, say, 30 or even 40 degrees. Fortunately, the modules we use are individually adjustable, giving us a granular overview and a greater overall efficiency gain. We can then calibrate that specific module to bring the exit temperature down. The granularity of our monitoring and control system lets us repeat this optimization process over and over, across a greater variety of monitoring points on the data floor.
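The per-module optimization loop described above can be sketched roughly as follows. This is an illustrative example with hypothetical module data and an assumed return-temperature target, not EvoSwitch’s actual control logic:

```python
# Hypothetical per-module readings; IDs and values are illustrative only.
SLA_MAX_INLET_C = 24.0    # SLA maximum at the inlet point (from the article)
RETURN_TARGET_C = 32.0    # assumed calibration target for the exit temperature

modules = [
    {"id": "mod-01", "inlet_c": 23.0, "return_c": 30.0},
    {"id": "mod-02", "inlet_c": 23.5, "return_c": 40.0},
]

def within_sla(module: dict) -> bool:
    # The inlet temperature must always stay within the SLA ceiling.
    return module["inlet_c"] <= SLA_MAX_INLET_C

def needs_recalibration(module: dict) -> bool:
    # A module whose exit/return temperature runs above target is a
    # candidate for individual recalibration to bring it back down.
    return module["return_c"] > RETURN_TARGET_C

flagged = [m["id"] for m in modules if within_sla(m) and needs_recalibration(m)]
print(flagged)  # ['mod-02']
```

Because every module is individually adjustable, this check-and-calibrate pass can be repeated continuously as the monitoring data comes in.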
We are aware that this level of granular control and variation would not work for every data center operator, but it perfectly matches our retail colocation proposition. We also expect to improve these monitoring and control processes even further in the new data halls to be deployed.
Modularity, PUE of 1.06
Designed and engineered in-house, the modular setup of the EvoSwitch data centers also contributes to our low PUE figure. The highly modular design delivers a low PUE from day one. Many retail facilities only manage to significantly lower their PUE once enough tenants have moved their systems and workloads into the data center.
At the same time, modularity enhances the customer experience. Our customers want to be able to customize their data center space in terms of power density, rack numbers, and UPS capacity, so we have engineered our latest 88-rack data center module to be even more modular. The design allows for easy adjustment by our engineering teams: they can reduce capacity, add interconnecting sections to enlarge it, and tune UPS capacity up or down.
The net impact of all this is that we are approaching the ultimate limits of our PUE figure. Our power overhead currently lies around 10%, and the practical floor is probably only a few percentage points lower, around 6-7% overhead, which equals a PUE of 1.06 to 1.07. So there are still only a few percentage points to be gained at this moment. As a colocation services company focused on innovation from the beginning, however, EvoSwitch will continue to close that gap. Operating a 12.5 MW data center campus in Amsterdam, next to our U.S. facility in Manassas, Virginia, the impact of gaining another few percentage points can be huge – financially as well as environmentally.
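The arithmetic behind those figures is straightforward. Treating "power overhead" as all non-IT power expressed as a fraction of the IT load (one common convention; a simplifying assumption here), the relationship to PUE is:

```python
def pue_from_overhead(overhead_fraction: float) -> float:
    # PUE = (IT power + overhead power) / IT power = 1 + overhead_fraction,
    # where overhead_fraction is non-IT power as a fraction of the IT load.
    return 1.0 + overhead_fraction

print(pue_from_overhead(0.10))   # ~10% overhead today, i.e. a PUE around 1.10
print(pue_from_overhead(0.065))  # 6-7% overhead corresponds to a PUE of ~1.06-1.07
```

On this convention, each percentage point shaved off the overhead takes roughly 0.01 off the PUE, which is why the remaining gap is measured in just a few points.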
About Eric Lisica and EvoSwitch
Eric Lisica has been the Operations Director of colocation services provider EvoSwitch since June 2013. His 15 years of management experience in the data center, hosting, and telecommunications industry include positions with Interoute, PSINet, and Verizon Terremark. He has run operations departments in both national and international organizations and is used to working in dynamic, technical environments.
EvoSwitch provides secure and sustainable data center services, with cloud- and carrier-neutral data centers in Europe and the United States currently offering 139,900 sq. ft. (14,000 m2) and ample room for further growth on both sides of the Atlantic. The company is home to growing ecosystems of customers around interconnection and hybrid cloud, operating at the edge of the Internet and providing access to public clouds. EvoSwitch’s data centers, with ultra-low PUE figures while utilizing 100% renewables, meet strict compliance and third-party accredited standards including ISO 27001:2013, ISO 14001:2004, PCI-DSS, SOC 1 Type II, and LEED Gold.
To learn more about EvoSwitch, visit https://evoswitch.com.