Cooling problems generally go unrecognised because data centres have traditionally operated at power densities well below their design values. However, recent increases in the power density of new IT equipment are pushing data centres to their design limits, and many are now incapable of providing effective cooling.
The raised floor method has been the preferred option for many years because it is cost-effective and the technology is simple and flexible. CRAC (Computer Room Air-Conditioning) systems nonetheless have a number of disadvantages: the airflow around the rack is critical to cooling performance, and if the height of the raised floor is reduced or swirl mixing occurs (perhaps as a result of cable laying), cooling performance can be diminished. Similar problems can occur if further enclosures or additional outlet tiles are installed, as the available cooling must then be shared among more outlets. Servers that had been receiving sufficient cooling air can then develop ‘hot spots’, endangering sensitive equipment.
It is essential to ensure that the air intakes and outlets of all the servers face into defined cold and warm aisles, and that a sufficient body of cold air is available which cannot mix with any warm exhaust air before it is consumed. Although the cabinet is often thought of as playing a supporting role, it actually performs a critical function: it prevents hot exhaust air from the installed equipment from circulating back into the equipment air intake. It is commonly assumed that hot exhaust air will simply rise away from the equipment; in practice, however, the recirculation effect is far stronger than the buoyancy of the exhaust air, so hot air is drawn back into the intakes rather than rising clear.
A recent development is ‘cold aisle containment’, which addresses these issues by installing the climate control components at the top of the enclosure, away from the raised floor. This method prevents cold and warm air from mixing, improves the efficiency of the CRAC system and reduces energy demand.
EC (electronically commutated) fans are now used in the latest generation of CRAC systems, and energy savings of up to 30% can be achieved in normal part-load operation. Placing the fans within the raised floor also provides additional space for the obliquely fitted heat exchangers, avoids unnecessary air deflections and minimises flow resistance.
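The scale of these part-load savings follows from the fan affinity laws, under which airflow scales linearly with fan speed while fan power scales with roughly the cube of speed. The following sketch is illustrative only (it assumes ideal cube-law behaviour, not the figures for any particular CRAC product):

```python
def fan_power_fraction(speed_fraction):
    """Fan affinity laws: airflow scales linearly with fan speed,
    but fan power scales with the cube of speed (idealised)."""
    return speed_fraction ** 3

# At 70% airflow, an ideal fan draws only ~34% of full power,
# so part-load savings of 30% or more are plausible.
print(round(fan_power_fraction(0.7), 2))  # → 0.34
```

In practice, motor and drive losses mean real savings are smaller than the ideal cube law suggests, which is why speed-controllable EC fans, rather than fixed-speed fans with dampers, are needed to capture them.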
Proper airflow is essential to effective cooling, but it is not sufficient on its own. Proper layout of the racks is also critical to ensure that the air temperature is correct and the air quantity is sufficient. It is well documented that placing racks in rows and reversing the direction that alternate rows face significantly reduces recirculation. The distribution of loads can also stress a data centre's capabilities: pockets of heat load, or ‘hot spots’, typically occur when high-density, high-performance servers are packed into racks. This may demand more CRAC units, or a lowering of the air temperature set point, either of which has a negative impact on energy efficiency.
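The relationship between heat load, air quantity and air temperature can be made concrete with the standard sensible-heat equation, Q = P / (ρ · cp · ΔT). The rack load and temperature rise below are illustrative values, not figures from the article:

```python
def required_airflow_m3s(heat_load_w, delta_t_k, rho=1.2, cp=1005.0):
    """Volume flow (m^3/s) needed to remove heat_load_w watts with a
    delta_t_k temperature rise between intake and exhaust, assuming
    air density rho (kg/m^3) and specific heat cp (J/(kg*K))."""
    return heat_load_w / (rho * cp * delta_t_k)

# Illustrative 10 kW rack with a 12 K intake-to-exhaust rise:
q = required_airflow_m3s(10_000, 12)
print(round(q, 2), "m^3/s")  # ≈ 0.69 m^3/s
```

Doubling the rack's heat load doubles the airflow required at the same ΔT, which is why packed high-density racks quickly outrun the air a shared raised floor can deliver.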
The concept of redundancy also plays a central role within the data centre. Nowadays, almost all business processes are considered critical, as they rely heavily on the IT function. Technically speaking, data centres can be built in accordance with Tier II or Tier IV standards, which assure a certain level of availability. In many industries, however, the cost of the higher tiers bears no reasonable relation to the risk, so a compromise must be struck while keeping the infrastructure as secure as possible.
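What a tier level means in practice can be seen by converting an availability percentage into annual downtime. The percentages below are the commonly cited Uptime Institute figures for Tier II and Tier IV, used here for illustration:

```python
HOURS_PER_YEAR = 8760

def downtime_hours(availability_pct):
    """Annual downtime implied by an availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

# Commonly cited availability figures (illustrative):
print(round(downtime_hours(99.741), 1))  # Tier II: ~22.7 h/year
print(round(downtime_hours(99.995), 1))  # Tier IV: ~0.4 h/year
```

The gap between roughly a day and under half an hour of downtime per year is what the extra Tier IV redundancy buys, and whether that is worth the cost depends entirely on the business risk.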
To minimise system failures and data loss, a system that continuously monitors specified factors ensures that any potential risk is quickly identified. Wireless I/O units can be connected to a processing unit that collects information from sensors detecting hot spots, smoke, fire, restricted airflow or perhaps a blocked filter mat. This type of monitoring allows a problem to be rectified quickly, before it affects data centre operation.
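The core of such a monitoring unit is simple threshold checking over the collected readings. The sketch below is entirely hypothetical (the sensor names and threshold values are invented for illustration and do not describe any specific product):

```python
# Hypothetical thresholds; both names and values are illustrative.
THRESHOLDS = {"intake_temp_c": 27.0, "airflow_m3s": 0.5}

def check(readings, thresholds=THRESHOLDS):
    """Return alarm messages for out-of-range readings: temperature
    alarms above its threshold, airflow alarms below its threshold."""
    alarms = []
    if readings.get("intake_temp_c", 0.0) > thresholds["intake_temp_c"]:
        alarms.append("hot spot: intake temperature high")
    if readings.get("airflow_m3s", float("inf")) < thresholds["airflow_m3s"]:
        alarms.append("low airflow: check for blocked filter mat")
    return alarms

# A rack running hot with restricted airflow raises both alarms:
print(check({"intake_temp_c": 31.0, "airflow_m3s": 0.3}))
```

A real system would add smoke and fire inputs, alert delivery and logging, but the principle is the same: compare each reading against a configured limit and escalate before the condition affects operation.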
Avoidable mistakes that are routinely made when installing cooling systems and racks in data centres or network rooms compromise availability and increase costs. Technologies such as blade servers are a huge investment and it is vital that the cooling infrastructure engineering is up to the task.
John Wilkins is with Rittal UK