Before Sun Microsystems (now Oracle) launched Project Blackbox, later renamed the Sun Modular Datacenter, a number of companies, including Google, had played around with the idea of modularising the data centre. However, it was Sun that took the headlines when the Sun Modular Datacenter launched in 2008.
Sun took a standard 20-foot shipping container and built in all the racks, cabling and equipment required for it to become a data centre. All it needed was external power, chiller units and a network link.
So what exactly is a modular data centre? It all depends on who you talk to. While there is no standardised definition, most in the industry see it as a container-based solution. However, the length and even the width of the container can vary by manufacturer and product range.
Despite this, what makes it modular is that the data centres can be bolted together and even stacked on top of each other to create a larger facility. This is being driven by both cost and technology.
When Sun launched the Modular Datacenter, it talked of being able to buy a data centre for less than one percent of the price of a traditional facility. There were many reasons for this: no planning permission; no building costs or need to acquire an existing facility; no decommissioning or reconfiguration costs as business demands change; and no business rates on the use of a building.
All of those reasons are as true today as they were back then, but perhaps the biggest cost factor is speed. It can take months or even years to build or refurbish a facility, whereas a modular data centre can be provisioned within weeks of the order being placed.
For service providers and data centre owners, modular is a huge opportunity. Google and Facebook have hundreds of containers stacked together to create their data centres. When they need additional capacity, they just drop in more containers.
One of the real challenges for the data centre is flexibility. For decades, the data centre didn't change much. Even as we moved from mainframes to mini-computers and into early rack-based systems, the data centre was reasonably static. With the explosion of commodity computing and blade servers, the data centre became a place of constant change.
For older facilities, that means redesigning, overhauling and updating in order to deliver the power and cooling requirements of new technologies. This is expensive; an overhaul of a data centre can cost millions of pounds and take months. During that period, no money is coming in, and in a competitive market that means the potential loss of customers.
With modular data centres, a refurbishment is simply a module swap: customer systems are moved from one module to another, while the underperforming module is replaced or updated as required.
Dense computing brings scale-back
Modular data centres are not just about providing extra capacity or covering for a facility under refurbishment. With dense computing, data centres have shrunk in size for many companies, and downsizing a data centre is just as expensive as adding more capacity. Power and cooling systems need to be maintained even if they are not being used, and data centre halls that are not fully occupied need to be partitioned to reduce waste, which means refurbishment costs. Using modular data centres, companies can quickly downsize or move from older, larger systems to smaller, more efficient ones.
One of the real benefits of modular data centres has been the ability to deploy in new areas. The football World Cup, the Super Bowl, the summer and winter Olympics and other global events all require data centre facilities, especially for the media. These events generate large amounts of data, film and audio, and run vast Internet sites to provide public information.
Disaster recovery operations, oil rigs, intelligence headquarters in war zones and even major political conferences have also bought into modular data centres. They all need to manipulate very large volumes of data, and modular data centres make this possible while providing an opportunity to do so securely.
In 2010, the US government took a look at how it responded to major disasters. One of the failings was the inability of government departments to respond due to a lack of data and IT facilities. The result was a document highlighting how departments should evaluate and then commission modular facilities.
Modular data centres are engineered to customer requirements. Because they can be accessed from all sides, the components are integrated to create the optimal configuration for power and cooling. Over time, as components change, some of that initial integration may be lost, but those losses will be offset by the power efficiencies of new generations of IT equipment.
Being able to integrate equipment within a confined space also means it can be packed as densely as the space allows. Racks of blade systems require kilowatts of power per square foot, something that many traditional data centres struggle to deliver. Modular data centres can support racks of blade systems, petabytes of storage and very high bandwidth, something a traditional facility would need ten times the floor space to accommodate.
A case study
The ICT sector in South Africa is growing fast. One of the rising stars of the sector, Datacentrix, provides infrastructure, managed services and business solutions to both public sector and corporate customers. This means it needs to be able to provide data centre capacity on demand, either on customer sites or in its own data centres. Datacentrix business development manager Briam Lendrum takes up the story:
“When we decided to take on the T4 Modular Data Centre (MDC) range for our customers, Cannon Technologies flew one into South Africa at no extra cost to us. It took just 14 weeks to get it fully commissioned. During the construction phase, we were getting temperatures of around 50°C in the morning. With the insulation that Cannon put into its MDC, we saw that drop by over 30°C inside the unit. Although we are only cooling the centre aisle, there is no excessive heat build-up in the rest of the MDC. This means that operators can work inside where necessary.”
Deploying a T4 MDC is a relatively straightforward task. It starts with a level concrete slab with proper drainage to ensure there is no risk of flooding. Next, the generators, network and other infrastructure, such as a chilled water supply, are installed and connected. The final step is the construction of the MDC itself, which is delivered as a flat pack.
Datacentrix decided to use two different power densities for its racks: computer racks can draw up to 7kW, while network racks can draw up to 2.2kW. These figures compare favourably with the majority of data centre spaces, where the average power for a computer rack is around 5kW.
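To put those densities in context, the short Python sketch below estimates how many racks, and roughly how much floor space, a given IT load would need at 7kW per rack compared with the 5kW average quoted above. The 120kW total load and the per-rack footprint are assumed figures for illustration only and do not come from Datacentrix.

    import math

    total_it_load_kw = 120.0      # hypothetical total IT load to host (assumed figure)
    footprint_m2_per_rack = 2.5   # assumed floor area per rack, including its share of aisle space

    for label, kw_per_rack in [("7kW MDC computer rack", 7.0), ("5kW traditional rack", 5.0)]:
        racks = math.ceil(total_it_load_kw / kw_per_rack)   # racks needed to host the load
        area = racks * footprint_m2_per_rack                # rough floor space required
        print(f"{label}: {racks} racks, roughly {area:.0f} square metres")

Under these assumptions, the higher-density racks host the same load with about a quarter fewer racks, and correspondingly less floor space.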
Extending a traditional data centre facility is a complex process involving load tests, infrastructure reconfiguration and planning permission. With the MDC, Datacentrix believes that it has a system that not only allows it to extend as required, but one that provides a level of flexibility other approaches lack.
For example, it is possible to chain several MDCs together to create a larger space. Alternatively, they can be stacked on top of each other, though this needs to be considered during initial planning as it will affect the underlying slab on which the MDC is built.
Deploying a modular data centre might not be the first choice for many customers when they find themselves short of data centre capacity. However, with both the US and the EU expected to face a shortage of data centre capacity within the next five years, the ease and simplicity of Cannon Technologies’ MDC systems is something worth considering.
Matthew Goulding is managing director of Cannon Technologies