I finally took a leisurely tour around our nearly finished data centre. Almost all the big-ticket items are in. So starting in the UPS room:
Those are the first two cabinets of our Uninterruptible Power Supply system (with another two behind), the buffer between the servers and the mains supply or the diesel generator. We can pull the plug on the mains and switch to the generator, and batteries will keep us going for a few minutes.
We can even turn some very big switches to take a UPS out of the loop for maintenance:
Here’s one of many shelves of batteries:
In the early days, these batteries will give us more of a buffer than we need – probably an hour or more, depending on how quickly we fill up the main room. And we can expand both the UPS units themselves and the batteries without interrupting the supply to the equipment.
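As a back-of-the-envelope illustration of why the buffer shrinks as the main room fills, runtime is roughly usable battery energy divided by load. All the numbers below are invented for illustration; the post only says "a few minutes" at full load and "an hour or more" while lightly loaded:

```python
# Rough UPS runtime estimate: usable battery energy divided by load.
# The battery capacity, usable fraction, and loads are assumptions,
# not figures from our actual installation.

def runtime_minutes(battery_kwh, load_kw, usable_fraction=0.8):
    """Minutes of runtime from a battery bank at a given load."""
    return 60 * battery_kwh * usable_fraction / load_kw

print(round(runtime_minutes(battery_kwh=100, load_kw=500)))  # full room
print(round(runtime_minutes(battery_kwh=100, load_kw=60)))   # early days
```

The same battery bank gives about ten minutes at a hypothetical 500 kW full-room load, but well over an hour at a lightly loaded 60 kW.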
Here’s the hole where our electricity substation will be, out the front of the building.
At the back here’s the rear compound, and you can see the concrete pad where the diesel generator will be hoisted in a few days’ time:
There’s room there for two more generators, to cover the full site load and also in case one fails. It’s a bit over-the-top, but having N+1 generators (and not just N+1 power sources) lets us meet particular external data centre standards fairly easily, which impresses government and other compliance types.
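N+1 here just means that with any single generator out of action, the remaining units must still cover the whole site. A minimal sizing check, with entirely invented capacity figures:

```python
# N+1 sizing check: with any one generator failed, the remaining
# capacity must still cover the full site load. The 800 kW unit size
# and 1500 kW site load are hypothetical example numbers.

def is_n_plus_1(unit_kw, n_units, site_load_kw):
    """True if the site survives the loss of any single generator."""
    return (n_units - 1) * unit_kw >= site_load_kw

print(is_n_plus_1(unit_kw=800, n_units=3, site_load_kw=1500))  # True
print(is_n_plus_1(unit_kw=800, n_units=2, site_load_kw=1500))  # False
```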
Also in the UPS room is the cabinet for our fancy building management system:
That’s where the controller will go that has access to every switch and measurement in the data centre: fire suppression, air conditioning power, fresh air cooling vents, missile defences etc. For obvious reasons, it’s not connected to any other part of our network (look what happened to all the other Battlestars…).
One of the main things it does is decide when to switch between fresh air cooling and direct expansion (DX) cooling methods. So here are the air handling units in the main room which push the air back and forth:
Now they can pass air over these external units, which are basically enormous refrigerators working by direct expansion cooling (thanks, Wikipedia):
But, as we mentioned before, we’ve done our sums and the same units can also simply open the vents to the chilly York air, as long as the outside temperature is lower than 16 degrees:
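The 16-degree threshold is from our sums; the sketch below is my own illustration of how that decision might look in a building management system. Real systems add a dead band (hysteresis) so they don’t flap between modes when the outside temperature hovers near the threshold – that band, and the mode names, are assumptions:

```python
# A minimal sketch of the free-cooling decision. The 16 deg C
# threshold is the real one; the hysteresis band and mode names
# are illustrative assumptions.

FREE_COOLING_MAX = 16.0   # outside temp (deg C) below which vents can open
HYSTERESIS = 1.0          # hypothetical dead band to avoid rapid switching

def choose_cooling_mode(outside_temp_c, current_mode):
    """Return 'fresh_air' or 'dx' given the outside temperature."""
    if current_mode == "fresh_air":
        # Stay on fresh air until we're clearly above the threshold.
        if outside_temp_c > FREE_COOLING_MAX + HYSTERESIS:
            return "dx"
        return "fresh_air"
    # Switch back to fresh air only once we're clearly below it.
    if outside_temp_c < FREE_COOLING_MAX - HYSTERESIS:
        return "fresh_air"
    return "dx"
```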
Whoosh. That’s most of the year, in fact. So we’re not using the power-hungry direct expansion cooling most of the time, and we’re hoping for a Power Usage Effectiveness figure of 1.1 for the cooling side of things (i.e. we’re hoping to only have to use 10% extra power to run all the cooling).
For the times when the building management system decides it’s too hot and closes all the vents to the outside world, using DX cooling the whole time, the figure would be nearer 1.5.
There will also be losses introduced by the UPS system, but we’re hoping that overall the PUE will be under 1.2 on a normal chilly day.
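The arithmetic behind those figures is simple: PUE is total facility power divided by the power actually delivered to the IT equipment, so a PUE of 1.1 means 10% overhead. The 100 kW IT load below is an invented example; the 1.1 and 1.5 targets are the ones discussed above:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT power.
# The 100 kW IT load is a hypothetical example figure.

def pue(it_load_kw, overhead_kw):
    """Total facility power divided by IT equipment power."""
    return (it_load_kw + overhead_kw) / it_load_kw

it_load = 100.0
print(pue(it_load, 10.0))   # 10 kW of cooling overhead -> PUE 1.1
print(pue(it_load, 50.0))   # 50 kW of DX-only overhead -> PUE 1.5
```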
Back inside, here’s the guts of the air handling unit, including the big pumps necessary to push the cooling fluid in and out:
The building management system also has the crucial decision of when to let these babies off:
That’s the Kidde FM200 Fire Suppression system, masses of high-pressure liquid which turns into a fire-suppressing gas when blasted out of the tubes criss-crossing the main room. If they have to go off, the gas doesn’t hurt anyone in the room (there used to be carbon-dioxide systems which basically suffocated people along with the fire), and the power doesn’t need to be cut, so the data centre soldiers on. But the bottles are one-shot, and refilling them costs quite a lot. So there are various measures to make sure it doesn’t happen too automatically.
To decide whether to sound the alarm or fire the suppression system, the building management system takes inputs from a set of sampling points across the ceiling, little tubes that look like this one:
There’s a box that sucks the air samples down, and using a laser decides how “smokey” things are getting:
Ultimately it’s the building management system that decides what to do, and that’s down to some high-stakes configuration and policy decisions by Peter and me.
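One common pattern for that kind of policy – not something the post confirms we use – is “double-knock” or coincidence detection: a single detector in alarm sounds the sirens, but the one-shot gas is only released when two independent detectors agree, and even then after a countdown. A sketch, with thresholds that are purely illustrative:

```python
# A sketch of "double-knock" (coincidence) release logic, a common
# way to stop a one-shot gas system firing on a single false reading.
# The alarm/fire thresholds below are illustrative assumptions, not
# our actual configuration.

def suppression_action(smoke_levels, alarm_threshold=0.08, fire_threshold=0.2):
    """smoke_levels: obscuration readings from independent detectors."""
    in_alarm = [s for s in smoke_levels if s >= alarm_threshold]
    confirmed = [s for s in smoke_levels if s >= fire_threshold]
    if len(confirmed) >= 2:
        # Two detectors agree: start the release countdown.
        return "release_after_delay"
    if in_alarm:
        # Single detection: sound the alarm, but hold the gas.
        return "alarm_only"
    return "normal"
```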
For more information about the work that has been carried out so far, read our step-by-step guide to the build.