The data center power crux
The world’s data centers are using more and more power as they grow. The question is: will the growth of data centers, with their ever-increasing power demands, soon outstrip the global capacity to supply their electricity needs?


Dave Martin, Principal, Australasia Data Centre Leader
Last updated: December 2019
The Data Center Power Crux will become a focus for the industry as we continue to build bigger facilities, consuming more power, to feed the global appetite for data.
Our appetite for data, and therefore for data centers, is growing exponentially, and it will be fueled further as the parts of the world that are not yet connected come online. Gaming, streaming, cloud computing, the IoT, the financial sector, and industry are all driving this growth with new data streams, services, and infrastructure; both the volume of data and our access to it have exploded, and our dependence on it has never been greater.
In America alone, data center electricity consumption was projected to increase to roughly 140 billion kilowatt-hours annually by 2020, the equivalent annual output of 50 power plants, costing businesses $13 billion annually in electricity bills and emitting nearly 100 million metric tons of carbon pollution per year.
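
As a rough sanity check on those numbers, the back-of-the-envelope sketch below reproduces them; the per-plant output and electricity price it uses are illustrative assumptions, not figures from the projection itself.

```python
# Rough sanity check on the projected US figures above.
# Assumptions (illustrative, not taken from the cited projection):
#   - one "average" power plant delivers ~2.8 billion kWh per year
#     (roughly a 320 MW plant running around the clock)
#   - commercial electricity costs ~$0.09 per kWh

annual_consumption_kwh = 140e9  # projected US data center consumption by 2020

avg_draw_gw = annual_consumption_kwh / 8760 / 1e6      # continuous average demand
equivalent_plants = annual_consumption_kwh / 2.8e9     # how many "average" plants
annual_cost_usd = annual_consumption_kwh * 0.09        # rough electricity bill

print(f"Average draw: {avg_draw_gw:.0f} GW")                      # ~16 GW
print(f"Equivalent power plants: {equivalent_plants:.0f}")        # ~50
print(f"Annual electricity cost: ${annual_cost_usd / 1e9:.0f}B")  # ~$13B
```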


The IT factories of the technology era
Data centers have become the IT factories of the current technology era. From the early days of dedicated computer and tape storage facilities to the development of global IT and the internet with associated connected services, there has been ongoing growth in the worldwide demand for data.
Early data centers were initially focused on maintaining the correct environment for the IT equipment, but as systems became more critical, reliability and resilience of the power and cooling infrastructure became more important.
In the past 10 years, data center power density has gone from 1kW/m² to 4kW/m², individual server racks now draw up to 10kW each and rising, and the facilities themselves are getting bigger as we install more and more racks. Ten years ago, a 10MW data center was considered large; now, with the emergence of cloud computing, data centers are measured in hundreds of megawatts.
The drive is now to deliver megawatts of IT capacity at ever-increasing density. These higher power densities and total capacities have forced a focus on operating efficiency, with most emphasis on the data center's PUE (power usage effectiveness): the ratio of the total power delivered to the facility to the 'useful' power delivered to the IT equipment.
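
As a concrete illustration of how PUE is calculated, here is a minimal sketch; the load breakdown is hypothetical rather than taken from any particular facility.

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# A hypothetical 10 MW facility, purely for illustration:

it_load_mw = 8.0            # power delivered to servers, storage, and network gear
cooling_mw = 1.5            # chillers, CRAHs, fans, and pumps
electrical_losses_mw = 0.4  # UPS, transformer, and distribution losses
lighting_misc_mw = 0.1      # lighting, security, offices

total_facility_mw = it_load_mw + cooling_mw + electrical_losses_mw + lighting_misc_mw
pue = total_facility_mw / it_load_mw

print(f"PUE = {pue:.2f}")  # 1.25: every 1 kW of compute needs 1.25 kW at the utility meter
```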


The Power Crux
This increasing data demand and power consumption is putting pressure on power systems. If the growth is extrapolated, we will reach a point where the strain on the global ability to generate and distribute power is no longer sustainable: in the cost of utility connections, the cost of power, the drain on resources, and the impact on the planet.


Current growth is unsustainable
Current data center design is focused on bigger and bigger facilities to support the growth in cloud services, data, and global connection. With finite resources, a growing population, and an increasing understanding of our impact on the planet, this growth is unsustainable.
We are faced with ever more users of data and ever more processes that depend on it.
Access to the internet, and to the data, processing, and storage behind it, is accepted by the United Nations as a basic human right, and connecting the rest of the unconnected population is a driving focus. Coupled with growing data traffic, storage, processing, and services, the global norm is now that our lives, the facilities we use, and the services we rely on demand access to data and grind to a halt without it.
For some time, we have been focused on improvements in PUE. PUE has been driven from 2.0+ a few years ago to as low as 1.15 today in the right circumstances. This reduction has been achieved by designing energy-efficient cooling and electrical systems. We have separated hot and cold air streams through aisle containment; we are using variations of free cooling systems, allowing our data halls to run hotter, and specifying energy-efficient equipment such as UPSs, transformers, and so on.
All of this has reduced PUE, saving millions of dollars in electricity costs for data center operators, but we are still seeing upward pressure on power consumption because of the sheer scale of modern data centers. Reducing PUE is no longer enough to counteract the growth in total power demand.
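
To see why efficiency gains alone cannot keep up, consider the rough comparison sketched below; the loads, PUE values, and electricity tariff are illustrative assumptions.

```python
# Energy saved by driving PUE down versus energy added by simply growing the IT load.
# All load, PUE, and tariff figures below are illustrative assumptions.

HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.09  # assumed commercial tariff, $/kWh

def annual_facility_kwh(it_load_mw: float, pue: float) -> float:
    """Total facility energy drawn per year for a given IT load and PUE."""
    return it_load_mw * 1000 * pue * HOURS_PER_YEAR

old_site = annual_facility_kwh(it_load_mw=10, pue=2.0)      # a 10 MW facility a decade ago
same_site = annual_facility_kwh(it_load_mw=10, pue=1.15)    # same IT load at a modern PUE
hyperscale = annual_facility_kwh(it_load_mw=100, pue=1.15)  # a modern 100 MW campus

savings_musd = (old_site - same_site) * PRICE_PER_KWH / 1e6
print(f"PUE 2.0 -> 1.15 saves ${savings_musd:.1f}M/yr on a 10 MW site")  # ~$6.7M/yr
print(f"But a 100 MW campus at PUE 1.15 still draws {hyperscale / old_site:.1f}x "
      f"the energy of the old 10 MW site")                               # ~5.8x
```

Under those assumptions, halving the overhead saves a few million dollars a year on one site, while a single modern hyperscale campus still draws several times the energy of the older, less efficient facility.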
This is just not sustainable in terms of infrastructure cost, the impact on the environment, and the drain on the global resource pool.
Cutting-edge data centers have focused on power efficiency and cost-effectiveness. We are driving designs to be more cost-effective in installed capital as well as return on investment by getting the most out of our facilities, but we still do not have the right mindset. For the most part, we sell data center space based on power consumed, which provides little or no incentive to do better.

Future state
On the one hand, the future has to have a renewable focus, and the industry is certainly rising to that challenge. In 2018, Apple and Google both reported that they are meeting their electricity demands through renewable sources. Facebook committed to being 100% renewable by 2020. AWS is building wind and solar farms just to offset its data center power usage. Google this year announced the largest corporate purchase of renewables to date, bringing its total renewable energy portfolio to 5,500MW.
But renewable energy sources aren’t enough. We need to shift our focus to the servers themselves, to the computer technology that’s using so much power. Rather than providing more power to meet the demands of the servers, we need to develop servers that use less power.
We need to look at the computing technology inside the servers. We have been using the same silicon chip-based technology for 50 or 60 years. In 1965, Gordon Moore, co-founder of Intel, observed that the number of transistors on a chip, and with it computing performance, would roughly double every couple of years, and he was correct. But those gains are coming to an end. Moore's Law has all but run its course, and we are hitting the physical limits of transistor miniaturization.


What are the next technologies?
IBM, Google, and Intel are in a race to develop the quantum computer. Quantum computing can, in principle, process exponentially more data than a classical computer by exploiting the quantum state known as superposition, while using significantly less energy.
Quantum computers use cryogenic refrigerators to operate at extremely low temperatures. At these temperatures, superconductivity takes place and electricity is conducted with virtually no resistance and therefore virtually no power consumption or heat emission.
Other technologies are also being developed as potential replacements for classical computers, such as photonic computing, which uses light instead of electricity, and neuromorphic computing, which mimics the structure of the brain to build and operate computers in a fundamentally different, far more energy-efficient way.
While these technologies are possibly decades away from being commercially and practically able to replace classical computers, we need to keep developing them to offset the power-hungry path we are on.
If we accept the science of global warming, that alone should be reason enough to act. We live on a small planet with limited resources. If we do nothing, the chance to decide our own future will slip away, and we may be left with a situation we did not plan for.