By Angelos Angelou, CEO, Angelou Economics, and Allan Paddack
“There were five exabytes of information created between the dawn of civilization through 2003, but that much information is now created every two days.” Eric Schmidt of Google (2010)
Information has been called the oil of the 21st century, and managing it is big business. There are approximately three million data centers in the U.S., about one for every 100 citizens. They contribute, directly or indirectly, $1 trillion or more to the economy each year, over seven percent of the nation’s GDP. Given the relationship of information to the economy, it is not surprising that nearly every community across the country is actively seeking to attract a data center. Sure, there are jobs associated with a data center, but as critics have often noted, an automated facility creates few direct jobs. The real benefit to a community is the indirect economic impact of technology investment. That investment attracts other high-tech businesses, raises the quality of jobs and wages in the area and, of course, expands the tax base. And, at the rate of data center expansion seen in the first decade of the 21st century, every city and town could expect to have one by 2020.
Unfortunately, given emerging trends in the management, movement and storage of data, that’s probably not going to happen. The number of data centers is near its peak and will begin to decline around 2017, according to a recent report from International Data Corporation (IDC). Why? Because the economy is becoming less reliant on data and information? Nothing could be further from the truth; it is more reliant than ever. The number of data centers will decline because of advances in data storage and processing, technical obsolescence in older centers, and the need for greater data security and cost efficiency, which has driven the construction of larger, highly specialized facilities and a wave of consolidation and outsourcing.
The Business of Big Data
In the early days of automated data collection, before smartphones, streaming media and the explosive growth of what we have come to know as eCommerce in the late 1990s and early 2000s, companies tended to collect and manage their business-related data on premises, in colocated internal or distributed Point of Presence (POP) facilities managed by their own IT organizations. But as businesses and enterprises became more reliant on information for profits and growth, the number of data centers grew, as did the cost of data management. When the growing number of data centers began to stretch IT departments and their budgets too thin, corporate data centers were consolidated for efficiency, or the responsibility was outsourced to specialized data service providers (SPs). In both cases, the physical size of data centers had to grow substantially to accommodate the increased volume of information and required processing. Today, as more data services, including Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Software Defined Networks (SDNs), are accessed through the cloud, growth in data center capacity (as measured in total square feet) is expected to be about 15 percent in 2017, but more than 70 percent of that growth will be in large “hyperscale” facilities in excess of 225,000 square feet. The construction of new data facilities is actually expected to decline.
Technological Barriers
This consolidation and growth in data center size might have occurred sooner had it not been limited by the technology associated with data storage, power management, and equipment cooling. Data managers have always had to make tradeoffs between the volume of data and the amount of electrical power required to store it, cool the equipment and ensure uninterrupted operation. However, years of integrated circuit (IC) development have reduced the physical size of storage and processing components and made those tradeoffs much easier to make.
Reducing the physical size of ICs while increasing their capacity and power has enabled a decrease in the number of racks of equipment necessary for any given data function. This has simplified the cooling problem by reducing the number of heat-generating components for a given capability, or conversely, has greatly increased the capacity and power of any data center given an adequate cooling solution. The miniaturized components and systems we are so familiar with today would not have been possible without the work of two pioneers in the field.
Integrated Circuit Scaling
In 1965, Gordon Moore, a co-founder of Intel, first phrased what has become known as Moore’s Law. He postulated that the rate of technological progress would be such that the number of components per integrated circuit would double every year (a pace he later revised to roughly every two years). That prediction has largely been borne out in the shrinking size of semiconductor-based devices, whether for processing or data storage, which have gone from refrigerator-sized machines to ones that slip easily into a pocket or fit inside a watch.
But Moore’s Law also implied that the density of integrated circuits would increase, packing more components onto each chip. That breakthrough is credited to Robert H. Dennard, who spent his professional life as a researcher at IBM. In a paper published in 1974, he proposed that transistor dimensions could be scaled down without increasing power density, as long as voltages and currents were scaled down proportionately. Dennard Scaling underlies Moore’s Law and has resulted in the micro-circuits that power everything from hearing aids and smartphones to the Mars Rover.
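For readers who like to see the math, here is a toy calculation of that compounding. The starting count (roughly 2,300 components, on the order of the earliest microprocessors), the start year and the two-year doubling period are illustrative assumptions, not figures from this article.

```python
# Toy illustration of Moore's Law: components per integrated circuit
# doubling at a fixed cadence. The starting count, start year and
# doubling period are illustrative assumptions, not historical records.

def projected_components(start_count, start_year, year, doubling_period=2):
    """Project components per IC assuming a fixed doubling cadence."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    # assume ~2,300 components in 1971, doubling every two years
    print(year, f"{projected_components(2300, 1971, year):,.0f}")
```

Forty years of doubling every two years turns a few thousand components into a few billion, which is why the same shrinking curve shows up in everything from storage chips to processors.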
The electronics industry, building upon the work of Moore and Dennard, has created systems with increasing data capability in an ever-decreasing footprint. That equates to more computing power per square foot which, in turn, could have led to increasing the rack density in existing data centers without necessarily building new facilities. So why would companies build larger data facilities when the existing ones could simply have been repopulated with newer, smaller and more capable equipment? The answer is that increased computing density means increased heat and increased energy costs, and the older, smaller facilities have not been optimized to minimize those costs. The cooling solutions adequate for the smaller, denser ICs mentioned earlier don’t exist in older facilities; they must be designed into new facilities from the start.
Data Center Infrastructure Management (DCIM)
Data Center Infrastructure Management is an emerging field that has developed to solve the specialized problems associated with data and data centers. The field is broad because the data center manager must not only understand the latest technology and be an expert in the use of new data center management software tools, but must also have an intimate knowledge of the facilities engineering required to support the physical equipment.
Data companies and their data center managers must be doing a pretty good job. Between 2000 and 2005 there was a 90 percent increase in the data center sector’s consumption of electricity. Alarm bells went off in the environmental community, and rightly so, because growth in the sector showed no sign of slowing down. But between 2005 and 2010, as more data centers were being built, electricity consumption grew by only 40 percent. Astonishingly, from 2010 to 2015, as even more data facilities were being built, the increase in power consumption was only four percent, which is where consumption growth is expected to remain through 2020. Today, data centers account for approximately 1.5 percent of all electricity consumed in the U.S.
So, what drove this dramatic increase in energy efficiency? A number of improvements in the way data centers are designed, built and managed.
Circuit and Rack Design. Servers are often custom built for a data center according to the specifications of the operator. The newest ones may be delivered to the data center grouped in modularized containers and placed directly on concrete slabs with pre-existing internalized cable runs; the overhead cable trays, raised floors and clean rooms we typically picture in a data center will be absent. The new servers are much more capable, but they also draw more power, pushing the load per rack from about 7 kW to nearly 12 kW. That places a premium on stable power and equipment cooling.
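A quick back-of-envelope calculation shows why that jump matters. The per-rack figures come from the paragraph above; the rack count is a hypothetical assumption.

```python
# Back-of-envelope: the extra heat that denser racks add to a facility.
# The per-rack figures come from the text above; the rack count is a
# hypothetical assumption.
old_kw_per_rack = 7
new_kw_per_rack = 12
racks = 1_000  # hypothetical mid-sized facility

extra_mw = (new_kw_per_rack - old_kw_per_rack) * racks / 1000
print(f"Additional load to power and cool: {extra_mw:.1f} MW")
```

Several extra megawatts of load, all of which ends up as heat, is what makes power distribution and cooling the dominant design problems.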
Power Reliability and Distribution. Multiple sources of power are a necessity because a shutdown in a data center can be catastrophic to a business that relies on the web for internal communication or sales. An uninterruptible power supply (UPS), which stores electrical power and supplies it instantaneously to the data center in the event of a failure of the main power source, is a necessity, as is a backup source of main power. The newest data centers are being designed to use more renewable power because it is obtained and controlled locally and its operating costs are low. Wind and solar, however, can be inconsistent and seasonal, so new data centers are often located close to a river to take advantage of renewable hydroelectric energy.
Equipment Cooling. Using standard air conditioning to cool a confined space in which temperatures have been driven above 100°F is inefficient and prohibitively expensive. Instead, the new data centers are designed to take advantage of outside ambient air to aid the cooling process, and data center locations are often chosen to avoid temperature extremes and minimize year-round variation. Many new data centers also use water, preferably captured and recycled, in combination with the ambient air to cool the modularized racks adiabatically, with fans blowing a fine mist into the module. The heat energy in the room is dissipated in evaporating the water rather than raising the temperature of the interior.
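A rough sketch shows why evaporation is so effective at carrying heat away. The latent-heat value is an approximation and the heat load is a hypothetical assumption, not a figure from this article.

```python
# Why evaporative (adiabatic) cooling works: evaporating water absorbs
# a great deal of heat. The latent-heat value and heat load below are
# approximate, illustrative assumptions.
LATENT_HEAT_KJ_PER_KG = 2_400   # approx. latent heat of vaporization of water
heat_load_kw = 1_000            # hypothetical 1 MW of IT heat to remove

water_kg_per_hour = heat_load_kw / LATENT_HEAT_KJ_PER_KG * 3600
print(f"Water evaporated: ~{water_kg_per_hour:,.0f} liters per hour per MW")
```

On the order of 1,500 liters of evaporated water can carry off a megawatt of heat for an hour, which is why captured and recycled water paired with ambient air is so much cheaper than compressor-based air conditioning.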
DCIM Software. According to a Stanford University study done in 2015, about 30 percent of servers in a data center are “zombies.” That is, they are on and consuming power, but they are not being used for processing. Additionally, according to another study by McKinsey and Company, even an active server delivers only about 5-15 percent of its maximum computing output on average. Server inefficiency wastes energy, so these devices need to be shut down when not in use, and processing tasks optimized around the computational assets. According to the Natural Resources Defense Council (NRDC), “Much of the energy consumed by U.S. data centers is used to power more than 12 million servers that do little or no work most of the time.”
These are the tasks of DCIM software, which recognizes the zombie servers and automatically shuts them down or brings them up as the workload dictates, and distributes that workload to optimize server utilization. The NRDC estimates that optimization of data center server usage could save as much as 40 percent of the annual U.S. data center energy consumption.
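As a minimal sketch of one such DCIM-style task, the fragment below flags idle servers from made-up utilization data. The sample data, the threshold and the per-server power figure are illustrative assumptions, not values from the studies cited above.

```python
# Minimal sketch of one DCIM-style housekeeping task: flag "zombie"
# servers (powered on but doing little or no work) as candidates for
# shutdown. The sample data, threshold and idle power draw are
# illustrative assumptions.
utilization = {        # fraction of maximum computing output delivered
    "web-01": 0.42,
    "web-02": 0.01,
    "db-01": 0.12,
    "batch-07": 0.00,
}
ZOMBIE_THRESHOLD = 0.02      # below this, treat the server as idle
IDLE_WATTS = 200             # assumed power draw of an idle server

zombies = [name for name, u in utilization.items() if u < ZOMBIE_THRESHOLD]
print(f"Shutdown candidates: {zombies}, saving ~{len(zombies) * IDLE_WATTS} W")
```

Commercial DCIM suites do this continuously across thousands of machines, and also shift workloads so the servers that stay on run closer to full utilization.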
Site Selection Considerations
Energy
Advances in technology, the design of data centers, and the tools to manage them have drastically reduced the energy costs of a data center. But even though the PUE (power usage effectiveness, the ratio of total facility power to the power delivered to computing equipment) has fallen from about 1.6 just a few years ago to less than 1.2 today, energy and cooling still represent about 20 percent of a data center’s operating budget, and designing for those considerations drives the cost of data center construction. Minimizing those costs is a priority for data center companies as they strive for a perfectly efficient PUE of 1.0, and cheap, reliable electrical power is the prime driver. North Carolina has 43 data centers located across the state, largely drawn there by the abundance of cheap coal-fired power. However, cooler but temperate climates seem to be attracting much of the industry’s attention for the new hyperscale facilities. Microsoft, Yahoo, and Dell all have very large facilities in the small community of Quincy, WA. They were attracted there by the cool, stable climate and the promise of cheap, locally-sourced power from the Columbia River.
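For readers unfamiliar with the metric, the arithmetic behind PUE is simple; the facility numbers below are hypothetical.

```python
# PUE = total facility power / power delivered to IT equipment.
# The facility numbers below are hypothetical.
it_load_kw = 10_000      # compute, storage and network equipment
overhead_kw = 1_800      # cooling, power distribution, lighting

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")  # 1.18 here; a PUE of 1.0 would mean zero overhead
```

Every reduction in overhead power, whether from free-air cooling or smarter power distribution, moves the ratio closer to 1.0.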
- Are you located in a temperate zone?
- Does your area have access to reliable and inexpensive energy?
- Does your area have land available close to the source of electrical power?
Land
The growth of web-based commerce is still accelerating, as is the use of web-based applications and networking to support operations in all businesses. New web-based businesses, yet unimagined, will be created. By the same token, some businesses will be consolidated through mergers and acquisitions, and some will fail. The point is that flexibility is key: a data center must be scalable, able to expand or contract as the business environment dictates.
- Does your area have abundant land and can its acquisition be phased and scaled to fit the current and future needs of a data center?
Skilled Labor
Technology changes rapidly. Having access to an educated workforce armed with various IT certifications is not necessarily adequate. Having access to a workforce that is fluent in Science, Technology, Engineering and Mathematics (STEM) disciplines may be a better option for IT companies in the future.
- Is your area located close to a research university, facility, or existing technology cluster?
If the answer to all or most of these questions is yes, and you believe having a data center is right for your area, you have a real chance for future success. If the answer is no, and you still believe a data center is critical for your area, you will probably have to mitigate shortcomings through aggressive incentives and marketing.