New Study Diagnoses 30% of Servers as "Comatose"

Posted by Mary McCoy on Jun 10, 2015 6:00:00 AM


Servers don't just take up a lot of space. They also eat up a lot of a hosting provider's budget. When you're spending hundreds of thousands of dollars (if not more) on physical equipment, you need to ask yourself whether you are getting the maximum return on investment (ROI) in your data center.

A new study by Anthesis Group, a global sustainability consultancy, and Stanford researcher Jonathan Koomey finds that up to 30% of servers are in a "comatose" condition. That is, they are turned on and using electricity, but have not delivered information or computing services in at least six months.

Waste of Resources

Poor data center energy use is costly for any hosting provider. When you purchase servers, are you sure you'll use all of them? Consider the average cost of a server: roughly $3,000, according to Anthesis Group, excluding other capital and operating costs. For simplicity, let's say your data center houses 10,000 servers. What if 30% of them, or 3,000 servers, just sat there running without storing any data? At minimum, that's $9 million down the drain!

Even with a more conservative data center of 5,000 servers, a 30% comatose rate means 1,500 servers and $4.5 million in capital you're doing nothing with. Are you keeping these expenses in mind when planning your budget? This waste of resources is an issue for the hosting industry as a whole, as the study's startling statistic about comatose servers shows. Being savvier about your allocation of resources and capital will help keep your business from flat-lining.
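To make the math concrete, here's a quick back-of-the-envelope calculation using the illustrative figures above (the $3,000 average server cost and the study's 30% comatose rate); substitute your own numbers:

```python
# Back-of-the-envelope: capital tied up in comatose servers.
# Uses the illustrative figures from above, not your actual pricing.
COST_PER_SERVER = 3_000   # rough average hardware cost per server (Anthesis Group)
COMATOSE_RATE = 0.30      # share of servers the study found to be comatose

for total_servers in (10_000, 5_000):
    comatose = int(total_servers * COMATOSE_RATE)
    idle_capital = comatose * COST_PER_SERVER
    print(f"{total_servers:>6} servers -> {comatose:>5} comatose -> ${idle_capital:,} in idle capital")
```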

Now, perhaps you purchased more servers than you're currently using, but you expect an uptick in growth and plan to put the remaining servers to work soon. That's more understandable than buying equipment that consumes enormous amounts of energy and then routinely paying high electric bills for nothing.

To put things in perspective, two years ago the US Energy Information Administration estimated that a single server running 24 hours a day would cost roughly $732 to power. Now, multiply that cost by the number of unused servers and add it to the millions you're already wasting. Data center electricity consumption is only expected to increase in the next few years. Is your company's management properly accounting and scaling for these costs and making the best use of your data center?
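The electricity side is just as easy to estimate. A minimal sketch, assuming the roughly $732 figure above is an annual cost per always-on server (check it against your own utility rates and hardware):

```python
# Rough annual electricity cost of keeping comatose servers powered on.
# Assumes the ~$732 estimate cited above is per server, per year of 24/7 operation;
# adjust for your actual utility rates and hardware.
ANNUAL_POWER_COST_PER_SERVER = 732

for comatose_servers in (1_500, 3_000):
    annual_waste = comatose_servers * ANNUAL_POWER_COST_PER_SERVER
    print(f"{comatose_servers:,} comatose servers burn roughly ${annual_waste:,} a year in electricity")
```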


Other Ways to Correct Data Storage Inefficiencies 

These latest findings shine a light on data processes in need of improvement. Properly utilizing all IT infrastructure in your data center is just one example. How you currently store data may also be inefficient, though it's not always obvious. Explore the following solutions to ensure that you're utilizing your backup servers as efficiently as possible:

Disk Defragmentation 

Over time, as you save, change, and delete files, the data that makes up a single file ends up scattered across multiple locations on the disk. When you make changes to a Word document, for instance, those edits are physically stored in a different place than where you originally saved the file. This inevitably fragments your disk, which causes growing inefficiencies and performance issues as time goes on. Eventually, your servers look more like a poorly played game of Tetris than an economical storage device. What's worse is that, along with eating up more storage than necessary, fragmentation also slows down your restore process.

Luckily, solutions exist that address this inefficiency. Microsoft defines disk defragmentation as "the process of consolidating fragmented data on a volume (such as a hard disk or a storage device) so it will work more efficiently." This process essentially reorganizes your messy server data into a sensible, vacuum-tight unit, saving you time and money. Each data center has its own processes and requirements, so it's up to your stakeholders to decide how regularly you need to run the defragmentation process on each of your servers.
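If your storage servers run Windows, the built-in defrag utility can be scripted so that analysis happens on a schedule rather than whenever someone remembers. A minimal sketch (the volume list is hypothetical, and it needs to run from an elevated prompt):

```python
# Sketch: run Windows' built-in "defrag" analysis across a few volumes.
# Assumes a Windows server with the standard defrag utility and an elevated prompt;
# the volume list below is hypothetical -- substitute your own.
import subprocess

VOLUMES = ["C:", "D:"]

for volume in VOLUMES:
    # "/A" analyzes the volume and reports whether it needs defragmenting;
    # swap in "/O" during a maintenance window to actually optimize it.
    result = subprocess.run(["defrag", volume, "/A", "/V"],
                            capture_output=True, text=True)
    print(f"--- {volume} ---")
    print(result.stdout)
```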

Block-Level Backups

Does it take you forever to complete backups? It shouldn't. Backing up at the block level is faster and more efficient than performing backups at the file level. Since this process isn't as arduous on a server, hosting providers are able to back up more frequently - even as often as every 15 minutes - without any reduction in server performance.

A block-level backup application ignores files and does not care how many you have on a server. Rather than scanning the directory tree to figure out which files need to be backed up, a block-level backup reads blocks in the order they actually appear on the disk. This is a much more efficient process that prevents your servers from becoming overtaxed. Wouldn't you prefer a backup process that runs straight through the data as quickly as possible, rather than one that hunts for individual files like needles in a haystack?
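To illustrate the idea (not any particular vendor's implementation), here's a toy sketch that reads a disk image sequentially in fixed-size blocks and records a checksum for each block; the image path and 4 KB block size are illustrative assumptions:

```python
# Toy illustration of a block-level pass: one straight, sequential read over the
# device, with a checksum recorded per block. No file-system walk, no directory tree.
# "disk.img" and the 4 KB block size are illustrative assumptions.
import hashlib

BLOCK_SIZE = 4096  # bytes per block

def checksum_blocks(device_path):
    """Return a list of per-block SHA-256 digests, in on-disk order."""
    digests = []
    with open(device_path, "rb") as device:
        while True:
            block = device.read(BLOCK_SIZE)
            if not block:
                break
            digests.append(hashlib.sha256(block).hexdigest())
    return digests

block_map = checksum_blocks("disk.img")
print(f"Read {len(block_map)} blocks in a single sequential pass")
```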

Continuous Data Protection

Let's say you back up every Monday morning. That means that should a system error or disaster strike on Friday night, you've lost nearly a week's worth of your customers' business-critical data. Surely, you see the inefficiency (and potential headaches) inherent in this schedule. As I mentioned in discussing block-level backups, you need to be able to back up as often as possible without eating up your server's I/O operations in the middle of the day. In other words, you need a backup manager that offers some form of Continuous Data Protection (CDP).

Remember our example above about saving a Word document? Let's say that same document is updated every day. With an outdated backup model, your server would have to re-read the whole file every time you ran a backup, needlessly exhausting its resources and diminishing performance.

A backup solution with CDP, however, constantly monitors for changes so that, when it's time for a backup, it can simply read the blocks where data has changed rather than the whole file. Additionally, since you only run a full backup the first time, when you capture that initial snapshot, you save loads of disk space. Look for a backup manager with CDP so you can minimize your recovery point objective (RPO) by performing incremental backups at the block level, allowing you to painlessly back up more frequently and lose less data.
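Building on the block-map sketch above, an incremental pass can be pictured as a diff of per-block checksums against the previous snapshot, with only the changed blocks copied. A real CDP agent tracks changed blocks as writes happen rather than rescanning the disk, so treat this purely as an illustration (the helper names are hypothetical):

```python
# Illustration of an incremental, block-level pass: compare current block checksums
# against the previous snapshot's and hand over only the blocks that changed.
# A real CDP agent tracks changes as writes happen (e.g. via a filter driver)
# instead of re-reading the disk; this rescan only shows the end result.
import hashlib

BLOCK_SIZE = 4096

def changed_blocks(device_path, previous_digests):
    """Yield (block_index, block_bytes) for every block whose checksum changed."""
    with open(device_path, "rb") as device:
        index = 0
        while True:
            block = device.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if index >= len(previous_digests) or previous_digests[index] != digest:
                yield index, block
            index += 1

# Hypothetical usage: only the changed blocks are sent to backup storage.
# previous = checksum_blocks("disk.img")          # from the initial full snapshot
# for index, block in changed_blocks("disk.img", previous):
#     store_block(index, block)                   # hypothetical storage call
```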

 

Download "The Big Book of Backup"


Meet Mary! Mary McCoy is Continuum’s resident Inbound Marketing Specialist and social media enthusiast. She recently graduated from the University of Virginia (Wahoowa!) with a BA in Economics and served as digital marketing intern for Citi Performing Arts Center (Citi Center), spearheading the nonprofit’s #GivingTuesday social media campaign. Like her school’s founder, Thomas Jefferson, Mary believes learning never ends. She considers herself a passionate, lifelong student of content creation and inbound marketing.


Topics: Continuous Data Protection, server backup, block-level backup
