How to Achieve Higher Density with Your Backups

Posted by Mary McCoy on Dec 30, 2015, 11:38:27 AM


Once you have an effective backup system in place, you should be thinking about ways to achieve higher density: the number of servers a single backup server can protect. That’s good advice from R1Soft’s Product Manager, Ben Thomas.

Once you’ve invested in a backup system like Server Backup Manager (SBM), fine-tuning it can yield improved performance as well as higher density. In effect, that gives you more RAM, more storage, and more processor power to work with.

There are many tactics you can use to increase density, depending on the system you’re using and what you’re trying to accomplish. In SBM, a few tuning parameters are particularly useful for achieving higher density.

But first...

Ben warns, “Before you do anything, make sure you understand how your SBM is performing. Measure and record the system’s performance metrics, especially under normal working conditions.” He says there are many performance monitoring tools available; Munin is generally a good choice, but there are many others.
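If you want a scriptable snapshot to complement a tool like Munin, something along these lines works. This is a minimal sketch using the third-party psutil library (not part of SBM); run it at the same times each day under a normal workload so before-and-after comparisons are meaningful.

    import psutil  # third-party: pip install psutil

    # Sample CPU state over one second; on Linux the result includes iowait.
    cpu = psutil.cpu_times_percent(interval=1)
    load1, load5, load15 = psutil.getloadavg()
    disk = psutil.disk_io_counters()
    mem = psutil.virtual_memory()

    print(f"load averages: {load1:.2f} {load5:.2f} {load15:.2f} ({psutil.cpu_count()} cores)")
    print(f"%iowait: {getattr(cpu, 'iowait', 0.0):.1f}")
    print(f"disk ops since boot: {disk.read_count} reads, {disk.write_count} writes")
    print(f"memory in use: {mem.percent:.1f}%")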

As you modify your SBM server, Ben recommends giving each change time to be reflected in the performance metrics. That way you can see if the change has had the desired result.

Now let’s look at Ben’s SBM tuning recommendations.


Choose the Right Hardware

As we’ve noted in previous blog articles, right-sizing your backup system is important for functional and financial efficiency. Nonetheless, you need room to grow as you add clients and data. Therefore, although the two system requirements noted below are generic, they err on the side of oversizing the system (a quick sizing sketch follows the list):

  • Physical memory – 1 GB of RAM per open disk safe (concurrent backup/restores), with an additional 2 GB of RAM per terabyte of backups
  • CPU – 2 cores minimum, plus 1 core per concurrent backup/restore task
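Here’s a minimal sketch that turns those rules of thumb into concrete numbers; the input figures are examples, not measurements from a real deployment.

    def sbm_sizing(concurrent_tasks: int, backup_tb: float):
        """Apply the generic sizing rules above (they err on the side of oversizing)."""
        ram_gb = concurrent_tasks * 1 + backup_tb * 2  # 1 GB per open disk safe, plus 2 GB per TB
        cores = 2 + concurrent_tasks                   # 2-core minimum, plus 1 per concurrent task
        return ram_gb, cores

    # Example figures only: 8 concurrent backup/restore tasks, 10 TB of backups.
    ram_gb, cores = sbm_sizing(concurrent_tasks=8, backup_tb=10.0)
    print(f"Suggested minimums: {ram_gb:.0f} GB RAM, {cores} CPU cores")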


It’s also important to note these interactions between hardware selection and measuring storage performance:

  • The load number shouldn’t exceed the number of cores; ideally, it should stay below 70% of the core count. If it consistently exceeds the number of cores, either the system is not properly balanced with regard to RAM/CPU/IO for the workload, or the system is generally under-sized.
  • Transfers-per-second should be within 50% of the IOPS, and %iowait should be between 10% and 80%. A very high %iowait suggests CPU capacity is in excess of the workload (the CPU spends its time waiting on storage), which is desirable. A very low %iowait indicates the CPU is struggling to keep up with the I/O operations. (The short script below checks both rules.)
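Here’s a minimal sketch of that check, again using the third-party psutil library on a Linux host; the 70% and 10–80% thresholds come straight from the guidance above.

    import os
    import psutil  # third-party: pip install psutil

    cores = psutil.cpu_count()
    load1 = os.getloadavg()[0]  # 1-minute load average (Unix only)
    iowait = getattr(psutil.cpu_times_percent(interval=5), "iowait", 0.0)

    # Load should stay under the core count, ideally below 70% of it.
    if load1 > 0.7 * cores:
        print(f"load {load1:.2f} is high for {cores} cores; check RAM/CPU/IO balance")

    # %iowait should sit between 10% and 80% under normal working conditions.
    if not 10.0 <= iowait <= 80.0:
        print(f"%iowait {iowait:.1f} is outside the 10-80% band")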


Tweak Your Backup Policies

  1. Run backups frequently. Hourly is generally ideal. It’s counter-intuitive, but more frequent backups mean smaller deltas (the block changes captured), which are easier for the SBM to manage.
  2. Merge recovery points after each successful backup. A big merge job spanning multiple recovery points will strain the available resources (usually I/O) on the SBM server. If the merge takes a long time to process, it can delay other queued tasks and cascade into a system that is generally performing poorly.
  3. Keep recovery point and archive point retention to the minimum number that satisfies your recovery point objectives. More is not better: each recovery point slightly degrades the performance of the disk safe due to the overhead of the required metadata.
  4. Schedule your jobs so they are effectively staggered. Don’t let your SBM server sit idle; distribute your backup jobs as evenly as possible (see the sketch after this list).
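To illustrate that last point, here’s a minimal sketch that spreads job start times evenly across an hour. The job names are hypothetical, and real scheduling is done through SBM’s policy settings rather than a script; this just shows the arithmetic.

    from datetime import timedelta

    def staggered_offsets(jobs, window_minutes=60):
        """Spread job start times evenly across the scheduling window."""
        step = window_minutes / len(jobs)
        return {job: timedelta(minutes=round(i * step)) for i, job in enumerate(jobs)}

    # Hypothetical job names: six hourly jobs land 10 minutes apart.
    jobs = [f"server-{n:02d}-hourly" for n in range(1, 7)]
    for job, offset in staggered_offsets(jobs).items():
        print(f"{job}: start at +{offset}")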


Try the “Old Insert Method” for Block Ordering in the Disk Safe

Server Backup Manager gives users the option to revert change block ordering to the format used in version 5.2.2. This can significantly improve backup performance if you have aging disk safes. How well does this work? One customer had this to say as they began to implement the old block insert method:

“It was on two nodes overnight, the 100 plus small VM. Both nodes and our own backup server have shown impressive improvements in replication times, averaging 4 to 6 times faster. The small VMs are no longer queuing indefinitely, which is also great news.” 

Note that R1Soft engineers are now working on significant changes to the libraries used to manage and update disk safes – changes that will make the “old insert method” obsolete. And of course, if you have a newer installation of Server Backup 5.4.3, you don’t need this option anyway.


Tweak Your Memory Settings

Server Backup Manager runs on the JVM, so adjust the JVM heap size to fit your workload. You can also improve UI responsiveness by increasing the H2 database cache size.
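As a starting point, a sketch like this derives a candidate heap size from the host’s RAM. The 50% ceiling and the 8 GB cap are illustrative assumptions, not SBM guidance; check your version’s documentation for where JVM options actually live.

    import psutil  # third-party: pip install psutil

    total_gb = psutil.virtual_memory().total / 2**30

    # Assumption for illustration: leave at least half of RAM for the OS page
    # cache and the backup tasks themselves, and cap the heap at 8 GB.
    heap_gb = min(max(int(total_gb * 0.5), 1), 8)

    # -Xms/-Xmx are standard JVM flags; where to set them depends on your SBM install.
    print(f"suggested JVM options: -Xms{heap_gb}g -Xmx{heap_gb}g")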


Consider Tuning Options for the H2 Internal Database

The SBM uses this embedded database to record configuration information and activity records, keeping the most frequently used data in main memory. Tuning it helps control runtime characteristics; in particular, you can change the amount of memory used for caching.
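H2 takes its cache size in kilobytes, and stock H2 accepts it either as a parameter on the database URL or as a SQL statement. Both forms below are standard H2 syntax; where (and whether) your SBM version exposes them is something to verify before changing anything.

    def h2_cache_settings(cache_mb: int):
        """H2 expects its cache size in KB; return both standard ways to set it."""
        kb = cache_mb * 1024
        return (
            f";CACHE_SIZE={kb}",      # appended to the H2 database URL
            f"SET CACHE_SIZE {kb};",  # or issued as a SQL statement
        )

    # Example: raise the cache to 256 MB.
    url_fragment, sql = h2_cache_settings(256)
    print(url_fragment)
    print(sql)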


Implementing these suggestions will enable you to achieve higher density, making the most of your backup system. That boosts ROI as well as performance.


Meet Mary! Mary McCoy is Continuum’s resident Inbound Marketing Specialist and social media enthusiast. She recently graduated from the University of Virginia (Wahoowa!) with a BA in Economics and served as digital marketing intern for Citi Performing Arts Center (Citi Center), spearheading the nonprofit’s #GivingTuesday social media campaign. Like her school’s founder, Thomas Jefferson, Mary believes learning never ends. She considers herself a passionate, lifelong student of content creation and inbound marketing.


Topics: Server Performance, Backups, Server Backup Manager
