Storing up issues

Farid Ouazzani on how the rise of the Cloud has affected the approach organisations are taking to backing up data.

We all know that we should back up our data; that much is clear. Organisations, regardless of industry, create, alter and utilise data daily, in an ever-increasing number of ways and on a larger scale than ever before.

Data is the lifeblood of organisations. It represents countless hours of productivity, transactions, collaborations and more. Choosing what to protect should therefore be relatively straightforward, then?

Surely we want to back up everything? The answer, however, very much depends on your approach to recovering from any given incident of data loss and on whether your data holds varying degrees of importance within your organisation.

Granular file and folder backup

For example, if you suffered the total loss of a file server, how would you go about recovering it? You may choose to replace the hardware (if physical) or create a new virtual machine (if virtual) and then restore the files and folders after having rebuilt the base operating system (either from physical media or a system image).

If this approach were taken, you would likely back up the user data and nothing else. In this example, we can see that the “how” determines the “what” and, in doing so, focuses our backup strategy on what is important within our given scenario.

If we examine another scenario where the total loss of an application server has occurred, you could follow a similar approach to that outlined above. This is fine if the rebuild time is considered acceptable but what if it is not?

The acceptable time taken to recover data is commonly referred to as your recovery time objective (RTO) and is a measure used by organisations to determine worst case recovery windows.

To view it another way, it is a measure of how long an organisation can afford to be without access to its data or without “business as usual” capability.
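The RTO can be treated as a simple budget against which the estimated duration of each recovery step is checked. A minimal sketch of that comparison, with all step names and timings purely illustrative:

```python
# Hypothetical sketch: checking an estimated recovery process against an RTO.
# All step names and durations are illustrative, not from any specific product.

RTO_HOURS = 4  # worst-case recovery window the business will tolerate

# Estimated steps for a full file-server rebuild (hours)
rebuild_steps = {
    "provision replacement hardware/VM": 1.5,
    "reinstall base operating system": 1.0,
    "restore files and folders": 2.0,
}

estimated = sum(rebuild_steps.values())
meets_rto = estimated <= RTO_HOURS
print(f"Estimated recovery: {estimated}h, RTO: {RTO_HOURS}h, met: {meets_rto}")
```

In this made-up example the rebuild-and-restore approach overshoots the 4-hour RTO, which is exactly the situation where an alternative recovery method becomes attractive.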

The ability to satisfy recovery time objectives will largely depend on the backup approach being deployed. In scenarios where a total loss of any given server has occurred, it is likely that the aforementioned backup approach will result in a more cumbersome and possibly longer recovery process.

For this reason, some would argue that the above approach, whilst suited to granular recovery of data, is not well suited to complete server recovery.

Image-based backup

In such cases, an alternative approach can be taken involving the use of image backups of devices, facilitating faster full system recoveries.

The recovery is faster because all of the required data is contained in the image, which can be “spun up” when and where required.

Depending on whether the device is physical or virtual, you may see this approach referred to as Bare Metal Recovery (BMR, for physical recovery) or Virtual Disaster Recovery (VDR, for virtualised recovery).

Application level backup

In addition to file and folder backup and image-based backup for full systems, you may wish to consider application level backup for frequent backup of critical applications, as this will provide the ability to perform point-in-time recovery of the given application.

Many backup technologies feature “plugins” that effectively enable native backup for specific applications such as Microsoft SQL Server, Exchange or SharePoint.

Using granular file and folder backup and/or application backup in conjunction with image level backup provides the best of both worlds, as it allows for the most appropriate recovery option in any given scenario.

The ability to perform this optimal approach will largely depend on the backup solution implemented. Ideally, the solution should be simple, highly secure and automated, whilst supporting the three levels of recovery (files and folders, native application support and image level protection).

It should also have the ability to facilitate the recovery of data to any location in order to support the need to invoke disaster recovery.

The table below summarises the recovery options and the ideal use cases.

Recovery scenario: End user(s) request file restore(s)
Files and folders backup: IDEAL – restores specific files and folders without impacting anything else
Application level backup: –
Image level backup: NOT IDEAL – rolls ALL data back to a point in time

Recovery scenario: Application database corruption
Files and folders backup: May be IDEAL, depending on the application backup method
Application level backup: IDEAL – can recover a specific database to a point in time
Image level backup: NOT IDEAL – rolls ALL data back to a point in time

Recovery scenario: Total server loss
Files and folders backup: May be IDEAL if the RTO is not critical; slower than image level recovery
Application level backup: IDEAL when used in conjunction with image level recovery
Image level backup: IDEAL – restores the server image in full in the fewest number of steps

Choosing when and where to back up data is just as important as the method of backup. Choosing when to perform backups is usually determined by the organisation’s hours of operation.

The majority of organisations choose to perform backups “out of hours”, when system and network load and user activity are at a minimum. For most, this is likely to be overnight, and backups will need to have completed before the next working day commences. This period is referred to as the backup window.
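Whether a backup fits its window is a simple time calculation. A small sketch, with the window times and duration entirely hypothetical:

```python
# Sketch: checking that an overnight backup fits its backup window.
# The 22:00–06:00 window and the 7-hour duration are hypothetical examples.
from datetime import datetime, timedelta

window_start = datetime(2024, 1, 1, 22, 0)   # 22:00
window_end = datetime(2024, 1, 2, 6, 0)      # 06:00 the next morning
backup_duration = timedelta(hours=7)

# The backup fits if it finishes before the working day commences
fits = window_start + backup_duration <= window_end
print(fits)
```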

Traditional backup solutions

Traditionally, a backup schedule will have involved performing a combination of daily incremental backups and weekly full backups.

There are two main drawbacks to this approach. Firstly, it tends to be inefficient in terms of storage usage, regardless of the storage target used, as data is duplicated on a weekly basis in the form of full backups.

Secondly, it makes for more cumbersome point-in-time recoveries, as changes have to be rolled forward or backward to the appropriate point in time.
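The cumbersome part is the restore chain: recovering to a given day needs the most recent full backup plus every incremental taken since. A sketch of that dependency, with the naming scheme invented for illustration:

```python
# Sketch of the restore chain in a weekly-full / daily-incremental scheme:
# restoring to a given day requires the last full backup and every
# incremental since it. Backup names here are purely illustrative.

def restore_chain(target_day, full_every=7):
    """Return the backups needed to restore to `target_day` (day 0 = first full)."""
    last_full = (target_day // full_every) * full_every
    chain = [f"full@day{last_full}"]
    chain += [f"incr@day{d}" for d in range(last_full + 1, target_day + 1)]
    return chain

print(restore_chain(10))
# the full from day 7 plus the incrementals for days 8, 9 and 10
```

The further the target day sits from the last full backup, the longer the chain, and every link must restore successfully for the recovery to complete.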

Modern Cloud Backup

Modern backup solutions that have been designed specifically with the Cloud in mind often feature block level “delta” backup and restore.

These technologies also feature compression and deduplication, minimising the amount of data that needs to be sent to the backup target location, which ultimately saves bandwidth, time and money.
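Deduplication is commonly done by hashing fixed-size blocks and only transmitting blocks whose content has not been seen before. A minimal sketch of that idea (the tiny block size is for illustration only; real products typically work in kilobyte-sized blocks):

```python
# Minimal sketch of block-level deduplication: only blocks whose content
# hash has not been seen before are sent to the backup target.
# Block size and data are illustrative.
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; real systems use far larger blocks

def dedup_blocks(data, seen):
    """Split `data` into blocks; return only the blocks not already stored."""
    new_blocks = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:          # duplicate blocks are skipped
            seen.add(digest)
            new_blocks.append(block)
    return new_blocks

store = set()
first = dedup_blocks(b"AAAABBBBAAAA", store)   # 3 blocks, one duplicate
second = dedup_blocks(b"AAAACCCC", store)      # only the CCCC block is new
print(len(first), len(second))
```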

The main advantage of delta based backup is performance.

Compared with traditional methods, it is extremely fast: changes are tracked within a journal, and if data has not changed it does not need to be included in the next backup.

By taking this modern approach to backup, it is possible to perform smaller, more frequent backups, which translates into being able to recover to a point in time closer to “now”.
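One way to picture delta backup is a journal that records a hash per block; on the next run, only blocks whose hash differs are backed up. A hypothetical sketch of that mechanism, not modelled on any particular vendor:

```python
# Sketch of block-level "delta" backup with a change journal: each block's
# hash is recorded, and only blocks whose hash differs from the journal
# entry are included in the next backup. Entirely illustrative.
import hashlib

BLOCK = 4  # tiny block size for demonstration purposes

def delta_backup(volume, journal):
    """Return (offset, block_data) pairs that changed since the last run."""
    changed = []
    for i in range(0, len(volume), BLOCK):
        data = volume[i:i + BLOCK]
        h = hashlib.sha256(data).hexdigest()
        if journal.get(i) != h:        # new or modified block
            journal[i] = h
            changed.append((i, data))
    return changed

journal = {}
full = delta_backup(b"AAAABBBBCCCC", journal)   # first run: every block is new
incr = delta_backup(b"AAAAXXXXCCCC", journal)   # only the middle block changed
print(len(full), len(incr))
```

Because the second run only carries one block rather than the whole volume, runs can be scheduled far more frequently, which is what shrinks the gap between “now” and the most recent recoverable point.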

Where to Backup to

We have briefly explored the options regarding when to back up, but what about where? That often depends on what resources are available to you. For example, do you back up to disk, tape or the cloud? Do you need a local copy?

Do you need an off-site copy for disaster recovery or legal compliance? Answering these questions helps to formulate a set of backup requirements.

With cloud backup, it is now possible to have a local copy of your data on hardware-agnostic storage, an encrypted off-site backup and built-in disaster recovery capabilities all in one.

Such solutions also provide restore anywhere capability for disaster recovery.

Farid Ouazzani is technical consultant at Redstor
