WannaCry NHS attack - lessons for data recovery strategies
A fresh look at data protection and backup best practice, particularly when it comes to ransomware.
The WannaCry ransomware attack crippled thousands of organisations in 150 countries around the globe, most notably the NHS. Trusts were quick to implement their tried and tested disaster recovery strategies and many hospitals were able to return to normality within a matter of days, which is commendable considering the scale and nature of the attack. But this latest cyber attack has prompted us to take a fresh look at data protection and backup best practices, particularly when it comes to ransomware.
New threats – when backup isn’t enough
The age of malware has added a whole new threat to NHS IT systems. We know that ransomware only works if the damage is reversible. As a cyber criminal there’s little point holding data to ransom and demanding payment if the data is irretrievable. And we know that the perpetrators of these attacks choose their victims carefully, targeting organisations that can least afford downtime and, as a result, are more likely to pay the ransom.
Robust data protection is essential in the battle against cyber attacks but, increasingly, we’re seeing that having a single backup strategy is not sufficient and, depending on the storage media, potentially even part of the problem. Historically, there was little risk to backups themselves, yet ransomware adds a new dimension that threatens and attacks not just the data, but also the backups, as was the case with the WannaCry attack.
Because the risks to NHS systems have evolved, the precautions to protect against new threats are evolving too. Similarly, as the drivers for backing up data change, the way backups are performed should reflect this.
When backup is actually part of the problem
Today many Trusts use online de-duplication devices as their primary backup media. These devices can store many generations of backup in a small footprint at a reasonable cost. They are convenient to use and quick to restore from – no fetching tapes from off-site storage. But they may actually be more vulnerable to malicious or malware attack, as demonstrated by WannaCry’s proficiency at encrypting files; and, on their own, they present a single point of failure. While you can mitigate the single point of failure by replicating the device to another location, that does not protect against deliberate corruption. This is a general point to remember: resilience features like replication are great if one piece of hardware fails, but they are no defence against deliberate corruption. They simply ensure that the data is perfectly corrupted in multiple locations.
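The point about replication faithfully mirroring corruption can be shown with a toy example. This is a minimal Python sketch, not a real backup product: the file names are made up, and a simple XOR stands in for ransomware encryption.

```python
import shutil
import tempfile
from pathlib import Path

# A "primary" backup file and its replica at a second location.
workdir = Path(tempfile.mkdtemp())
primary = workdir / "backup.dat"
replica = workdir / "replica.dat"
primary.write_bytes(b"patient records")

def replicate(src: Path, dst: Path) -> None:
    # Replication blindly mirrors whatever is on the primary;
    # it has no notion of whether the content is healthy.
    shutil.copyfile(src, dst)

replicate(primary, replica)
assert replica.read_bytes() == b"patient records"  # healthy copy mirrored

# Simulate ransomware "encrypting" the primary (XOR as a stand-in).
corrupted = bytes(b ^ 0x5A for b in primary.read_bytes())
primary.write_bytes(corrupted)

# The next replication cycle dutifully copies the damage.
replicate(primary, replica)
assert replica.read_bytes() == corrupted
```

After the second replication both copies hold the encrypted data; only a copy taken before the attack, and kept out of reach of it, can be restored from.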
Typically, de-duplication devices look just like network file servers, presenting what appears to be a regular file system (for the technically minded, an SMB share). Unfortunately, that is just the sort of thing that ransomware looks for – network file servers are where most sites keep their data, so the ransomware seeks these out and encrypts them. In effect, you may have made your backups convenient and easy to use, but also easy to damage and vulnerable to malware like WannaCry. What better way for a cyber criminal to incentivise an organisation to pay up than by corrupting the backups as well as the data?
Lessons from history
Traditionally, data backups were written to tape and stored offsite. While there were, and of course still are, physical threats to backups, such as damage to hardware and disasters such as fires and floods, they were not vulnerable to cyber attack. An offsite tape in a fire-safe with the write-protect switch set remains the safest form of backup against any threat, cyber or otherwise. We refer to it as the gold standard. Offline tape is a secure backup medium, but of course it is a pain to use. Tapes have to be located, loaded and positioned, and can only be used by one process at a time. For this reason, many Trusts want to move away from tape, but they haven’t always considered the potential vulnerability of the disk-based backups that replace it.
Rather than moving away from tape completely, we at BridgeHead feel that offline media should supplement online backups and provide a second layer of protection. Backups are best protected when they are maintained offline from production environments, so that ransomware cannot corrupt the backup copies. So how can you get the best of both convenient quick access and secure offsite protection?
We recommend a first-stage backup that is easy to restore from, though less secure, with a ‘cascade’ on to tape or similar offline removable media. Because the ‘cascade’ copying of the data happens entirely on backup servers, it does not impact production systems. This is commonly called disk-to-disk-to-tape. The final copy doesn’t have to be tape, but it must be safe against malware, secure and offsite. Tape is arguably still the simplest option, though some cloud storage could be considered. The disk copy, most likely de-duplication, is used for quick, convenient restores, while tape is used for site disasters or if the de-dupe device itself is physically damaged or corrupted. The first layer might be a backup to a de-duplication store or, as we commonly do at BridgeHead Software, a storage array snapshot that is then cascaded onto tape, or similar offline media, for long-term and more robust backup.
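The two-stage cascade described above can be sketched in a few lines of Python. This is purely illustrative: the directory names are invented, and a read-only directory stands in for a tape with its write-protect switch set.

```python
import shutil
import stat
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())
production = base / "production"    # live data
stage1 = base / "dedupe_store"      # fast online first-stage backup
stage2 = base / "offline_media"     # stand-in for tape / offline copy
production.mkdir()
production.joinpath("records.db").write_text("critical data")

def backup_to_disk(src: Path, dst: Path) -> None:
    # Stage 1: quick online backup, used for convenient restores.
    shutil.copytree(src, dst)

def cascade_to_offline(src: Path, dst: Path) -> None:
    # Stage 2: copy FROM the backup store, not from production,
    # so production systems are never touched. Then strip write
    # permission, mimicking a tape's write-protect switch.
    shutil.copytree(src, dst)
    for path in [dst, *dst.rglob("*")]:
        mode = (stat.S_IREAD | stat.S_IEXEC) if path.is_dir() else stat.S_IREAD
        path.chmod(mode)

backup_to_disk(production, stage1)
cascade_to_offline(stage1, stage2)
```

The key design point is that the second copy is made from the first-stage backup rather than from production, and ends up in a state that online processes, including malware, cannot overwrite.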
There is no single best practice when it comes to backup, but regularly considering, planning and testing disaster recovery strategies is essential to keeping up with evolving threats and minimising the impact of downtime on patient care. No single backup medium meets all the necessary requirements, so there is often a compromise and therefore a need for multiple methods and multiple layers of protection: from storage array snapshots, to online de-duplication stores, and finally to secure offsite media.
Even with the best firewalls and protection in place, we must accept that cyber attacks can and will still happen and some will get through the defences. Reflecting on the WannaCry attack, we urge Trusts to think of an offline backup as being like an insurance policy – “We hope not to have to make a claim, but it’s essential to be covered in the event of a major disaster.”
Plan and Practice
The final reminder is to have a written plan, make sure all the IT staff know where the plan is, and practise that plan. You do not want to be working out what to do in the middle of a crisis; that’s how mistakes happen and a crisis becomes a disaster.
Gareth Griffiths is chief technology officer at BridgeHead Software