What happens during a cyberattack on critical infrastructure?

Leaders at the National Cyber Security Centre lift the lid on the impact of the Triton malware assault and the lessons learned from it

A participant in a cyber defence exercise run by NATO’s Allied Command Transformation analyses real-time threat information. Credit: SHAPE NATO/Public domain

If you asked people to conjure a mental image of the effects of a cyberthreat, most would probably call to mind a scenario involving hacked online accounts or stolen credit card details. While clearly traumatising for victims, that kind of damage can, thankfully, often be undone with a few phone calls or emails.

A quick search for news headlines related to the Triton malware, which is widely reported to have hit a Saudi petrochemical plant in 2017, reveals that this is a different beast altogether.

“Triton is the world’s most murderous malware, and it’s spreading.”

“Murderous malware: Can a computer virus turn deadly?”

“Triton is a new malware ‘deliberately’ designed to put lives at risk.”

The motives of the Triton attackers are not known for certain, but it is safe to assume that, in targeting a critical infrastructure facility, the intent was to cause serious harm. The attack was aimed at the refinery’s operational safety systems, rather than its IT systems.

Deborah Petterson, deputy director for critical national infrastructure at the National Cyber Security Centre (NCSC), explains that these systems – “the ones that can actually go ‘bang’ if they go wrong” – are designed to offer additional layers of protection beyond an organisation’s cyber defences.

“If safety instruments see that actual physical characteristics – the temperature or the pressure – start to go wonky, then the system will safely shut down,” she says. “So why this particular incident (Triton) was interesting is that it is the very first time we saw an adversary go for that safety system. You see, if you mess with that safety system, then that safe shutdown might be at risk.”
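
To make that mechanism concrete, here is a minimal, purely illustrative Python sketch of the kind of safety-instrumented logic Petterson describes: an independent loop that trips a safe shutdown when physical readings go out of bounds. The thresholds, sensor reads and shutdown routine are all hypothetical placeholders, not vendor code.

```python
import random

# Hypothetical trip points – real values come from a plant's process
# safety analysis, not from this sketch.
TEMP_LIMIT_C = 480.0
PRESSURE_LIMIT_BAR = 95.0

def read_temperature_c() -> float:
    # Stand-in for a hardwired temperature sensor read.
    return random.uniform(400.0, 500.0)

def read_pressure_bar() -> float:
    # Stand-in for a hardwired pressure sensor read.
    return random.uniform(80.0, 100.0)

def trigger_safe_shutdown() -> None:
    # Stand-in for the actuation that drives the process to a safe state.
    print("Out-of-range reading detected: initiating safe shutdown")

def safety_loop() -> None:
    # The safety system watches physical characteristics only; if they
    # "go wonky", it shuts the process down regardless of what the
    # control or IT layers report.
    while True:
        if read_temperature_c() > TEMP_LIMIT_C or read_pressure_bar() > PRESSURE_LIMIT_BAR:
            trigger_safe_shutdown()
            break

if __name__ == "__main__":
    safety_loop()
```

The point of the design is that the safety layer sits apart from ordinary IT: it cannot be talked out of shutting down. Which is precisely why an adversary who compromises it can undermine that last line of defence.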

Having got in, the attackers seemingly “made a mistake” that exposed their operation to cybersecurity firm FireEye and to Schneider Electric, the firm behind the safety system in question.

“It is FireEye’s belief that they had not meant to expose themselves at that stage,” Petterson says. “It had been two years of work… building up to the point at which it failed.”

This likely represented something of a lucky escape – one that the NCSC and its industry partners wish to learn from. Petterson’s advice for operators of critical infrastructure is to begin with a thorough examination of their operational systems.

“The first [step]… is actually knowing where their safety systems are, and how they are connected… The one in this example was 15 years old – when is the last time you actually looked at your risk management around that?” she says. “Have you got the detection systems where you can go searching for those indicators of compromise? [We are] working on getting the intelligence out there – but if you can’t feel that intelligence, if you haven’t got the monitoring and detection systems that you need, then it is going to be useless. When that information is out there – can you deploy it?”
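
As a rough illustration of that final step – deploying shared intelligence – the sketch below matches locally observed artefacts against a list of indicators of compromise (IoCs). The indicator values and data structures are invented for the example; a real deployment would consume a threat feed and a monitoring pipeline.

```python
# Example indicator values only – a real list would come from an advisory
# or a threat-intelligence feed; nothing here is a genuine Triton IoC.
IOC_FILE_HASHES = {"6c39c8f1f74b7a6a3f1b0c5d9e2a4f88"}
IOC_REMOTE_ADDRESSES = {"198.51.100.23"}  # RFC 5737 documentation range

def match_iocs(observed_hashes: set[str], observed_addresses: set[str]) -> list[str]:
    """Return an alert for every local observation that matches a known indicator."""
    alerts = [f"File hash matches known-bad indicator: {h}"
              for h in observed_hashes & IOC_FILE_HASHES]
    alerts += [f"Connection to known-bad address: {a}"
               for a in observed_addresses & IOC_REMOTE_ADDRESSES]
    return alerts

# Usage: feed in whatever the monitoring layer has actually collected.
print(match_iocs(
    observed_hashes={"6c39c8f1f74b7a6a3f1b0c5d9e2a4f88"},
    observed_addresses={"203.0.113.7"},
))
```

This is Petterson’s point in miniature: the matching logic is trivial, but it is useless unless the organisation is already collecting the hashes and network connections to match against.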

But, as any security professional can no doubt attest, even the most up-to-date system cannot protect against the threat of human error – as the Triton attack demonstrated.

“This system is a very old one, and you had to stick a key in it,” Petterson says. “Someone left the key in, turned to ‘program’.”

She adds: “People talk about security through obsolescence – well, this was 15-year-old kit; it had been there 10 years, and been designed five years before that. What it shows is that, if someone is prepared to put the effort in, they can learn this stuff – and, in this case, they have reverse-engineered the protocols in order to get in there. That argument is never a good one, when you say ‘my kit is so old it’s fine’ – actually, no, it’s not.”

Ian Levy, technical director at the NCSC, says that a major takeaway from the attack should be the need for cybersecurity and operational safety people to work harmoniously together.

He says: “There is a mantra in the cybersecurity community, that says ‘safety people will never patch’, because they are too scared to ever patch anything. And there is a mantra in the safety community that says ‘cybersecurity people are cowboys’, because they patch really quickly. Neither of those things are true… it is about trying to bring those safety and security cultures together, so they can have a common conversation.”

Zero consequences
Triton was an example of a zero-day attack. Such assaults exploit vulnerabilities that the target of the attack was not previously aware of – and for which there is, consequently, no pre-existing fix or patch. They are so called because there is no time – ‘zero days’ – between the discovery of a weakness and its exploitation by hostile actors, which makes them much more difficult to respond to than assaults on known weaknesses and, as a result, potentially much more destructive.

Due to their severity, and the fact that they are often aimed at high-profile targets, zero-day attacks typically attract a great deal of attention and public scrutiny.

Paul Chichester, director of operations at the NCSC, tells PublicTechnology that, while such breaches remain “extremely rare”, it is important to heed the warning that Triton represents.

He says: “Triton is very much a wake-up call… we are trying to get people to realise that… there is a lot of talk about these things not being possible, and you hear about cyberattacks and think ‘that couldn’t happen’. This is a real case where it did. Clearly, there was an actor, with an intent – why was somebody on a safety system on a refinery? You can make up your own theories around that.” 

Chichester adds: “What I certainly don’t want to do is get people thinking that we see this all the time… if you think about the complexity of writing malware to be on a controller for a safety control system – that takes a huge amount of investment, expertise, and knowledge. Luckily, today there aren’t that many adversaries with that capability, but I would certainly say that we see a number of adversaries that we track developing their maturity along that spectrum… There are quite a number of actors who are on that spectrum – but there are very few who are at the Triton end of it.”

Nevertheless, operators of critical infrastructure must ensure any vaguely suspicious or unusual activity is thoroughly analysed, according to Chichester. He urges them to “not just stop at the obvious answer”, but to work with their IT providers and systems manufacturers to reach a comprehensive understanding of anything out of the ordinary.

“The challenge for us is how many incidents out there have not been investigated to the depth that this one was, to prove that it was actually malware?” Chichester says. “How many normal failures are there that aren’t investigated?”
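
One way to read that advice is as a triage rule: an unexplained trip or fault opens a case that cannot be closed as a ‘normal failure’ until a root cause is established. The Python sketch below illustrates that rule; it is a hypothetical example, not an NCSC process.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    description: str
    root_cause: str | None = None
    status: str = "open"

def close_incident(incident: Incident) -> None:
    # Refuse to close anything that has not been investigated to a root cause –
    # "it was probably a normal failure" is not an acceptable resolution.
    if not incident.root_cause:
        raise ValueError(
            f"Cannot close '{incident.description}': no root cause established. "
            "Escalate to the vendor or security team for deeper analysis."
        )
    incident.status = "closed"

trip = Incident(description="Unexplained safety-controller trip, unit 4")
try:
    close_incident(trip)  # rejected: would otherwise be filed as a 'normal failure'
except ValueError as err:
    print(err)

trip.root_cause = "Corrupted controller memory confirmed by forensic analysis"
close_incident(trip)      # permitted once a root cause is recorded
print(trip.status)        # -> closed
```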

This article is part of the Government Cybersecurity Index – two weeks of content on PublicTechnology focused on the state of data protection and security across the public sector. Look out in the coming days for more exclusive research, insight, comment, and analysis, and click here to read our exclusive research revealing which government department suffers far more data breaches than any other.

Sam Trendall
