A study from the Department for Science, Innovation and Technology has found that security professionals focused on attack simulations are, for various reasons, taking a circumspect approach to new tools
Cyber red teams that emulate the methods of attackers are “deeply sceptical” of the potential impact of artificial intelligence in improving organisations’ cyber defences, government-commissioned research has found.
In December of last year, the Department for Science, Innovation and Technology retained cyber consultancy Prism Infosec to undertake a research exercise intended to explore “how the commercial offensive cyber sector is integrating emerging technologies into their commercial offerings and what the implications are of this integration”.
The study found that the most high-profile of all such technologies is anticipated to have only limited impact on the ability of red teams – which exist to replicate and simulate attackers’ methods – to probe organisations’ security set-ups.
“Overwhelmingly, our interviews demonstrated the sector remains deeply sceptical of the promises of AI, considering many of its capabilities overstated and overused in products, creating a confused environment as to its true potential and capabilities,” says the newly published research report. “It was perceived that the most common use by threat actors for AI at this time was to deliver more sophisticated social engineering attacks. Aside from the ethical issues of such use, interviewees highlighted risks of data privacy, large costs, and the security of public models as reasons [hampering] widescale adoption of the technology in their current offerings.”
However, red teams also reported expectations that AI could, in due course, become a tool in their arsenal. But, in the meantime, offensive cyber ops will rely on professional expertise, rather than automation.
“There was optimism that, in time, these factors would be addressed by more accessible models which can be hosted and tuned privately by cybersecurity firms and then used for a variety of commercial offerings from attack surface monitoring through to vulnerability research and prioritisation. Until the technology reaches this level of maturity however, the red team element of the sector will continue to focus on the manual specialised human efforts for the delivery of commercial offensive cyber services.”
The study also reports that other “surprising results [included] the lack of discussion around technologies such as blockchain or cryptocurrencies.”
Respondents have, instead, found that “adoption and migration into cloud-based architecture has had a larger impact [on] services being offered by the commercial red teams”.
The report adds: “It has provided changes to traditional infrastructure, enforcing development of new tooling and practices as the sector has adapted to how client organisations have migrated into the cloud following the global coronavirus epidemic, Covid-19, advancements in detection and response capabilities, [and] changes in real-world threat actor behaviours.”
Offensive cyber professionals also noted that their sector has not kept pace with threats facing organisations working with non-Windows computing environments. This has also been a factor in stymieing the use of AI, according to survey participants.
“It was felt that investment into developing offensive cyber tools and capabilities for MacOS, Linux, Unix, Android, iOS, etc. had lagged significantly behind Microsoft Windows estates,” the research says. “[This was], in part, due to the prevalence of that operating system in wider society. As a result, the lack of published research and tools into these was seen as having a hampering effect [on] using technologies like AI to help develop new capabilities.”
The study concluded with a finding that red teams feel that – following a tipping of the scales towards the work of their colleagues in blue teams, focused on cyber defence – the IT security space is currently fairly “balanced”. The increased focus on defensive posture has led to a greater degree of circumspection among cyber professionals – which, in turn, is also presenting a barrier to the use of new and innovative tech.
“This perceived increased speed of defensive adaptation was ultimately leading to a more cautious approach to knowledge sharing among offensive security practitioners so as to avoid techniques being burned prematurely and inhibiting operations,” the report says. “Analysis of this topic from the interviews revealed that the effective capability bar for offensive security professionals was rising, requiring deeper coding knowledge, automation expertise, and adaptability. Traditional offensive techniques were becoming less effective, which was forcing red teams to find short-term gaps in defences rather than relying on old exploits.”
It adds: “Interviewees expressed the conclusion that offensive cyber operators were needing to constantly evolve as security solutions are becoming more sophisticated and harder to bypass. This means knowledge of innovative tools and techniques may become more restricted, and only become public once they have been effectively neutralised by defences.”
The civil service operates its own Government Security Red Team – also known as OPEN WATER – which tests the defences of departments by mimicking the work of cyberattackers. In late 2022, PublicTechnology exclusively reported on the team leading a campaign of hostile digital and in-person reconnaissance designed to identify vulnerabilities.
Numerous agencies across government – including the Ministry of Defence and the Government Digital Service – have also hired the services of external cyber firms to perform attack simulations or other red-team exercises.

