Having completed a review into how and why sensitive information was leaked ahead of the November 2025 Budget, government should now ask whether its systems are designed to learn from such setbacks, according to cyber expert Vsevolod Shabad
The government’s Budget Information Security Review has largely been read as a serious but contained incident: a publication misconfiguration, followed by tighter controls and a narrower circle of access to market-sensitive information.
That reading is understandable. When something goes wrong in a high-stakes system, the instinct is to reduce exposure, formalise controls, and make recurrence less likely.
But the more important question raised by the review is not whether this specific incident could have been prevented. It is whether government systems are designed to learn fast enough when prevention inevitably fails.
In complex, insider-heavy environments, failure is not exceptional. What differentiates resilient organisations is not that they never fail, but that they reliably recover with a better understanding than before.
The National Cyber Security Centre’s investigation deserves to be recognised for what it is: a pragmatic, professional technical response carried out by people whose job is to contain harm and stabilise systems.
That work matters. When something breaks, someone has to fix it — and fix it quickly.
But mature security and safety practice draws a deliberate distinction between incident response and organisational learning. They are neighbouring responsibilities, not interchangeable ones.
Response focuses on containment and recovery.
Learning focuses on understanding why the incident occurred, which assumptions failed, and what must change so the system does not simply reset to its previous state.
When these roles blur, organisations often become very good at remediation — and quietly weak at learning.
To illustrate this dynamic, we do not need to reach for national emergencies or once-in-a-generation shocks.
The power outage at Manchester Airport in 2024 — which disrupted multiple terminals, led to the cancellation or delay of dozens of flights, and left thousands of passengers stranded for hours — is sufficient to show the scale at which these tensions already operate. The response itself was appropriate: systems were made safe, services were restored, and established procedures were followed.
The relevance of incidents at this level is not that they demonstrate catastrophic failure or that learning did not occur. It is that they show how quickly complex organisations must move from disruption to stabilisation — often leaving little space, time, or data for systematic reflection unless learning has been deliberately designed in advance.
If structural learning is difficult at this scale, it will not become easier in larger crises.
Designed to forget
One detail in the Budget Information Security Review deserves particular attention.
The NCSC was unable to examine whether similar incidents had occurred earlier because relevant web logs were retained for only 12 months. This was not an investigative failure. It was an architectural choice embedded in the system itself.
The implication is straightforward. Even after a serious incident, the system lacked the historical context needed to determine whether the event was isolated or part of a broader pattern.
Without sufficient institutional memory, organisations cannot reliably reconstruct events, test hypotheses, or recalibrate their understanding of risk. Each incident is treated as new, even when it may not be.
What appears as a modest operational saving can therefore become a structural weakness: a system that forgets just quickly enough to prevent cumulative learning.
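To make the consequence concrete, here is a deliberately simplified sketch in Python, using hypothetical dates and a hypothetical event type rather than anything taken from the review itself. A check for precedents can only see as far back as the retention window allows:

```python
from datetime import date, timedelta

# Hypothetical timeline: a similar misconfiguration occurred roughly
# 18 months before the incident under investigation.
log_events = [
    {"date": date(2024, 5, 20), "type": "publication_misconfiguration"},
    {"date": date(2025, 11, 10), "type": "publication_misconfiguration"},
]

def visible_precedents(events, incident_date, retention_days):
    """Return earlier logged events that are still inside the
    retention window and therefore visible to an investigation."""
    cutoff = incident_date - timedelta(days=retention_days)
    return [e for e in events if cutoff <= e["date"] < incident_date]

incident = date(2025, 11, 10)

# With 12 months of logs, the 2024 precedent has already been purged:
print(visible_precedents(log_events, incident, retention_days=365))  # prints []

# With 24 months retained, the same query reveals a pattern:
print(visible_precedents(log_events, incident, retention_days=730))  # prints the 2024 event
```

The cliff edge is absolute: whatever falls outside the window is not harder to find, it is gone, and the question of precedent becomes unanswerable.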
Modern security frameworks increasingly assume that anomalies are inevitable. The question is no longer whether systems will deviate from expectations, but whether those deviations can be detected and understood against a continuously updated baseline.
This is the logic behind zero trust as described in NIST guidance (SP 800-207).
Trust is not granted once and withdrawn later; it is constantly recalibrated based on observed behaviour over time. That recalibration depends on visibility, historical context, and the ability to distinguish meaningful anomalies from normal variation.
Without sufficient institutional memory — including long-term logs and behavioural history — this model cannot function. There is no stable baseline to update, and no reliable way to determine whether a deviation is genuinely anomalous or simply undocumented normality.
In that sense, zero trust is not primarily a control philosophy. It is a learning architecture.
Systems designed to forget quickly undermine it by design.
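A minimal sketch, again with invented numbers and thresholds, shows why. Suppose trust is recalibrated by comparing today's activity against a rolling baseline of past behaviour; the quality of that judgement depends entirely on how much history survives:

```python
from statistics import mean, stdev

def is_anomalous(history, today, min_history_days=90, z_threshold=3.0):
    """Flag today's activity if it deviates sharply from the
    historical baseline. With too little history there is no
    reliable baseline, so no judgement can be made at all."""
    if len(history) < min_history_days:
        return None  # insufficient institutional memory
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return today != baseline
    return abs(today - baseline) / spread > z_threshold

# A year of observed behaviour yields a stable baseline:
normal_year = [20, 22, 19, 21] * 91           # ~364 days of typical daily access counts
print(is_anomalous(normal_year, today=85))    # True: a genuine deviation

# Two weeks of logs cannot support the same judgement:
print(is_anomalous(normal_year[:14], today=85))  # None: no baseline to update
```

Retention policy determines which branch the system lands in; the None branch, a deviation the system cannot classify for lack of a baseline, is precisely the condition the review describes.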
Risk and retention
The review describes technical controls in detail, but it leaves the government’s underlying risk appetite implicit.
Yet a risk appetite already exists, whether articulated or not. It is encoded in decisions about access models, consultation practices, logging depth, retention periods, and auditability.
An explicit risk appetite cannot sit at the level of delivery teams, security functions, or technical assurance alone. Decisions about how much leakage risk is acceptable — and how much historical visibility the system must retain in order to learn — shape consultation practices, market fairness, and public trust.
These are governance trade-offs, not implementation details.
Wherever these choices are formally located, they must be owned at a level with the authority to balance policy effectiveness, operational risk, legal constraints, and institutional learning — and to revisit those balances after incidents, rather than allowing them to harden into technical defaults.
Log retention is not cost-free. Storage, security, privacy obligations, and data minimisation requirements are real constraints, particularly under GDPR.
But the presence of legitimate constraints does not remove the need for explicit choice. A 12-month retention period is not neutral; it reflects a prioritisation of cost and minimisation over long-term learning.
The issue is not whether that trade-off is defensible. It is whether it is conscious, owned, and revisited in light of experience.
Learning architectures require the same governance discipline as control architectures.

The real lesson
The most consequential design choice revealed by the review is not any single control, but how quickly the system forgets.
A system that retains just enough memory to recover, but not enough to recognise patterns, will remain vulnerable to repeating failures it cannot see.
Resilience, in that sense, is less about preventing the next incident than about ensuring the system remembers enough to know whether it has already been here before.

Vsevolod Shabad is a principal enterprise architect and cybersecurity leader with experience as CIO, CISO, and board adviser across critical infrastructure and financial services. He holds CISSP and CCSP certifications and is a Fellow of the BCS.

