With public services at the forefront of the deployment of AI, Dan Howl of BCS explains why ‘trust must not only be done, it must be seen to be done’
A noticeable shift is taking place across the UK tech landscape, spanning both the public and private sectors.
This change is being driven not simply by the hype surrounding artificial intelligence, but by the government’s determination for Britain to achieve the broadest and deepest adoption of AI in the G7. Ministers have been clear that success in this next phase of industrial change will not belong to the countries that merely design new technologies, but to those that adopt and deploy them most effectively.
Effective adoption, however, requires more than technical capability. It demands accountability and trust, and it demands that both be visible to the public.
Trust must not only be done, it must be seen to be done.
As AI and other digital technologies become more embedded in public services and everyday life, expectations around how technology is governed, assured and delivered are rising. This is acting as the catalyst for a more fundamental shift in how technology is being managed in Britain: towards greater professionalism.
That shift is now visible across multiple parts of the system. While the initiatives emerging in different sectors are not formally linked, together they signal a clear direction of travel. Professionalism is becoming the foundation on which trust, adoption and legitimacy in the UK’s digital future will rest.
One of the clearest examples of this trend can be seen in the civil service.
In September 2025, BCS, The Chartered Institute for IT, published research on trustworthy technology based on commissioned YouGov polling. The findings were clear: 85% of the public said they would trust technology practitioners more if they were professionally registered.
This matters particularly in the public sector, where consent, approval and engagement are prerequisites for success. BCS’s research shows that professionalism is key to unlocking that trust. This will be fundamental to the delivery of major policy initiatives, from the introduction of a national digital identity to the wider transformation of public services and the increasing use of AI-enabled systems, such as chatbots, in interactions between government and the public.
Critical mass
Digital ID provides a useful illustration. The government recently announced that the ID will not be mandatory. Its success now hinges entirely on voluntary uptake and usage. Without a critical mass of users, the efficiency gains the policy promises cannot be realised. That places trust at the centre of delivery: the public will not choose a system they do not fully believe in.
Senior civil servants are increasingly receptive to this message. Work is already underway, in coordination with professional bodies, to move towards a model in which a greater proportion of civil servants working in digital and technology roles are professionally registered. BCS expects this trend to accelerate further as 2026 progresses, reflecting a growing recognition that public trust in digital government depends not just on systems, but on the people who design, deploy and manage them.
A similar logic is now shaping how government is approaching artificial intelligence more broadly.
To date, AI regulation in the UK has largely followed a sectoral approach.
This is now evolving as government looks to strengthen the concept of AI assurance, and to define the skills, standards and qualifications required for individuals providing independent assurance of AI systems.
As with developments in the civil service, this shift reflects a move towards professional skills and recognised qualifications as the means of addressing risk, trust and accountability. Over the coming months and years, we expect to see the establishment of a defined AI assurance profession, capable of providing confidence that AI systems are operating as intended, ethically and safely.
These professionals will play a critical role not only in building public trust, but also in protecting organisations from the reputational, financial and operational risks associated with poorly designed or irresponsibly deployed AI. In this context, professionalism becomes both a public good and a commercial necessity.
A shared realisation
The same principles are now emerging in healthcare, where trust is not abstract but deeply personal. Recently, sector body the Federation for Informatics Professionals recommended to NHS England that digital and technology staff within the NHS should be professionally registered. This was explicitly in response to concerns around trust.
NHS England has approved this proposal, with the Department of Health and Social Care also responding positively. Further detail on how these proposals will be implemented is expected in the spring.
Decisions taken by the NHS on trust and data carry particular weight.
The public trust no organisation more with their data, so when the NHS changes policy it is worth paying attention. For patients, trust in digital systems is inseparable from trust in the people responsible for them. Ensuring that NHS digital staff meet recognised professional standards will provide greater reassurance that systems handling sensitive health data are designed, managed and governed competently and ethically.
Taken together, the professionalisation of the civil service, the emergence of AI assurance and proposals for professional registration within the NHS are linked not by formal structure or mandate, but by a shared realisation: cutting-edge technology cannot be implemented successfully unless it is underpinned by a level of professionalism that has not previously been treated as essential.
This direction of travel is unlikely to stop here. More sectors will follow.

On 21 January, education secretary Bridget Phillipson said the government would deliver new skills pathways for teachers and support staff. It is designing a framework to help staff build digital, data and tech skills, folding them into existing qualifications and training programmes and paving the way for greater digital professionalisation in education.
The days when technical delivery alone was enough are now over. Whether in government, healthcare or industry, the ability to demonstrate ethical behaviour, technical competence and security is becoming a prerequisite for adoption and public acceptance. Professionalism is no longer optional; it is foundational.
For now, this trend is most visible in the public sector, where transparency, scrutiny and public trust bite first. But as expectations shift and public consciousness grows, businesses and industry are likely to adopt similar practices at increasing speed.
Professionalism is emerging as the answer to the twin challenges of trust and adoption, and as a defining feature of the UK’s approach to technology in the years ahead.

Dan Howl is head of policy and public affairs at BCS, The Chartered Institute for IT

