One in six government service assessments ends in failure

PublicTechnology research finds that failing to consider user needs is the most common stumbling block, with a lack of simplicity and intuitiveness also a frequent failing

More than one in six government services put forward for assessment fails to meet the requisite standards, PublicTechnology research has found.

Data published on GOV.UK contains information on 191 examinations that have taken place over a period of about six and a half years. This includes assessments of digital services in the early stages of development, as well as those that have been live for some time.

A total of 34 were, when initially assessed, judged to have failed to comply with the standards set out by the Government Digital Service – equating to more than one in six of the services examined, or 17.8% of the overall figure.

From 2013 to the present day, the failure rate has remained fairly steady in the range of 15% to 20%. Since the beginning of 2019, five out of 33 assessments have seen the service in question fail to make the grade, equating to 15.2%.
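For readers who want to check the arithmetic, the short Python sketch below reproduces the two failure rates quoted above; it uses only the headline figures reported in this article, not the underlying GOV.UK assessment data.

    # Quick check of the failure rates quoted in the article
    # (headline figures only, not the underlying GOV.UK data).
    failed_all, assessed_all = 34, 191      # failures since 2013 / all published reports
    failed_2019, assessed_2019 = 5, 33      # failures since the start of 2019

    print(f"Overall failure rate: {failed_all / assessed_all:.1%}")       # 17.8%
    print(f"Failure rate since 2019: {failed_2019 / assessed_2019:.1%}")  # 15.2%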

And even some services developed by GDS itself have been sent back by the organisation’s own assessors to make necessary improvements.

The current Service Standard against which services are judged has been in place for a year and features 14 requirements that must all be met. 

It replaced an 18-point checklist that had set the standard for four years – which, in turn, had replaced the original version, introduced in 2014, that placed 26 requirements on services being tested.

The current standard represents a consolidation of the measures proposed by the two versions that came before it; many of the previous documents’ requirements have remained more-or-less intact, others have been melded together, and a couple have been dropped or subsumed into the small print of other points.

For the purposes of our research, we will refer to the 18-point version of the standard, as it was the version that was applied to the majority of the 191 services for which assessment reports have been published.

But, whichever metrics are applied, it is apparent that most services that fail their assessment do so because of a failure to do the basics.

The most common point on which services are failed is the very first one: the requirement to understand user needs.

Of the 34 services that have not made the grade over the years, 25 have fallen at the first hurdle, having been judged to have failed to understand their users’ needs.

Some are on the right track, but have not been thorough enough in their discovery phase.

Last year, the Ministry of Housing, Communities and Local Government put forward an alpha version of its ‘Find an energy certificate’ tool. The service failed on seven of the standard’s 18 points – including the first.

The assessors’ report said: “It is evident from user research that the team have been able to identify most users of the service. However, the assessment panel felt the numbers – 34 – spoken to is not enough to state that all user needs have been uncovered which would have resulted in breadth of insights.”

It added: “The panel would have liked to see more around what users are trying to do whilst encountering the service, how they interact with the service face to face – moderated testing – where users currently go for help and more around their thoughts, circumstances and behaviours.”

An insufficient effort to understand user behaviour was also cited in the decision not to pass the ‘Manage and Register Pension Schemes’ service put forward for alpha assessment by HM Revenue and Customs in 2018.

“There are few areas where research needs to be focussed, so the team have assurance they have made the right design decisions, and that they thoroughly understand the users’ needs, motivations and behaviours,” the report said. “This includes focusing on the data that is being captured during the registration process, and being clear if the user understands why this is being asked for, and, understanding the issues they face in collecting it, and how this may affect their journey.”

It added: “Through research, the team identified that public sector users would have different needs and a different journey. The team described a ‘work-around’ in order to manage this, but the panel were disappointed to learn this. The team must concentrate additional effort on understanding the user needs of the public sector and amend user journeys accordingly. Personas should be clearly segmented by behaviour, goals and attitudes that are not tied to the solution. Some of the presented needs felt like functional requirements.”

Assessors also concluded that this service had failed to meet four other points and, ultimately, recommended that HMRC undertake a fairly major rethink.

“The service team should redesign this as two separate services, making it more intuitive for users,” they said.

Keep it simple
One of the points that the pensions service did not pass was the 12th, which stipulates that design teams must “create a simple and intuitive service”.

This is the second-most failed point, with 23 services in the last six years having been assessed as falling short in this area.

The GOV.UK design system – an internal cross-government tool developed by GDS to provide “a place for service teams to find styles, components, and patterns to use in designing government services” – was one of the many that, during its alpha assessment, did not meet the requirement of simplicity and intuitiveness.

“The team demonstrated information architecture (IA) improvements they’ve made, however there is scope for further work here, particularly around the ‘patterns’ section,” the report said. “Research has identified ‘pattern’ as an ambiguous label in this context, and including it as a top-level section makes the other navigation labels appear less clearly delineated. The choices made around IA now are likely to have long-lasting impacts, so it’s important to go into private beta with a reasonably mature approach.”

In 2017, NHS Digital put forward for assessment an “information governance toolkit”. The service, which was being developed for use by an estimated 25,000 organisations that process NHS data, brings together all relevant government and legal guidance on data processing, as well as offering self-assessment tools.

Assessors failed the service on point 12 and six others, and the recommendations made indicated a great deal of work was necessary to achieve a successful outcome next time.

NHS Digital was instructed to “clarify the exact scope of the service and what the priorities and minimal viable product are for the first publicly accessible version”. Designers also needed to “focus on user journeys and outcomes through the service to avoid unnecessary design and interaction”.

The report said: “There has been limited usability testing so far, much behind the expected for a service of this type with the broad range of users identified by the personas. Some thought has been put to the capability of users, but there are some significant assumptions on skills and capabilities to the newly targeted users. It has not been shown that these users have an understanding of the service, their requirements to use it or the capability of users to complete the tasks required of them.”

It added: “Language and terminology is a significant barrier, with different user groups representatives requiring different terminology for the same activities or tasks. Some research has shown that professional users do not want language changed or simplified, although it was unclear to the panel what the motivations were for this as a significant insight from users was the desire to move to ‘Plain English’.”

Teaming problems
Improving the service based on user research and testing – point 2 of the standard – is another common stumbling block, with 20 of the assessed services having failed on this point.

The requirement to operate “a sustainable, multidisciplinary team” is not far behind, with 18 services judged to have fallen short in this area.


191
Number of services for which assessment reports are available

17.8%
Percentage of initial assessments that ended in failure

14
Number of points that must be met under the current standard – down from the previous versions, which featured 18 and 26

April 2014
Date of the introduction of the first service standard

‘Understanding user needs’
The first requirement of the standard – and the one on which more services fail than any other


One of those to fall short on this point was the ‘Local Land Charges’ service put forward by HM Land Registry for alpha assessment in January 2017. Assessors found that the team, while boasting strong technical skills, needed to add people who could bring into focus the needs of users without such expertise.

“The team didn’t have a dedicated content designer, and the solutions presented suffered because of this,” the report said. “A content designer will be able to work with the research team to learn more about how people understand the legal or technical language used within this domain.” 

It added: “The danger of focusing on the internal-facing service first is the team aren’t challenging complex language which will become more obvious when designing and conducted further user research with citizens. This feels like a missed opportunity at the moment. There’s a danger that this could make delivering a simpler, clearer citizen facing service more difficult if this isn’t considered now.”

Other fairly frequent areas of failure were point 13, which requires consistency with GOV.UK design, and point 10, which asks that teams conduct end-to-end testing, including on all applicable browsers and devices. A total of 14 and 13 services failed on these points, respectively.

Point seven, manage data security and privacy; point 11, make a plan for being offline; and point 14, encourage everyone to use the digital service, were each failed by nine services.

Eight services failed to make code available as open source, which is point 8. 

Six fell short on points four and five: use agile and user-centred methods; and iterate and improve frequently.

Point nine, use open standards, and points 15 and 16 – use analytics to collect data; and establish KPIs and performance benchmarks – have each been failed on five occasions. Four assessments failed to report performance data, as per point 17.

The least common causes of failure were point six, which asks that teams evaluate tools, systems, and ways of procuring them, and the 18th and final point, which stipulates that the service in question must be tested with the minister responsible.

 

This article is part of PublicTechnology’s How to Design a Government Service project, in association with BJSS. This specially created content week will feature a range of exclusive interview, feature, and analysis content dedicated to the art of delivering digital services for citizens and public sector professionals – from the earliest stages of discovery, right through to maintaining live services in use by millions of people.

Sam Trendall
