Having found that officials wished to deploy powerful AI tools to help analyse and derive information from documents, DBT has worked to create internal infrastructure allowing them to do so.
The Department for Business and Trade is enabling its staff to experiment with large language models by deploying the artificial intelligence tools in a “secure, internal environment”.
LLMs are AI tools built on the information gleaned from enormous volumes of text and other data. These models underpin the likes of ChatGPT and other generative technologies.
Having conducted research with its employees, the trade department “found a growing demand across DBT for using powerful LLMs for analysis”, according to a newly published blog post from Emily Lambert, a junior data scientist at DBT.
Potential use cases cited by officials included “performing topic modelling on free trade agreement articles, extracting structured data from long-form documents, analysing large volumes of policy consultation responses, [and] conducting sentiment analysis on parliamentary debate records”, she writes.
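To give a flavour of one of the cited use cases, the sketch below shows a minimal lexicon-based sentiment scorer of the sort that could be run over debate records. It is purely illustrative: the word lists and function names are hypothetical, and DBT's actual analysis uses LLMs rather than a hand-built lexicon.

```python
# Illustrative only: a toy lexicon-based sentiment scorer. The word lists
# are hypothetical; DBT's real pipelines use LLMs for this task.

POSITIVE = {"welcome", "support", "progress", "commend", "agree"}
NEGATIVE = {"concern", "oppose", "failure", "regret", "disagree"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative hits, normalised."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I welcome and support this progress."))  # 1.0
print(sentiment_score("I regret and oppose this failure."))     # -1.0
```

An LLM-based approach would replace the lexicon lookup with a prompt to the hosted model, but the input and output shapes of the task are the same.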
DBT’s research found that some teams had already undertaken work using open-source LLMs. But “more powerful models, such as Meta’s Llama or Mistral, require high memory and processing speeds to handle complex language tasks”, according to the blog.
“To meet this user need, we’ve been working to deploy self-hosted LLMs securely on our internal data platform, Data Workspace, using AWS SageMaker,” Lambert adds. “Data Workspace ensures data is contained within our network. It is in isolation from the public internet and restricts access to authorised users to maintain a high-level standard of security. SageMaker uses open-source models, but spins resource within a private virtual private cloud as a part of the Data Workspace ecosystem. Deploying LLMs in this way is essential when dealing with sensitive information.”
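The network-isolation pattern Lambert describes can be sketched as a SageMaker-style model configuration: an endpoint pinned to private subnets with network isolation enabled, so the container cannot reach the public internet. The subnet and security-group IDs below are placeholders, and the helper function is hypothetical; DBT's actual deployment details are not public.

```python
# Hypothetical sketch of a VPC-isolated SageMaker-style model configuration.
# Subnet/security-group IDs and the container image are placeholders.

def build_endpoint_config(model_name: str, instance_type: str,
                          subnets: list, security_groups: list) -> dict:
    """Assemble a config dict that confines the model endpoint to a private VPC."""
    return {
        "ModelName": model_name,
        "PrimaryContainer": {"Image": "<model-container-image>"},
        "VpcConfig": {
            # Traffic stays on the listed private subnets
            "Subnets": subnets,
            "SecurityGroupIds": security_groups,
        },
        # No outbound internet access from the model container
        "EnableNetworkIsolation": True,
        "InstanceType": instance_type,
    }

cfg = build_endpoint_config("llama-internal", "ml.g5.2xlarge",
                            ["subnet-private-1"], ["sg-internal"])
print(cfg["EnableNetworkIsolation"])  # True
```

In a real deployment this dictionary would be passed to the AWS APIs rather than printed, but it captures the two settings that keep sensitive data inside the department's network: the VPC binding and network isolation.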
Having established this model on a pilot basis – and based on user input – “we believe DBT is one of the first government departments to deploy self-hosted LLMs in this way, [which] marks an important step in how government can responsibly adopt cutting-edge AI technologies while maintaining strict control over data”, the blog says.
Going forward, DBT will continue to “work with users to improve the experience, expand access, and further develop the product in line with departmental needs”.
Lambert writes that these initial deployments should only be considered the beginning of the department’s work in this area.
“We’re taking a holistic approach to self-hosted LLMs – ensuring models we deploy are both technically appropriate and offer clear value for money,” she adds. “We’re also experimenting with running larger models on smaller instances where possible, assessing the performance trade-offs to maximise efficiency. Additionally, we’re actively monitoring model usage patterns to aid in optimising request flows. These combined efforts will help us scale the use of LLMs responsibly while staying mindful of costs.”
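The usage-monitoring effort Lambert mentions could, in its simplest form, amount to tallying per-model request counts and latencies to inform instance sizing. The class below is a hypothetical sketch of that idea, not DBT's actual tooling.

```python
# Illustrative sketch: per-model request counts and mean latencies, the kind
# of usage data that could inform instance-sizing decisions. Names are
# hypothetical, not DBT's actual monitoring tooling.
from collections import defaultdict
from statistics import mean

class UsageMonitor:
    def __init__(self):
        self.latencies = defaultdict(list)  # model name -> list of latencies

    def record(self, model: str, latency_s: float) -> None:
        """Log one completed request for the given model."""
        self.latencies[model].append(latency_s)

    def summary(self) -> dict:
        """Per-model request count and mean latency in seconds."""
        return {m: {"requests": len(ls), "mean_latency_s": round(mean(ls), 3)}
                for m, ls in self.latencies.items()}

mon = UsageMonitor()
mon.record("llama", 1.2)
mon.record("llama", 0.8)
mon.record("mistral", 2.0)
print(mon.summary())
# {'llama': {'requests': 2, 'mean_latency_s': 1.0},
#  'mistral': {'requests': 1, 'mean_latency_s': 2.0}}
```

Data like this is what would reveal whether a larger model genuinely needs a larger instance, or whether its traffic is light enough to tolerate the slower responses of a smaller one.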

