The National Cyber Security Centre and the US Cybersecurity and Infrastructure Security Agency have worked with 16 other countries to create a best practice document for the development and deployment of AI
The UK has been at the forefront of creating globally agreed cybersecurity guidelines for the development and deployment of artificial intelligence systems.
The Guidelines for Secure AI System Development have been jointly published by the UK’s National Cyber Security Centre and the US Cybersecurity and Infrastructure Security Agency (CISA). The development of the guidance was supported by – and the document is undersigned by – government or security agencies representing 16 other countries.
The guidance is split into four areas, respectively addressing the design, development, deployment, and operation of AI systems.
Topics covered in the guidance include cyberthreats, the security of supply chains, data governance, incident management, and the sharing of best practice.
NCSC chief executive Lindy Cameron said: “We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up. These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout. I’m proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyber space will help us all to safely and confidently realise this technology’s wonderful opportunities.”
Alongside the US and UK, the other countries represented in the signatories to the guidelines are: Australia; Canada; Chile; Czechia; Estonia; France; Germany; Israel; Italy; Japan; New Zealand; Nigeria; Norway; Poland; the Republic of Korea; and Singapore.
CISA director Jen Easterly said: “The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment—by governments across the world—to ensure the development and deployment of artificial intelligence capabilities that are secure by design. As nations and organisations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices. The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution. This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of international partnership in securing our digital future.”
The announcement comes several weeks after 28 nations – including 13 of the same countries that signed the cyber guidelines – unveiled an international collaboration agreement to work together to tackle the risks created by AI.