PRESS RELEASE: Alan Turing Institute – AI will be key to future national security decision making – but brings its own risks [April 2024]
The press release issued by the Cabinet Office on 23 April 2024.
Government prepares for the age of AI with the publication of a new report from The Alan Turing Institute outlining the importance of AI in supporting strategic decision-making on national security.
- New report from one of the UK’s leading institutes for AI highlights the importance of harnessing AI to support national security decision making.
- AI tools can identify patterns, trends, and anomalies beyond human capability, and assist intelligence analysts to make sense of complex problems.
- The report was jointly commissioned by the Joint Intelligence Organisation (JIO) and Government Communications Headquarters (GCHQ) and authored by the independent Centre for Emerging Technology and Security (CETaS), a research centre based at The Alan Turing Institute.
AI must be viewed as a valuable tool to support senior national security decision makers in Government and intelligence organisations, according to the findings of a new independent report commissioned by government.
Published today, the new report reiterates the potential for AI to make transformational improvements in intelligence analysis by supporting analysts to process data more quickly and accurately, helping keep the UK safer.
The report also finds that the use of AI has the potential to exacerbate dimensions of uncertainty inherent in intelligence analysis and assessment, suggesting additional guidance for those using AI within national security decision-making is necessary.
With the huge growth of data available for analysis, AI can be used to handle the administrative tasks of data processing as well as to identify patterns, trends, and anomalies beyond human capability. The report authors state that not utilising the technology would be a missed opportunity and could undermine the value of intelligence assessments.
Jointly commissioned by the Joint Intelligence Organisation (JIO) and Government Communications Headquarters (GCHQ), and authored by The Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS), the report also considers how both the risks and benefits of AI-enriched intelligence should be communicated to senior decision-makers in national security.
Whilst shining a light on its significant potential, the report highlights the importance of using AI for intelligence assessments safely and responsibly, with continuous monitoring and evaluation involving both human judgement and AI recommendations to help counteract biases.
The report suggests additional training and guidance for strategic decision-makers to help them understand the new uncertainties introduced by AI-enriched intelligence.
Additional recommendations include upskilling intelligence analysts and strategic national security decision makers, including Directors General, Permanent Secretaries, Ministers and their staff, to build trust in the new technology.
This report follows action already taken by government to ensure the UK is leading the world in the adoption of AI tools across the public sector, as set out in the Deputy Prime Minister’s recent speech at Imperial College on AI for Public Good.
For example, Government has already begun this work through its Generative AI Framework for HMG, which provides guidance for those working in government on using generative AI safely and securely.
Deputy Prime Minister Oliver Dowden said:
We are already taking decisive action to ensure we harness AI safely and effectively, including hosting the inaugural AI Safety Summit and the recent signing of our AI Compact at the Summit for Democracy in South Korea.
We will carefully consider the findings of this report to help national security decision makers make the best use of AI in their work protecting the country.
Dr Alexander Babuta, Director of The Alan Turing Institute’s Centre for Emerging Technology and Security, said:
Our research has found that AI is a critical tool for the intelligence analysis and assessment community. But it also introduces new dimensions of uncertainty, which must be effectively communicated to those making high-stakes decisions based on AI-enriched insights. As the national institute for AI, we will continue to support the UK intelligence community with independent, evidence-based research, to maximise the many opportunities that AI offers to help keep the country safe.
Anne Keast-Butler, Director GCHQ, said:
AI is not new to GCHQ or the intelligence assessment community, but the accelerating pace of change is. In an increasingly contested and volatile world, we need to continue to exploit AI to identify threats and emerging risks, alongside our important contribution to ensuring AI safety and security.