The Republic of Agora

UK’s AI Technologies


Risks and Opportunities for National Security

Kenneth Payne | 2025.04.17

This briefing explores AI and national security and the risks and opportunities for the UK.

Introduction: What is Artificial Intelligence?

Artificial intelligence (AI) is rapidly altering the landscape of national security, as with so much else. For the UK, AI offers the possibility to reimagine how it defends its citizens and asserts its influence globally. But there are also many challenges, not least the risk of adverse, unintended consequences, whether from AI’s use in combat, or from its potential to transform democratic society.

“Artificial intelligence” encompasses a range of technologies that enable machines to perform tasks traditionally requiring human intelligence. These include: machine learning, where algorithms improve through experience; computer vision, which allows machines to interpret and analyse visual information; natural language processing, which enables AI to understand and generate human language; and autonomous systems, capable of independent decision-making in defined environments. These diverse capabilities underpin AI’s expanding role across societal activities, including in national security.
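The idea of machine learning “improving through experience” can be made concrete with a toy sketch: a classifier that derives its decision boundary from labelled examples rather than hand-written rules. The function, data and threshold rule below are purely illustrative assumptions, not a real defence system.

```python
def train_threshold(samples):
    """Learn a one-dimensional decision threshold from labelled examples.

    samples: list of (value, label) pairs, where label is True for values
    that should fall above the threshold. Returns the midpoint between the
    highest False-labelled value and the lowest True-labelled value.
    """
    lows = [v for v, lab in samples if not lab]
    highs = [v for v, lab in samples if lab]
    return (max(lows) + min(highs)) / 2

# Illustrative "experience": four labelled observations.
data = [(1.0, False), (2.0, False), (4.0, True), (5.0, True)]
print(train_threshold(data))  # → 3.0
```

The point is not the arithmetic but the principle: the rule is induced from data, so it changes as new experience arrives, which is precisely what distinguishes learning systems from conventional, explicitly programmed software.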

AI is already embedded in everyday life, from personal assistants such as Siri and Alexa to recommendation algorithms that shape our online experiences. In national security, its applications and potential applications are just as varied: AI-driven cybersecurity tools detect and neutralise threats in real time; predictive analytics forecast potential crises; and autonomous drones provide persistent surveillance with minimal human intervention. These technologies are advancing rapidly, raising both opportunities and risks for defence and security policymakers.

AI technologies exist on a spectrum, from “narrow AI” – specialised systems trained for particular tasks, such as image recognition or language translation – to more complex agentic AI that can operate autonomously within specified limits. Recent developments have expanded AI’s capacity for reasoning and adaptation, making it an increasingly powerful tool in high-stakes domains such as national security. Today’s AI is demonstrating the potential to transform intelligence analysis, logistics and even strategic planning, in addition to warfighting itself. AI’s strengths lie in decision speed and pattern recognition at scale: qualities that are critical in modern warfare. But those same strengths also expose vulnerabilities. How do we ensure accountability when decisions are delegated to machines? What happens when adversaries deploy AI systems that disregard the norms and ethics that the UK seeks to uphold?

This briefing paper explores AI’s role in UK national security, dissecting its applications and exploring some of the challenges shaping its adoption. There is a degree of urgency here, as the pace of AI development shows no signs of slowing. Successfully adapting to AI requires more than technical mastery: it will likely also demand a willingness to challenge conventional thinking.

The Current State of AI

AI is a general-purpose, dual-use technology, comparable in some ways to electricity or the steam engine. Its applicability across a wide range of economic activities, coupled with its potential to dramatically increase productivity, makes it one of the most transformative forces of our time. Beyond reshaping industries, AI has profound implications for social relations and global power dynamics.

▲ Table 1: Key AI Terminology

The modern AI era dates back roughly a decade to the adoption of deep learning and big data, which revolutionised how machines process information. Today, we are still in the early stages of this technological revolution, but the pace of change is accelerating.

AI is in the midst of a transformative revolution, driven especially by the advent of transformer architectures. These neural network models, first introduced in 2017 and brought to mainstream attention by products such as ChatGPT, have redefined the capabilities of AI, enabling systems to process and generate human-like text, and write computer code. Their applications in national security have already begun to reshape how the UK defends its interests.

This “generative AI”, alongside more established machine learning techniques, enhances intelligence analysis by synthesising vast amounts of unstructured data, identifying patterns and offering predictive insights. This capability allows decision-makers to process information faster and with greater nuance than traditional methods. But the next frontier is AI that reasons. Emerging models, such as OpenAI’s o3, are designed to extend inference capabilities, allowing machines to connect information across disparate domains and produce insights that approximate human-like reasoning. These systems promise to handle complex, multi-step problems.

Adding to this transformative landscape is the imminent arrival of agentic AI – systems capable of autonomous action within defined parameters. These agents will not only support decision-making but may also execute tasks independently, for example, by coordinating logistics or conducting cyber defence operations. In the not-too-distant future, we will see highly capable AI innovating the next generation of even-more-capable AI.

For the UK, all this innovation raises profound questions: how to maintain human oversight, ensure accountability, and balance operational benefits and the risks of unintended consequences? The UK is at a critical juncture. The ability to harness these capabilities effectively will depend on large investment in infrastructure, a robust yet permissive regulatory framework, and a strategic vision for integrating AI into defence systems. The government has begun to act on these requirements. At the same time, the rapid pace of innovation necessitates constant vigilance to ensure that the ethical and security implications of these technologies are fully understood.

Intersections with Other Technologies

AI’s progress is deeply intertwined with advances in other technological fields. These intersections can both accelerate AI development and create new synergies across domains that are critical to national security.

One key area of convergence is quantum computing. While still in its early stages, quantum computing has the potential to exponentially enhance AI capabilities, particularly in optimisation, encryption-breaking and complex simulations. If successfully integrated, quantum AI could drastically improve cryptographic security and provide an edge in intelligence analysis.

In space warfare, AI plays an increasingly vital role in satellite-based intelligence, surveillance and reconnaissance (ISR). Autonomous satellite coordination and real-time data fusion, powered by AI, could enhance situational awareness and decision-making in contested environments. Additionally, AI-driven defence mechanisms against cyber and kinetic threats to space assets are becoming a strategic priority.

Hypersonic weapons – which travel at speeds exceeding Mach 5 – require AI-driven guidance, target tracking and decision-support systems to be effective. AI can optimise trajectory prediction, counter-defence measures, and dynamic mission planning, at speeds far greater than human operators.

Similarly, materials science stands to benefit from AI-powered design and testing of novel materials, including lighter, stronger and more heat-resistant composites. These materials are critical for military applications, from next-generation armour to advanced aircraft and spacecraft.

In biological sciences and genetics, AI is revolutionising biosecurity, disease detection, and even the development of enhanced performance systems for military personnel. AI-driven drug discovery and genomic analysis could enable more effective countermeasures against bioterrorism and emerging biological threats.

These intersections highlight how AI is not just transforming national security on its own but is also amplifying advances in other high-impact fields. Understanding and leveraging these relationships will be key to maintaining technological and military advantage.

Tactical Military Applications of AI

AI is transforming the tactical dimensions of modern warfare, with applications that enhance situational awareness, streamline operations and augment combat effectiveness.

One of the most prominent areas of change is the development of autonomous platforms, such as aerial drones and uncrewed ground vehicles. New AI-focused defence companies have emerged, such as Anduril Industries, providing AI-driven autonomous surveillance and defence systems. Its Ghost helicopter drones, for example, can conduct persistent surveillance and support tactical operations with minimal human intervention. Palantir Technologies, another leading firm, focuses on integrating AI into battlefield management, offering platforms that synthesise data from multiple sources to provide actionable insights in real time.

Traditional defence contractors such as BAE Systems and Lockheed Martin are also investing heavily in AI-driven technologies. BAE Systems is working on AI-enhanced targeting systems for fighter jets and ground vehicles, while Lockheed Martin is developing autonomous systems for logistics and precision targeting.

AI is also beginning to play a role in cyber defence and offence. One goal is for systems that can identify and neutralise threats faster than humans. For example, agent-based AI systems might monitor networks for anomalies, enabling pre-emptive action against potential breaches.
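One simple building block of such network monitoring is statistical anomaly detection: learn a baseline of normal activity, then flag observations that deviate sharply from it. The following is a minimal sketch of that idea only; the z-score rule, threshold and traffic figures are illustrative assumptions, not a description of any deployed system.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, z_threshold=3.0):
    """Return indices of observations that deviate sharply from a baseline.

    baseline: historical per-minute connection counts (illustrative)
    observed: new counts to screen against that baseline
    An observation is flagged when its z-score exceeds z_threshold.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, x in enumerate(observed)
            if sigma > 0 and abs(x - mu) / sigma > z_threshold]

# Illustrative traffic: a steady baseline, then a sudden burst at index 2.
baseline = [98, 102, 101, 99, 100, 103, 97, 100, 101, 99]
observed = [100, 102, 450, 98]
print(flag_anomalies(baseline, observed))  # → [2]
```

Real intrusion-detection systems combine many such signals with learned models of user and network behaviour, but the underlying logic, baseline plus deviation, is the same, and it is why machines can screen traffic at a speed and scale no human team could match.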

The application of tactical AI is already evident in recent conflicts. In Ukraine, AI has been used to provide terminal guidance for aerial drones flying in electronic warfare-contested environments. Similarly, in Gaza, Israeli forces have employed AI systems for intelligence assessment and targeting, leveraging their ability to process vast amounts of data and identify targets faster than human analysts can.

As AI models and the hardware on which they run become cheaper, more capable and more robust, we can expect to see AI featuring even more prominently “on the edge” – aboard deployed systems. These systems will progressively take more complex, context-rich decisions without human involvement, adapting in real time and making split-second judgements. This makes many observers uneasy. There is already widespread concern about the implications of “killer robots” from campaign groups, some governments and wider civil society. Achieving internationally binding agreement about tactical military AI will be challenging – the technology is evolving rapidly and states are understandably concerned about falling behind. The testing and validation of such systems will be critical to ensure they perform as intended, especially in high-stakes operational contexts where failures could have severe consequences.

The integration of AI into tactical operations raises important questions. How do militaries maintain human oversight of autonomous systems? What protocols ensure accountability in the event of an AI failure? Addressing these challenges will be essential as the UK and its allies seek to exploit AI’s tactical potential.

Wider Applications of AI in National Security

While tactical AI focuses on the immediate needs of the battlefield, its broader applications extend far beyond the theatre of war. Wider national security uses encompass intelligence analysis, strategic decision-making, and enhancement of the operational readiness of the entire national security ecosystem. These tools overlap substantially between defence and broader domains such as counterintelligence, counterterrorism and law enforcement.

One area with great potential is boosting situational awareness. AI systems excel at synthesising data from diverse sources – such as satellite imagery, social media and signals intelligence – to identify patterns and predict threats. Predictive modelling, including the creation of “synthetic environments” and “digital twins” can help government agencies to model, and perhaps even pre-empt, cyberattacks or military escalations. The UK’s GCHQ, for instance, uses AI to analyse and secure communications while detecting potential cyber intrusions.

In espionage and intelligence gathering, AI enhances both offensive and defensive capabilities. Advanced algorithms might assist in counterespionage efforts, tracking suspicious activity and identifying insider threats. On the offensive side, AI-driven tools can mine and analyse open-source intelligence to provide actionable insights, giving intelligence agencies a critical edge in fast-moving geopolitical scenarios.

AI is also playing a role in strategic decision-making. Tools such as those developed by Palantir and Hadean enable leaders to wargame scenarios, evaluate the outcomes of different policy options, and coordinate responses during crises. Hadean’s focus on synthetic environments has significant potential at the military operational level, allowing for the simulation of complex scenarios and training exercises. Over time, such capabilities may extend to the strategic level, enabling the modelling of intricate social dynamics and broader geopolitical considerations. By integrating AI into diplomatic negotiations and crisis management, governments can better anticipate adversary actions and optimise their strategic positioning.

However, these wider applications are not without risks. Bias in algorithms can lead to flawed decisions, while concerns about data sovereignty challenge the secure and ethical use of AI technologies. The infrastructure required to support these applications, including secure data centres and reliable energy sources, represents a significant investment for the UK, which currently lags behind the leading global powers, the US and China, on most salient metrics.

Despite these challenges, the integration of AI into broader national security functions appears increasingly both necessary and inevitable. We are in the early stages of a transformation, in which the UK’s ability to anticipate and respond to evolving threats will depend on the successful application of AI technologies across its national security apparatus.

Challenges of Adopting AI in Defence

While there is much excited talk about AI as a revolutionary development, actual adoption within defence sectors remains more limited. Adoption requires a fundamental transformation of the systems, structures and cultures within which these technologies operate. Decision-making processes in defence institutions can be slow, and there is often a preference for proven methods over new, untested systems. Bridging the gap between military objectives and the innovation-driven ethos of the tech sector is another critical challenge. There is, in particular, a significant gap between the development of cutting-edge “frontier” research and the types of AI technologies currently feeding through into practical defence applications.

To capitalise on these changes, defence organisations must learn to collaborate more effectively with both AI startups and tech giants. Unlike the US or China, the UK has a smaller pool of “unicorn” tech companies with the capacity to develop cutting-edge AI systems at scale. This constrains the UK’s ability to independently innovate, especially at the frontier of AI research. Some AI systems also require vast computational resources and reliable energy infrastructure, both of which can be costly in the UK. In addition, there is a significant shortage of skilled AI professionals with expertise in defence applications, creating competition for talent with the private sector, where salaries and working conditions are often more attractive.

Operationally, the UK risks becoming overly dependent on external contractors or allied countries for critical AI capabilities, potentially compromising strategic autonomy. Ensuring that AI systems are transparent, reliable and accountable is a major challenge. Autonomous systems must operate within established legal and ethical frameworks, but achieving this balance in practice is difficult.

Future Developments: Will We Get to AGI, and When?

Artificial general intelligence (AGI) remains one of the most debated concepts in the realm of AI. Unlike narrow AI systems, which are designed for specific tasks, AGI would possess the ability to perform a wide range of intellectual activities at or above human level. However, the definition of AGI itself is contentious. Some emphasise the breadth of capability as the defining feature, while others focus on AGI’s potential to reason and learn autonomously across domains.

Nor is it clear how AGI might be achieved. Some argue that existing approaches, particularly transformer models and reinforcement learning, will be sufficient to reach this milestone. Others reckon that a combination of these and other approaches will be required. Equally contested is the timeline for AGI’s arrival, with some suggesting it could emerge well within the next decade, while sceptics view it as a more distant prospect, if it is achievable at all. Recent advances in reasoning models, however, have prompted many to revise their estimates, moving projected timelines closer to the present. That is especially true of those leading AGI research at frontier companies. Systems such as OpenAI’s o3 demonstrate nascent capabilities in extended inference and multi-step reasoning, sparking both renewed optimism and concern about the pace of progress.

The implications of AGI for national security are profound. Much depends on how quickly AGI emerges and whether its development represents a gradual evolution or a disruptive breakout that is difficult to control. At its most optimistic, AGI could revolutionise strategic planning, decision-making and resource allocation, enabling unprecedented efficiency and precision. However, it also raises existential risks. A monopoly on AGI by any one state or actor could shift the global balance of power dramatically. Furthermore, poorly controlled AGI systems could create vulnerabilities, including unintended escalation in conflict scenarios.

From a UK perspective, preparedness is key. Policymakers and defence leaders should engage far more deeply with emerging AGI research, fostering domestic capabilities while participating in international efforts to establish norms and safeguards. The challenge lies in balancing innovation and caution, ensuring that AGI is developed and deployed in ways that enhance security without compromising stability.

Conclusion and Recommendations

AI is reshaping national security in profound and complex ways. From tactical applications on the battlefield to strategic decision-making and governance challenges, its impact is already being felt. More dramatic changes are likely, and in the short term. For the UK, the opportunities are significant, but so too are the risks.

Key takeaways from this analysis include the transformative potential of AI technologies, the need to integrate them into defence while ensuring ethical oversight, and the importance of maintaining a competitive edge in the face of rapid innovation by global adversaries. The integration of AI into tactical operations and broader national security functions is not merely a technical challenge but one that involves cultural shifts, strategic investments and international cooperation.

The challenges for the UK government include prioritising investment in AI talent and research to ensure the UK remains at the forefront of technological innovation. Strengthening international partnerships is critical, both to shape global norms for AI governance and to ensure interoperability with allies. Developing robust frameworks for ethical AI deployment in national security, with a focus on transparency and accountability, is equally essential. The UK government has already carved out a leading international role on regulation and norm formation and can usefully build on this.

Looking ahead, the UK must prepare for the potential arrival of AGI, navigating its transformative possibilities while mitigating its risks. Ensuring that AI systems serve as tools to enhance human decision-making will be central to maintaining stability and public trust. The pace of change demands vigilance, adaptability and leadership. By embracing innovation responsibly, the UK can position itself not only as a beneficiary of this technological revolution but as a global leader in its ethical and strategic application.


Kenneth Payne is Professor of Strategy at King’s College London, where he researches the role of artificial intelligence in national security. Professor Payne is a Commissioner of the Global Commission for Responsible AI in the Military Domain. He serves as Specialist Advisor to the UK Parliament’s Defence Committee for its work on AI.
