Who Governs the Algorithms of War? Policy Failures and the Erosion of Sovereignty in Democratic AI Militarisation
Abstract
This paper analyses how the rapid integration of artificial intelligence (AI) into military operations is redefining power and authority within Western democracies, ultimately eroding their sovereignty. As governments become increasingly reliant on private corporations such as Anduril, Palantir, and Lockheed Martin to develop and operate new military technologies, they are losing control over military decision-making and key infrastructure. The rapid militarisation of AI has outpaced the ability of governments and international bodies to develop effective policies, creating a policy vacuum in modern warfare. The paper analyses five major policy failures: the delegation of military decision-making to AI systems, the lack of clear accountability, fragmented international governance driven by geopolitical competition, dependency on private infrastructure, and the decline of democratic oversight. These policy failures highlight how the militarisation of AI is eroding democracies’ accountability and control over their militaries. Without coherent regulation, democracies will lose sovereignty and legitimacy in modern warfare. The paper proposes policies to address this vacuum: establish enforceable standards for human oversight, create independent auditing mechanisms to evaluate and certify military AI systems, and reduce long-term dependence on private infrastructure. Implementing these policies is essential to restoring the balance between technological innovation and democratic sovereignty and to ensuring that the use of AI in warfare remains subject to public interests.
1. Introduction
As artificial intelligence technologies develop, their role in military and security operations is rapidly changing the modern battlefield. From predictive targeting to autonomous drone systems, algorithmic technologies are becoming the centrepiece of how decisions are made and how wars are fought. Western governments and international organisations such as the European Union (EU) and the North Atlantic Treaty Organisation (NATO) have begun to integrate AI into their operational and strategic doctrines. However, the technologies driving this shift are developed by private corporations such as Anduril, Palantir, and Lockheed Martin rather than government-owned institutions, marking a shift from traditional, state-led defence institutions towards a public-private ecosystem and raising concerns about the legal responsibilities and ethics behind the application of these technologies.
Throughout the first half of the twentieth century, military research and technology were funded and directed solely by government-owned institutions, through tightly regulated contracts and government oversight. Today, in contrast, the increasing reliance on the private sector for developing and delivering new military technologies is changing the speed and priorities of military research, which is now driven by market incentives rather than public interests. Moreover, when these technologies advise on or make decisions autonomously, without governmental or human oversight, they inevitably raise questions of accountability, transparency, and even sovereignty.
Because these technologies have developed so quickly, Western governments have struggled to update their legal and regulatory frameworks at both the national and international levels. Indeed, international humanitarian law, from the Geneva Conventions to the instruments of the United Nations (UN), has failed to keep pace with the development of military AI technologies. In 2021, NATO presented its Artificial Intelligence Strategy to member states, but it lacks concrete enforcement mechanisms and policy recommendations. Additionally, the EU’s AI Act, which entered into force in August 2024, focuses mainly on civilian applications of AI rather than military ones. The lack of concrete and effective policies on the military use of AI by Western states is creating a legal vacuum, leaving many questions unanswered.
This paper aims to answer the following question: With the increasing role of AI systems in modern warfare and the growing influence of private corporations, how can Western democracies maintain control and sovereignty over the use of these new military technologies? Western states are experiencing a decline in sovereignty and in their ability to maintain accountability and control over the militarisation of AI, owing to their reliance on private actors. The increasing dependence of military decision-making on private algorithms risks undermining democratic principles and sovereignty if states fail to create coherent regulations, promote transparency, and establish enforceable oversight mechanisms.
2. Background Analysis
2.1 Evolution of the Military-Industrial Complex
Throughout most of the twentieth century, military power was organised and exercised through the collaboration of governments and private defence companies. Governments decided the direction of military innovation by directly funding research and managing the industrial production of a few carefully selected private contractors. This was especially the case in the United States (U.S.), where this relationship would later become known as the “military-industrial complex”. This strategic collaboration was characterised by private corporations fulfilling public interests and needs within a heavily regulated industry subject to tight government oversight. Technological breakthroughs of this period, such as nuclear weapons, radar, and satellites, reflected not only the U.S.'s scientific dominance but also demonstrated to the rest of the world its national capability to develop new military technologies.
However, as the Cold War came to an end, this relationship began to transform. In the 1990s, many Western states began to restructure their defence industries as new neoliberal policies led to the outsourcing of research and development to private companies. Furthermore, technological advancements created new opportunities for innovation beyond traditional military weapons. As telecommunications and computing were adopted for civilian purposes, their use for military purposes quickly followed. This phenomenon is known as the “dual-use” paradigm, in which, for the first time, civilian and military technologies became more interdependent as states increasingly relied on commercial technologies and private expertise to develop and operate military weapons. The attacks of September 11, 2001, further accelerated this shift. Indeed, the war on terror and the rise of global terrorism required new forms of military intelligence and coordination that had to be quicker, more flexible, and data-driven. As a result, Western governments began working closely with the private technology sector to develop digital capabilities for processing large volumes of data, enabling more advanced decision-making. This period marked the beginning of a new ecosystem in which governmental institutions were no longer the sole actors driving military innovation, and private companies became increasingly influential and integral. By the early 2020s, this system had become even more apparent and interconnected. While states continued to define strategic goals, the means of achieving them, such as algorithms, data infrastructure, and autonomous systems, were now designed, owned, and operated by private corporations. This status quo blurs the boundaries between government and industry in defence, laying the foundation for the challenges Western democracies face in adapting to the ever-increasing role of AI and private actors in the military.
2.2 How Private Firms Shape the Digital Battlefield
Companies such as Anduril, Palantir, and Lockheed Martin are prime examples of a new wave of defence technology firms whose influence is now at the core of military operations in Western countries. These corporations are becoming vital to modern militaries by providing autonomous systems, digital intelligence networks, and data centres. These tools and infrastructures give modern militaries the strategic and technological advantage on the battlefield that is the core mission of Western armies.
Founded in 2017 by Palmer Luckey, Anduril Industries brought the Silicon Valley model to the defence industry. The company specialises in developing autonomous systems designed to operate across land, air, and sea. Its flagship program, known as “Lattice”, merges sensor feeds, drone operations, and threat detection into a single centralised AI operational system. The system can be deployed on the battlefield alongside weapons such as “Fury”, a high-performance, multi-mission autonomous air vehicle, or with drones like “Ghost” and “Anvil”, used for surveillance and data collection.
Palantir Technologies, one of the most advanced and influential private military contractors, occupies a somewhat different but equally consequential role. Its “Gotham” platform is a data-fusion system designed to analyse large quantities of data from satellites, communications intercepts, logistics databases, and sensor networks. The systems created by the company are marketed as tools that grant a decision-making edge, enabling militaries and intelligence users to identify threats, plan operations, and anticipate adversary behaviour in real time.
Lockheed Martin is a well-established military contractor that has, in recent years, increasingly integrated AI and autonomous systems into its portfolio. Known worldwide for its aerospace and hardware capabilities, the company is a prime example of how traditional military contractors are adapting to new technological developments. For instance, its PAC-3 MSE missile defence system was upgraded with advanced AI systems to improve adaptability, response time, and effectiveness.
These three corporations exemplify how technological innovation has shifted from the public to the private sector. Military capabilities once controlled solely by governments, such as surveillance and data analysis, are now provided and managed by private companies that retain ownership and operational knowledge of these advanced technologies. While military capabilities have significantly increased, this interdependence creates a system in which national defence increasingly depends on private infrastructure and technology that governments neither fully control nor fully understand, raising questions about the very notion of modern democratic sovereignty.
2.3 The Current Governance Landscape
The rapid adoption of AI technologies for military purposes has outpaced international institutions' ability to understand and regulate them. The policies currently in place were created and adopted in an era when humans were central to warfare. Today, new autonomous military systems are challenging the core principles underlying these laws regarding accountability, responsibility, and human control in war. While many international organisations and institutions have recognised and begun to address some of these issues, their efforts to adapt or create new international laws have yielded limited results. The United Nations Convention on Certain Conventional Weapons (CCW) is the leading platform for global discussion of the legal and regulatory parameters for Lethal Autonomous Weapon Systems (LAWS). The Group of Governmental Experts of the CCW has deliberated on whether to restrict or prohibit such technologies and on how to do so.
Many member states and humanitarian organisations have actively advocated for binding treaties requiring meaningful human control over these systems; however, major powers like the U.S., Russia, and China have hindered progress by arguing that sufficient safeguards already exist. The absence of formal international rules governing the use of AI in warfare has led states and private companies to set their own standards of practice rather than follow universal guidelines. In 2021, NATO established its first Artificial Intelligence Strategy, which recognises AI as a game-changing technology in collective defence. The strategy outlines six principles: lawfulness, responsibility and accountability, explainability and traceability, reliability, governability, and bias mitigation. Although NATO has recognised the ethical questions surrounding AI militarisation, its strategy lacks operationalised, enforceable actions: it provides no common procurement standards, no binding oversight mechanisms, and no shared verification processes, ultimately leaving the implementation of these principles entirely to individual nations' interpretation and compliance. The EU, for its part, adopted the Artificial Intelligence Act in 2024 to set an example for other countries adopting AI laws. While the Act imposes strict regulations on safety, accountability, and transparency, it explicitly excludes military applications from its scope. While civilian AI technologies are subject to strict new regulations, the use of AI for military purposes remains without proper legal standards, highlighting the EU's limitations in addressing defence-related issues. Taken together, the current international legal landscape reveals the absence of a coherent framework for the use of AI on the modern battlefield. This is particularly apparent in democratic states, where, despite their emphasis on the rule of law, the governance of military AI remains voluntary due to a lack of regulations. The result is a policy vacuum that allows private corporations to operate as they please in pursuit of their interests, reshaping the moral boundaries of modern warfare.
2.4 Governance Vacuum and Its Consequences
The technological transformation of military weapons in recent decades reflects a change in how military power is produced, managed, and ultimately legitimised. State-controlled defence systems have developed into an intricate network in which private firms like Anduril, Palantir, and Lockheed Martin design and operate core technologies for national and international security. Concurrently, international institutions such as the United Nations, NATO, and the EU have failed to establish effective legal frameworks for the development and deployment of military AI. The growing reliance on private corporations and the lack of regulation have created a governance vacuum. The following section shows how this vacuum translates into an erosion of sovereignty and a series of policy failures rooted in the progressive transfer of authority away from democratic institutions towards private corporations.
3. Discussion of Findings
3.1 Section Introduction
The governance vacuum introduced in the previous section highlights how Western democracies face new challenges arising from the militarisation of AI, raising essential questions about the ethical, legal, and strategic application of these technologies. As private corporations become increasingly integral and influential in defence, democratic governments are losing control over how modern war is conducted, because their oversight mechanisms and laws are outdated and inadequate. This section examines the policy failures arising from these challenges, which ultimately erode the sovereignty of Western democracies. Indeed, the increased reliance on AI systems for decision-making, the lack of clear responsibility when these systems are used, and the dependence on privately owned technology and infrastructure all indicate a transfer of authority away from governmental institutions.
3.2 Policy Failure 1: Delegation of Military Judgement to AI Systems
A predominant policy failure arising from the rapid integration of AI systems into military operations is the growing reliance on privately developed technologies for decision-making. Targeting recommendations and threat detection, for example, are crucial aspects of military decision-making that now increasingly rely on AI platforms that Western governments neither fully understand nor control. Recent research on human-machine interaction on the battlefield indicates that AI systems are slowly shifting decision-making authority. Operators in the field come under severe pressure and, as a result, struggle to understand or challenge the outputs produced by AI models. This phenomenon indicates an overreliance on these systems and constrains human control over military operations. This shift in control and hierarchy is especially problematic because current international and national laws do not ensure that humans retain meaningful control and oversight over these technologies. Due to their speed and complexity, these systems are reshaping command hierarchies by limiting how humans can intervene in critical moments. Essentially, the lack of regulations and laws allows private corporations to influence military operations while operating without meaningful government oversight. This highlights how reliance on AI defence technologies directly challenges the concept of modern sovereignty: when military decisions are increasingly made by systems that governments neither understand nor control, authority shifts from public institutions to private technology companies.
3.3 Policy Failure 2: Accountability Gaps in the Use of Autonomous Systems
Another major policy failure arising from the integration of autonomous systems for military purposes is the difficulty of assigning responsibility when these systems act independently. The traditional frameworks currently in place were designed around clear chains of command, which made it straightforward to hold individuals accountable under humanitarian law. By contrast, when decisions are made by autonomous AI systems, accountability fragments, creating “responsibility gaps” that become particularly apparent when systems cause unlawful harm. Recent policy reviews have highlighted that current laws on individual and state responsibility were designed with human decision-making and oversight in mind, not autonomous systems. When one of these systems independently engages a target, it becomes unclear who is responsible for its actions.
This lack of clear accountability undermines both core democratic principles and international humanitarian law. A further problem is the lack of consistent definitions of what constitutes an autonomous system: governments, international organisations, and private corporations all use different classifications, which creates additional legal uncertainty. Without clear definitions of these technologies, it remains almost impossible to determine who is responsible for them. Together, these examples illustrate how accountability is spread across multiple stakeholders; as long as clear responsibilities remain undefined, governments will be unable to regulate these technologies in war.
3.4 Policy Failure 3: Fragmented International Governance and Geopolitical Competition
The fragmented international governance of military AI constitutes a further policy failure. The bodies meant to oversee the development and deployment of these systems are uncoordinated, lack legitimacy and authority, and rely heavily on voluntary compliance. Consequently, states operate under non-binding principles that private corporations can exploit. Moreover, geopolitical instability and rivalry have further exacerbated this fragmentation. The Carnegie Endowment has shown how major players such as the U.S. avoid rigorous international regulation and constraints out of fear that they would erode their strategic advantage. Similarly, the Western bloc often prioritises competition over collective cooperation, fuelling a modernisation race that encourages the rapid adoption of AI systems without proper oversight and weakens the capacity to establish universal standards. A further example of this fragmentation is the lack of shared international definitions, standards, and risk classifications. The European Parliament’s 2024 study highlights how military AI technologies are addressed differently across institutions, with civilian frameworks like the EU AI Act omitting defence entirely. The result is a set of regulatory gaps: military AI systems are developed and governed under national legislation and interpretation, with no shared benchmarks or minimum governance requirements across democracies. Private companies therefore face inconsistent expectations, making it difficult to hold them accountable and ensure compliance across multiple national jurisdictions. The combination of these factors produces a military landscape driven by strategic competition and institutional inertia rather than by a coordinated regulatory system. These inconsistencies and divisions of international power undermine the creation of effective oversight mechanisms and allow both states and private corporations to operate with minimal constraints, making it harder for Western democracies to maintain control over how AI is integrated into the conduct of war.
3.5 Policy Failure 4: Strategic Dependency on Private Infrastructure
Another policy failure arises from the increased dependency of Western governments and militaries on private corporations. Many modern military operations now rely heavily on privately owned and developed systems and technologies, and the infrastructure that allows these technologies to operate is owned and maintained by the same companies.
This reliance on private entities for defence is shifting control away from governments while weakening their ability to oversee and regulate these companies. A 2024 study by the European Parliament indicates that military AI systems depend on privately owned technology and infrastructure. Because these technologies are protected by commercial rights, governments find it harder to access their internal processes and therefore cannot verify whether they meet specific legal and ethical requirements. This increased reliance creates vulnerabilities that did not exist under earlier, state-controlled models of defence procurement. A 2024 UNIDIR report likewise discusses how reliance on private infrastructure for defence purposes will reshape command hierarchies: when key elements, tools, and systems used on the battlefield depend on a few private suppliers, governments may find themselves locked into long-term “relationships” that limit their flexibility and authority. In other words, private companies can now influence how their technologies and systems operate and can also set the pace and direction of military development. This raises new questions about the sovereignty that governments retain: as militaries rely on privately owned, developed, and maintained technologies and infrastructure, states' ability to exercise autonomous control over their defence capabilities is limited.
3.6 Policy Failure 5: Decline of Democratic Oversight and Transparency
A further source of policy failure is the decline in democratic oversight of military AI, driven largely by corporate discretion, technical complexity, and limited institutional capacity. The complexity of the AI-driven technologies that militaries are adopting makes it significantly more difficult for parliaments, courts, and civil society to obtain the information needed to understand how these systems operate and to scrutinise them. This undermines democratic frameworks of accountability and places key aspects of defence and security under private rather than public oversight. A primary source of this decline in transparency is the growing gap between the expertise of democratic institutions and the technical complexity of AI systems. Research highlights that governments lack the expertise to evaluate the architecture, risks, and limitations of advanced machine-learning systems, making it difficult for them to implement substantive oversight mechanisms. Lacking the tools and knowledge to interrogate system behaviour, governmental and democratic institutions are unable to exercise meaningful control over these technologies. Furthermore, the proprietary protections shielding commercially developed AI systems act as another limit on oversight. According to the European Parliament’s 2024 report, private companies retain ownership of, and thus control over, data structures, training models, and risk-assessment tools. This makes it difficult for governments to review how decisions are reached and whether they comply with humanitarian law, representing a significant erosion of democratic oversight, as elected institutions are restricted in how they can scrutinise technologies that are essential to military applications.
When information essential to accountability is no longer available to governments and civil society, the fundamental principles of transparency and public control are compromised, leading to governance failure and a direct challenge to sovereignty.
3.7 Conclusion
The five policy failures analysed here reveal a significant shift in how authority and sovereignty are exercised in the 21st century. The delegation of decision-making to AI systems, the lack of clear responsibility, fragmentation on the international stage, reliance on private infrastructure, and the decline in transparency indicate that Western states are losing their capacity to control the technologies on which their militaries depend. The result is a political and governmental environment in which private corporations and unregulated technologies now play a key role in how war is waged.
4. Policy Recommendations
To address the policy vacuum analysed in the previous section, new laws must be introduced to strengthen government oversight, limit reliance on private actors, and ensure democratic control over the use of force in war. While none of the following recommendations will fix the issues outlined above on its own, together they will help Western democracies regain their authority over private technology companies. The first priority of international organisations and Western countries should be to establish global standards for human oversight of AI systems used in war. This would help address the “responsibility gaps” by clarifying the boundaries within which autonomous systems can operate without human supervision, while never entirely replacing human judgement in targeting and operational decisions. It would also clarify accountability by ensuring that military personnel and commanders retain their roles within the chain of command. Secondly, Western democracies should introduce independent auditing bodies and certification mechanisms for military AI systems. These auditing organisations would allow states to test the reliability, efficiency, and risk profile of these technologies when used in military operations, even if that means accessing commercially sensitive documents. This policy would increase governments' oversight and control over which technologies are used for military operations, while increasing transparency. Lastly, democratic states should aim to reduce their long-term dependence on private infrastructure. This could be done by having states finance such infrastructure directly, allowing governments to restore control over military AI tools. While full technological autonomy is impossible in the 21st century, partial sovereignty over key infrastructure and technologies would ensure that core defence functions remain under public control and authority. These proposed policies aim to restore the balance between technological innovation and democratic sovereignty. By increasing oversight and transparency and by reducing reliance on privately owned infrastructure, Western governments can improve their current frameworks to ensure the responsible and accountable use of AI technologies in modern warfare.
5. Conclusion
Artificial intelligence has become an integral element of modern warfare, producing a major shift in how military power is produced, legitimised, and controlled. This paper has illustrated the unprecedented capabilities of AI in today's society and outlined how its rapid development has outpaced states' ability to develop effective governance frameworks. By evaluating the five main policy failures, namely the delegation of decision-making to AI systems, the resulting accountability gaps, fragmented international regulation, strategic dependency on private infrastructure, and the decline of democratic oversight, it becomes evident how and why AI has eroded sovereignty in Western states. The governance vacuum presented in this analysis reflects a political problem: the strain between the rapid pace of technological innovation and the slower, more deliberative pace of bureaucratic processes. The increased influence of private firms over a system as intricate and perilous as warfare continues to blur the traditional boundaries between public and private power, raising essential questions about legitimacy, responsibility, and control in the new digital age of warfare. To address these challenges, countries must make sustained political commitments, increase international cooperation, and develop robust oversight mechanisms capable of keeping pace with the complexity and rapid advancement of AI-enabled military technologies. By reasserting democratic principles in the way military AI is governed, Western states can restore sovereignty and ensure that the future of warfare remains subject to public authority, responsibility, and accountability.

Bibliography
Afina, Yasmin, and Giacomo Paoli. “Governance of Artificial Intelligence in the Military Domain: A Multi-Stakeholder Perspective on Priority Areas.” United Nations Institute for Disarmament Research (UNIDIR), September 2024. https://unidir.org/wp-content/uploads/2024/09/UNIDIR_Governance_of_Artificial_Intelligence_in_the_Military_Domain_A_Multi-stakeholder_Perspective_on_Priority_Areas.pdf
Anduril Industries. “Anduril’s Lattice: A Trusted Dual Use — Commercial and Military — Platform for Public Safety, Security, and Defense.” Anduril Industries, July 31, 2023. https://www.anduril.com/article/anduril-s-lattice-a-trusted-dual-use-commercial-and-military-platform-for-public-safety-security/
Anduril Industries. “Fury.” Anduril Industries. Accessed November 18, 2025. https://www.anduril.com/fury/
Anduril Industries. “Mission Autonomy, Lattice.” Anduril Industries. Accessed November 18, 2025. https://www.anduril.com/lattice/mission-autonomy
Csernatoni, Raluca. “Governing Military AI amid a Geopolitical Minefield.” Carnegie Endowment for International Peace, July 17, 2024. https://carnegieendowment.org/research/2024/07/governing-military-ai-amid-a-geopolitical-minefield?lang=en
EU Artificial Intelligence Act. “High-Level Summary of the AI Act.” February 27, 2024. https://artificialintelligenceact.eu/high-level-summary/
Gaeta, Paola. “Who Acts When Autonomous Weapons Strike? The Act Requirement for Individual Criminal Responsibility and State Responsibility.” Journal of International Criminal Justice 21, no. 5 (January 30, 2024): 1033–55. https://doi.org/10.1093/jicj/mqae001
Johnson, James. “The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-Enabled Warfare.” Journal of Military Ethics 21, no. 3–4 (February 12, 2023): 246–71. https://doi.org/10.1080/15027570.2023.2175887
Kmentt, Alexander. “Geopolitics and the Regulation of Autonomous Weapons Systems.” Arms Control Association, February 2025. https://www.armscontrol.org/act/2025-01/features/geopolitics-and-regulation-autonomous-weapons-systems
Lockheed Martin. “How Lockheed Martin Is Using AI to Evolve PAC-3 Capability.” Lockheed Martin, October 13, 2025. https://www.lockheedmartin.com/en-us/news/features/2025/how-lockheed-martin-is-using-ai-to-evolve-pac-3-capability.html
Molas-Gallart, Jordi. “Which Way to Go? Defence Technology and the Diversity of ‘Dual-Use’ Technology Transfer.” Research Policy 26, no. 3 (October 1997): 367–85. https://doi.org/10.1016/s0048-7333(97)00023-1
NATO. “Summary of the NATO Artificial Intelligence Strategy.” NATO, October 22, 2021. https://www.nato.int/en/about-us/official-texts-and-resources/official-texts/2021/10/22/summary-of-the-nato-artificial-intelligence-strategy
Palantir. “Gotham.” Palantir. Accessed November 19, 2025. https://www.palantir.com/platforms/gotham/
Palantir. “The Software Advantage for Multi-Domain Operations.” Palantir. Accessed November 19, 2025. https://www.palantir.com/offerings/defense/army/#edge
Taddeo, Mariarosaria, and Alexander Blanchard. “A Comparative Analysis of the Definitions of Autonomous Weapons Systems.” Science and Engineering Ethics 28, no. 5 (August 23, 2022). https://doi.org/10.1007/s11948-022-00392-3
Ünver, Akın. “Artificial Intelligence (AI) and Human Rights: Using AI as a Weapon of Repression and Its Impact on Human Rights.” May 2024. https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA%282024%29754450_EN.pdf
Zaidan, Esmat, and Imad Antoine Ibrahim. “AI Governance in a Complex and Rapidly Changing Regulatory Landscape: A Global Perspective.” Humanities and Social Sciences Communications 11, no. 1 (September 1, 2024): 1–18. https://doi.org/10.1057/s41599-024-03560-x
