Between Innovation and Control: Legal Frameworks for the Use of Artificial Intelligence in Armed Conflict
Akademia Sztuki Wojennej, Poland
Submission date: 2025-11-20
Final revision date: 2025-11-22
Acceptance date: 2025-11-22
Publication date: 2025-11-22
Cybersecurity and Law 2025;13(1):133-145
ABSTRACT
Objectives:
The research aims to analyze and compare the legal and regulatory frameworks, both international and national (Poland), governing the deployment of Artificial Intelligence (AI) in weapons systems. The objective is to determine how existing regulations balance the pursuit of technological advantage against the legal requirements of International Humanitarian Law (IHL), control, and accountability.
Methods:
The methodology employed source document analysis and comparison (treaties, resolutions, strategies, reports), critically assessing the level of regulatory detail concerning Meaningful Human Control (MHC), the attribution of responsibility, and AI explainability within the context of the Law of Armed Conflict (LOAC).
Results:
The analysis revealed a regulatory divergence between NATO's technological imperative and the UN's normative postulate, resulting in a normative gap: the absence of a legally binding, operationally viable definition of MHC. This deficiency creates the risk of a "responsibility gap" and the erosion of IHL principles such as distinction and proportionality. At the national level, Poland's 2039 AI Strategy, while strategically sound, lacks the executive legal acts needed to operationalize MHC requirements.
Conclusions:
The study concludes that the effective implementation of AI in weapons systems depends critically on an urgent shift from strategic declarations to binding national executive regulations, and on the establishment of a legally binding MHC definition, to ensure full compliance with IHL.