Between Innovation and Control: Legal Frameworks for the Use of Artificial Intelligence in Armed Conflict
 
1 Akademia Sztuki Wojennej, Poland
 
 
Submission date: 20-11-2025
Last revision date: 22-11-2025
Acceptance date: 22-11-2025
Publication date: 22-11-2025
 
 
Corresponding author:
Kacper Zdrojewski
Akademia Sztuki Wojennej, Warszawa, Poland
 
 
Cybersecurity and Law 2025;13(1):133-145
 
ABSTRACT
Objectives:
The research aims to analyze and compare the legal and regulatory frameworks, both international and national (Poland), governing the deployment of Artificial Intelligence (AI) in weapons systems. The objective is to determine how existing regulations balance the pursuit of technological advantage against the legal requirements of International Humanitarian Law (IHL), control, and accountability.

Methods:
The methodology employed source-document analysis and comparison (treaties, resolutions, strategies, reports), critically assessing the level of detail in regulations concerning Meaningful Human Control (MHC), attribution of responsibility, and AI explainability within the context of the Law of Armed Conflict (LOAC).

Results:
The analysis revealed a regulatory divergence between NATO's technological imperative and the UN's normative postulate, resulting in a normative gap: the absence of a legally binding, operationally viable definition of MHC. This deficiency creates the risk of a "responsibility gap" and erodes IHL principles such as distinction and proportionality. Nationally, the 2039 AI Strategy, while strategically sound, lacks the executive legal acts needed to operationalize MHC requirements.

Conclusions:
The study concludes that the effective implementation of AI in weapons systems depends critically on an urgent shift from strategic declarations to binding national executive regulations and on the establishment of a legally binding definition of MHC, so as to ensure full compliance with IHL.
ISSN: 2658-1493