Legal Implications of Using Artificial Intelligence in Military Weapons: New Challenges for the Law of Armed Conflict

Document Type: Original Article

Author

Department of International Law, Faculty of National Security, National Defense University

Abstract

The advancement of military science and technology and the use of artificial intelligence (AI) in modern and autonomous weapons, with the aim of carrying out combat missions without human intervention, have attracted the attention of most states and armies of the world. Under the rules of the law of armed conflict (LOAC), the right of belligerent parties to develop and employ tools and weapons of any kind is not unlimited. Today, the use of new tools and weapons equipped with AI has raised many legal concerns and has made the application of the rules and principles governing armed conflicts ambiguous. Using a descriptive-analytical method, this article seeks to answer two questions: as AI is developed and applied in autonomous weapons and decision-making is transferred from humans to AI, what challenges will compliance with the principles of international humanitarian law face, and who is responsible for violations of those principles? The results of this research show that although AI offers opportunities for recognizing military targets, improving the accuracy of weapons, and reducing human casualties, there is an urgent need for a new legal framework to regulate robotics and intelligent weapons. It therefore appears that even with the use of AI, new and autonomous weapons will not be able to comply with the principles of the law of armed conflict.

Keywords