Abstract: This paper examines algorithmic bias in AI systems used for military decision-making, identifies key sources of unfairness, and demonstrates practical mitigation strategies through implemented machine-learning experiments. We generate a synthetic but realistic dataset that mimics military decisions (e.g., target identification and threat classification) with a binary sensitive attribute (e.g., group A vs. group B). We implement baseline classifiers (Logistic Regression, Random Forest), measure fairness metrics (statistical parity difference, equal opportunity difference, disparate impact), and apply two mitigation strategies: reweighing (pre-processing) and group-specific thresholding (post-processing). Results cover accuracy, fairness trade-offs, and visualizations. The paper concludes with recommendations and limitations.
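The three fairness metrics named in the abstract can be computed directly from group-wise prediction rates. The sketch below is illustrative only (it is not the authors' experimental code); the function name `fairness_metrics` and the toy arrays are assumptions for demonstration.

```python
import numpy as np

def fairness_metrics(y_true, y_pred, group):
    """Compute three common group-fairness metrics.

    group is a boolean array: True = group A, False = group B.
    y_true / y_pred are 0-1 arrays of labels and predictions.
    """
    a, b = group, ~group
    # Positive-prediction (selection) rate per group
    rate_a = y_pred[a].mean()
    rate_b = y_pred[b].mean()
    # Statistical parity difference: P(y_hat=1 | A) - P(y_hat=1 | B)
    spd = rate_a - rate_b
    # Disparate impact: ratio of selection rates (here B relative to A)
    di = rate_b / rate_a if rate_a > 0 else float("inf")
    # Equal opportunity difference: gap in true-positive rates
    tpr_a = y_pred[a & (y_true == 1)].mean()
    tpr_b = y_pred[b & (y_true == 1)].mean()
    eod = tpr_a - tpr_b
    return spd, di, eod
```

On a balanced toy sample, equal selection rates yield a statistical parity difference of 0 and a disparate impact of 1, while unequal true-positive rates still show up in the equal opportunity difference.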
Keywords: Algorithmic bias, fairness, military AI, reweighing, thresholding, fairness metrics, machine learning.
DOI: 10.17148/IJARCCE.2025.14925
[1] Abhishek Singh, Ajay Kumar Maurya, "Algorithmic Bias in Military AI Systems: Challenges and Solutions for Fair and Accurate Decision-Making," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2025.14925