Abstract: Data is the driving force behind modern machine learning and deep learning algorithms. The accuracy and efficiency of these algorithms depend largely on the quality of the data they are trained on; consequently, data poisoning attacks pose a significant threat to the integrity of machine learning models. Current automated methods and human inspection techniques often fail to identify clean subsets with high precision. In this paper, a layer-wise feature analysis is performed on both poisoned and benign samples of the target class. The analysis reveals that the feature differences between poisoned and benign samples are most pronounced at a critical layer, one that is frequently overlooked by existing defences. Building on the characteristic differences between suspicious and benign samples at this critical layer, a simple yet effective method is proposed for filtering out poisoned samples. The effectiveness of the defence is verified through in-depth experiments on two benchmark datasets.

Keywords: Data poisoning, Machine Learning, Deep Neural Networks, Feature Analysis.


DOI: 10.17148/IJARCCE.2023.12815
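The sketch below (PyTorch) is a minimal illustration of the kind of critical-layer filtering the abstract describes: features of suspicious samples are compared against a benign-class reference at a chosen layer, and samples that deviate too far are flagged. The model, the choice of critical layer, the benign reference set, and the cosine-distance threshold are all illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def layer_features(model, layer, inputs):
    """Collect flattened activations of `layer` for one batch of inputs."""
    feats = []
    handle = layer.register_forward_hook(
        lambda module, inp, out: feats.append(out.detach().flatten(1))
    )
    with torch.no_grad():
        model(inputs)
    handle.remove()
    return feats[0]  # shape: (num_samples, feature_dim)

def filter_suspicious(model, layer, benign_inputs, suspect_inputs, threshold=0.5):
    """Flag suspect samples whose critical-layer features deviate from the
    benign centroid by more than `threshold` in cosine distance.
    `threshold` is an illustrative value, not one reported in the paper."""
    benign_feats = layer_features(model, layer, benign_inputs)
    centroid = benign_feats.mean(dim=0, keepdim=True)
    suspect_feats = layer_features(model, layer, suspect_inputs)
    cos_dist = 1.0 - F.cosine_similarity(suspect_feats, centroid)
    return cos_dist > threshold  # True -> likely poisoned, filter out
```

Cosine distance to a benign centroid is only one simple way to measure feature deviation at the critical layer; the statistic and filtering rule used in the paper may differ.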
