Abstract: Adversarial attacks on artificial intelligence (AI) systems are a growing concern in information science. These attacks manipulate input data to deceive AI systems into producing inaccurate or unexpected results. The purpose of this project is to investigate the impact of adversarial attacks on various AI systems and to develop effective defence mechanisms against them. The project will begin by selecting a neural network model to attack and applying attack methods such as gradient-based and decision-based attacks to generate adversarial examples. The effectiveness of each attack will be evaluated by testing the adversarial examples on the target model and measuring the success rate and the degree of perturbation needed to generate the examples. To defend against these attacks, the project will modify the neural network architecture or training data and apply defensive techniques such as adversarial training and input sanitization. The project aims to contribute to the development of secure and reliable AI systems that can resist adversarial attacks. By exploring different attack methods and defence mechanisms, we hope to identify effective strategies for mitigating the risks of adversarial attacks in critical applications such as autonomous vehicles, medical diagnosis, and financial fraud detection. The findings will be valuable to researchers, engineers, and practitioners in AI and information science working to build robust and secure AI systems.
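As a concrete illustration of the gradient-based attacks, success-rate evaluation, and adversarial training named in the abstract, the sketch below implements the Fast Gradient Sign Method (FGSM) in PyTorch. This is a minimal sketch, not the paper's implementation: the perturbation budget `epsilon`, the assumption of inputs scaled to [0, 1], and the equal clean/adversarial loss weighting in the training step are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Gradient-based attack (FGSM): perturb inputs in the direction
    that maximizes the loss, bounded by epsilon.

    Assumes inputs are scaled to [0, 1]; epsilon is illustrative.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def attack_success_rate(model, x, y, epsilon=0.03):
    """Evaluate the attack: fraction of adversarial examples that
    flip the model's prediction away from the true label."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One adversarial-training step: fit a mix of clean and
    FGSM-perturbed inputs (equal weighting is an assumption)."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients left by the attack's backward pass
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the success rate and the chosen `epsilon` together capture the two evaluation measures the abstract describes: how often the attack succeeds, and how much perturbation it needs.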

Keywords: Adversarial attacks, Artificial intelligence (AI), Defence mechanisms, Neural network models, Attack methods, Robust AI systems


DOI: 10.17148/IJARCCE.2023.12636
