Abstract: The rapid adoption of cloud computing has introduced significant security vulnerabilities, making cloud environments a prime target for cyber threats. Traditional security frameworks struggle to detect sophisticated attacks because of their high computational complexity, limited scalability, and reliance on handcrafted feature engineering. This study proposes an AI-powered network security framework built on the Informer model, a transformer-based architecture optimized for long-sequence time-series data. The framework selects features using Mutual Information and employs ProbSparse Self-Attention to improve anomaly detection accuracy. By processing cloud network logs efficiently, it enables real-time threat identification and mitigation with minimal false positives. The framework also incorporates automated security enforcement mechanisms, including access control, encryption, and intrusion prevention, to strengthen cloud security resilience. Performance evaluations show that the proposed Informer-based model achieves higher accuracy, precision, and recall than existing CNN-LSTM and TabTransformer models. With an accuracy of 99.5% and low false positive and false negative rates, the framework offers a scalable and efficient solution for detecting cloud-based cyber threats. This research contributes to strengthening cloud security by providing a robust, adaptive, real-time anomaly detection mechanism.
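
As a rough illustration of the Mutual Information feature-selection stage mentioned in the abstract, the sketch below ranks network-log features by their mutual information with an attack label using scikit-learn. The file name, column names, and the number of retained features are illustrative assumptions, not details taken from the paper; the selected features would then be windowed into sequences for the Informer model.

```python
# Minimal sketch of Mutual Information feature selection for network-log data.
# Assumptions (not from the paper): a CSV of flow records with a binary
# "label" column (0 = benign, 1 = attack) and numeric feature columns.
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif

logs = pd.read_csv("cloud_network_logs.csv")   # hypothetical log export
X = logs.drop(columns=["label"])               # candidate features
y = logs["label"]                              # benign / attack labels

# Score each feature by its mutual information with the label and keep the
# top-k most informative ones before building sequences for the detector.
selector = SelectKBest(score_func=mutual_info_classif, k=20)
X_selected = selector.fit_transform(X, y)

# Inspect the ranking to see which log attributes carry the most signal.
ranked = sorted(zip(X.columns, selector.scores_), key=lambda t: t[1], reverse=True)
for name, score in ranked[:10]:
    print(f"{name}: {score:.4f}")
```

Mutual information captures non-linear dependencies between a feature and the label, which is why it is often preferred over simple correlation for intrusion-detection logs; the choice of k here is arbitrary and would be tuned in practice.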

Keywords: Cloud Security, Informer Model, Anomaly Detection, Cyber Threats, Transformer-Based Architecture.


DOI: 10.17148/IJARCCE.2021.10485
