Abstract: In this work, we propose DRLEER (Dynamic Reinforcement Learning-Based Energy-Efficient Routing), a novel routing protocol designed to maximize energy efficiency and prolong the operational lifespan of Internet of Things (IoT) networks. DRLEER aims to minimize energy consumption while optimizing data delivery by employing a dynamic Reinforcement Learning approach to routing decisions. The protocol comprises three key phases: network design and Cluster Head (CH) selection, clustering, and energy-aware data transmission.

During the first phase, DRLEER calculates Q-values for CH selection by considering both hop count and initial energy, allowing the network to identify the most appropriate CHs for efficient communication. Subsequently, in the clustering phase, CHs broadcast invitation messages to nearby nodes, while nodes farther from the base station associate with the closest clusters. This process results in an optimally organized network structure.
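
As an illustration of this phase, the sketch below shows one way a Q-value combining initial energy and hop count could be computed and used to rank CH candidates. The weighting scheme, normalization, and field names are assumptions made for clarity; the abstract does not specify DRLEER's exact formulation.

```python
# Minimal sketch of Q-value-based Cluster Head (CH) selection, assuming the
# Q-value is a weighted combination of normalized initial energy and hop count
# to the base station (weights and normalization are illustrative, not from the paper).

def ch_q_value(initial_energy, hop_count, max_energy, max_hops,
               w_energy=0.6, w_hops=0.4):
    """Higher initial energy and fewer hops to the base station give a higher Q-value."""
    energy_term = initial_energy / max_energy       # normalize energy to [0, 1]
    hop_term = 1.0 - (hop_count / max_hops)         # fewer hops -> larger term
    return w_energy * energy_term + w_hops * hop_term

def select_cluster_heads(nodes, num_chs):
    """Rank candidate nodes by Q-value and pick the top num_chs as cluster heads."""
    max_energy = max(n["initial_energy"] for n in nodes)
    max_hops = max(n["hop_count"] for n in nodes)
    ranked = sorted(
        nodes,
        key=lambda n: ch_q_value(n["initial_energy"], n["hop_count"],
                                 max_energy, max_hops),
        reverse=True,
    )
    return ranked[:num_chs]

# Example: three candidate nodes described only by initial energy and hop count.
nodes = [
    {"id": 1, "initial_energy": 2.0, "hop_count": 3},
    {"id": 2, "initial_energy": 1.5, "hop_count": 1},
    {"id": 3, "initial_energy": 0.8, "hop_count": 4},
]
print([n["id"] for n in select_cluster_heads(nodes, num_chs=1)])
```
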

The final phase uses Reinforcement Learning to make energy-conscious routing decisions based on residual energy and network conditions. An energy threshold controls CH replacement and maintains network stability. Simulation results show that DRLEER significantly outperforms existing protocols, extending network lifespan to 5866 rounds, reducing average end-to-end delay to 55 ms, and conserving energy with an average consumption of 2.75 per round. Furthermore, DRLEER successfully delivers 14.2 × 10^5 packets, demonstrating efficient data delivery under energy constraints.
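
For concreteness, the following sketch outlines the kind of Q-learning update and threshold check this phase implies: rewards favor next hops with more residual energy, and a CH whose residual energy falls below a threshold is flagged for replacement. The learning rate, discount factor, threshold value, and reward shaping are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of energy-aware routing with tabular Q-learning and a fixed
# energy threshold for CH replacement. All parameter values are assumptions.

import random

ALPHA = 0.1              # learning rate (assumed)
GAMMA = 0.9              # discount factor (assumed)
ENERGY_THRESHOLD = 0.5   # residual-energy level that triggers CH replacement (assumed)

def update_q(q_table, state, action, reward, next_state, next_actions):
    """Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((q_table.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def choose_next_hop(q_table, state, neighbors, epsilon=0.1):
    """Epsilon-greedy choice among neighboring nodes (exploration vs. exploitation)."""
    if random.random() < epsilon:
        return random.choice(neighbors)
    return max(neighbors, key=lambda n: q_table.get((state, n), 0.0))

def maybe_replace_ch(ch_residual_energy):
    """A CH whose residual energy drops below the threshold is flagged for replacement."""
    return ch_residual_energy < ENERGY_THRESHOLD

# Example: reward the chosen next hop in proportion to its residual energy (assumed shaping).
q_table = {}
update_q(q_table, state="node_7", action="node_3",
         reward=0.8, next_state="node_3", next_actions=["node_1", "base_station"])
print(choose_next_hop(q_table, "node_7", ["node_3", "node_9"]))
```
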

Overall, DRLEER provides a scalable, adaptable, and energy-aware solution for IoT routing, extending network service life and conserving resources through a low-power Reinforcement Learning framework.

Keywords: IoT, WSN, Deep Reinforcement Learning, Energy Efficiency


DOI: 10.17148/IJARCCE.2024.131265
