Research Article | Peer-Reviewed

Duty Cycle Scheduling in Wireless Sensor Networks Using an Exploratory Strategy-Directed MADDPG Algorithm

Received: 18 January 2024     Accepted: 29 January 2024     Published: 28 February 2024
Abstract

This paper presents an in-depth study of the application of Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithms with an exploratory strategy for duty cycle scheduling (DCS) in wireless sensor networks (WSNs). The focus is on optimizing the performance of sensor nodes in terms of energy efficiency and event detection rates under varying environmental conditions. Through a series of simulations, we investigate the impact of key parameters such as the sensor specificity constant α and the Poisson rate of events on the learning and operational efficacy of sensor nodes. Our results demonstrate that the MADDPG algorithm with an exploratory strategy outperforms traditional reinforcement learning algorithms, particularly in environments characterized by high event rates and the need for precise energy management. The exploratory strategy enables a more effective balance between exploration and exploitation, leading to improved policy learning and adaptation in dynamic and uncertain environments. Furthermore, we explore the sensitivity of different algorithms to the tuning of the sensor specificity constant α, revealing that lower values generally yield better performance by reducing energy consumption without significantly compromising event detection. The study also examines the algorithms' robustness against the variability introduced by different event Poisson rates, emphasizing the importance of algorithm selection and parameter tuning in practical WSN applications. The insights gained from this research provide valuable guidelines for the deployment of sensor networks in real-world scenarios, where the trade-off between energy consumption and event detection is critical. Our findings suggest that the integration of exploratory strategies in MADDPG algorithms can significantly enhance the performance and reliability of sensor nodes in WSNs.
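The core trade-off the abstract describes — a node that sleeps saves energy but misses Poisson-distributed events — can be sketched in a minimal simulation. The sketch below is illustrative only: the reward shaping (detection rate minus α-weighted energy) and the function names are assumptions for this example, not the paper's exact formulation, and no MADDPG training is implemented here.

```python
import math
import random

def simulate_duty_cycle(duty_cycle, event_rate, alpha, horizon=10_000, seed=0):
    """Simulate one sensor node awake a fixed fraction of time slots.

    duty_cycle : probability the node listens in a given slot (0..1)
    event_rate : Poisson rate of events per slot (lambda)
    alpha      : energy-weighting constant (hypothetical reward shaping,
                 standing in for the paper's sensor specificity constant)
    Returns (detection_rate, mean_energy_per_slot, reward).
    """
    rng = random.Random(seed)
    detected = total_events = 0
    energy = 0.0
    for _ in range(horizon):
        awake = rng.random() < duty_cycle
        if awake:
            energy += 1.0  # unit energy cost per awake slot
        # Sample events in this slot ~ Poisson(event_rate) by inverse transform
        p = math.exp(-event_rate)
        cum, k = p, 0
        u = rng.random()
        while u > cum:
            k += 1
            p *= event_rate / k
            cum += p
        total_events += k
        if awake:
            detected += k  # events are only observed while listening
    detection_rate = detected / total_events if total_events else 1.0
    reward = detection_rate - alpha * energy / horizon
    return detection_rate, energy / horizon, reward
```

Sweeping `duty_cycle` against this reward shows why lower α values favor higher detection: the energy penalty shrinks relative to the detection term, which mirrors the abstract's observation that lower α reduces energy pressure without greatly compromising detection.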

Published in International Journal of Sensors and Sensor Networks (Volume 12, Issue 1)
DOI 10.11648/j.ijssn.20241201.11
Page(s) 1-12
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

Multi-Agent Systems, Deep Reinforcement Learning, MADDPG, Wireless Sensor Networks, Energy Efficiency, Event Detection, Exploratory Strategy

References
[1] Begum S, Wang S, Krishnamachari B, Helmy A. ELECTION: Energy-efficient and low-latency scheduling technique for wireless sensor networks. Proceedings of the 29th Annual IEEE Conference on Local Computer Networks (LCN). Tampa, FL, USA: IEEE; 2004. p. 16-18. https://doi.org/10.1109/LCN.2004.49
[2] Dantu R, Abbas K, O'Neill M II, Mikler A. Data centric modeling of environmental sensor networks. Proceedings of the IEEE Globecom. Dallas, TX, USA: IEEE; 2004. p. 447-452. https://doi.org/10.1109/GLOCOMW.2004.1417621
[3] Yang Y R, Lam S S. General AIMD congestion control. Proceedings of the 2000 International Conference on Network Protocols; 2000. p. 187-198. https://doi.org/10.1109/ICNP.2000.896303
[4] Ucer E, Kisacikoglu M C, Yuksel M. Analysis of Decentralized AIMD-based EV Charging Control. 2019 IEEE Power & Energy Society General Meeting (PESGM). Atlanta, GA, USA; 2019. p. 1-5. https://doi.org/10.1109/PESGM40551.2019.8973725
[5] Lee J, Lee D, Kim J, Cho W, Pajak J. A dynamic sensing cycle decision scheme for energy efficiency and data reliability in wireless sensor networks. Lecture Notes in Computer Science. 2007; 4681: 218-229. https://doi.org/10.1007/978-3-540-74171-8_22
[6] Jain A, Chang E Y. Adaptive sampling for sensor networks. Proceedings of the International Workshop on Data Management for Sensor Networks. Toronto, ON, Canada; 2004. p. 10-16. https://doi.org/10.1145/1052199.1052202
[7] Stone P, Veloso M M. Team-partitioned, Opaque-transition Reinforcement Learning. Proceedings of the Third Annual Conference on Autonomous Agents; 1999.
[8] Foerster J, Nardelli N, Farquhar G, Afouras T, Torr P H S, Kohli P, Whiteson S. Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning. arXiv preprint; 2017.
[9] Nguyen T T, Nguyen N D, Nahavandi S. Deep Reinforcement Learning for Multi-Agent Systems: A Review of Challenges, Solutions and Applications. arXiv preprint; 2018.
[10] Samvelyan M, Rashid T, Schroeder de Witt C, Farquhar G, Nardelli N, Rudner T G J, Hung C M, Torr P H S, Foerster J, Whiteson S. The StarCraft Multi-Agent Challenge. arXiv preprint; 2019.
[11] Chu T, Wang J, Codecà L, Li Z. Multi-Agent Deep Reinforcement Learning for Large-scale Traffic Signal Control. arXiv preprint; 2019.
[12] Lowe R, Wu Y, Tamar A, Harb J, Abbeel P, Mordatch I. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in Neural Information Processing Systems. 2017; 30.
[13] Wang D, Zhang W, Song B, Du X, Guizani M. Market-Based Model in CR-WSN: A Q-Probabilistic Multi-agent Learning Approach. arXiv preprint; 2019.
[14] Zhang K, Yang Z, Başar T. Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms. arXiv preprint; 2019.
[15] Wang C, Shen X, Wang H, Xie W, Zhang H, Mei H. Multi-Agent Reinforcement Learning-Based Routing Protocol for Underwater Wireless Sensor Networks with Value of Information. IEEE Sensors Journal. 2023. https://doi.org/10.1109/JSEN.2023.3345947
[16] Ren J, et al. MeFi: Mean Field Reinforcement Learning for Cooperative Routing in Wireless Sensor Network. IEEE Internet of Things Journal. 2024; 11(1): 995-1011. https://doi.org/10.1109/JIOT.2023.3289888
[17] Liang Y, Wu H, Wang H. Asynchronous Multi-Agent Reinforcement Learning for Collaborative Partial Charging in Wireless Rechargeable Sensor Networks. IEEE Transactions on Mobile Computing. 2023. https://doi.org/10.1109/TMC.2023.3299238
[18] Cao J, Liu J, Dou J, Hu C, Cheng J, Wang S. Multi-Agent Reinforcement Learning Charging Scheme for Underwater Rechargeable Sensor Networks. IEEE Communications Letters. 2023. https://doi.org/10.1109/LCOMM.2023.3345362
[19] Zhao M, Wang G, Fu Q, Guo X, Chen Y, Li T, Liu X. MW-MADDPG: A Meta-learning Based Decision-making Method for Collaborative UAV Swarm. Frontiers in Neurorobotics. 2023; 17. https://doi.org/10.3389/fnbot.2023.1243174
[20] Liu X, Tan Y. Feudal Latent Space Exploration for Coordinated Multi-Agent Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems. 2023; 34(10): 7775-7783. https://doi.org/10.1109/TNNLS.2022.3146201
Cite This Article
  • APA Style

    Wu, L., Liu, P., Qu, J., Zhang, C., Zhang, B. (2024). Duty Cycle Scheduling in Wireless Sensor Networks Using an Exploratory Strategy-Directed MADDPG Algorithm. International Journal of Sensors and Sensor Networks, 12(1), 1-12. https://doi.org/10.11648/j.ijssn.20241201.11


    ACS Style

    Wu, L.; Liu, P.; Qu, J.; Zhang, C.; Zhang, B. Duty Cycle Scheduling in Wireless Sensor Networks Using an Exploratory Strategy-Directed MADDPG Algorithm. Int. J. Sens. Sens. Netw. 2024, 12(1), 1-12. doi: 10.11648/j.ijssn.20241201.11


    AMA Style

    Wu L, Liu P, Qu J, Zhang C, Zhang B. Duty Cycle Scheduling in Wireless Sensor Networks Using an Exploratory Strategy-Directed MADDPG Algorithm. Int J Sens Sens Netw. 2024;12(1):1-12. doi: 10.11648/j.ijssn.20241201.11


  • @article{10.11648/j.ijssn.20241201.11,
      author = {Liangshun Wu and Peilin Liu and Junsuo Qu and Cong Zhang and Bin Zhang},
      title = {Duty Cycle Scheduling in Wireless Sensor Networks Using an Exploratory Strategy-Directed MADDPG Algorithm},
      journal = {International Journal of Sensors and Sensor Networks},
      volume = {12},
      number = {1},
      pages = {1-12},
      doi = {10.11648/j.ijssn.20241201.11},
      url = {https://doi.org/10.11648/j.ijssn.20241201.11},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ijssn.20241201.11},
     year = {2024}
    }
    


  • TY  - JOUR
    T1  - Duty Cycle Scheduling in Wireless Sensor Networks Using an Exploratory Strategy-Directed MADDPG Algorithm
    AU  - Liangshun Wu
    AU  - Peilin Liu
    AU  - Junsuo Qu
    AU  - Cong Zhang
    AU  - Bin Zhang
    Y1  - 2024/02/28
    PY  - 2024
    N1  - https://doi.org/10.11648/j.ijssn.20241201.11
    DO  - 10.11648/j.ijssn.20241201.11
    T2  - International Journal of Sensors and Sensor Networks
    JF  - International Journal of Sensors and Sensor Networks
    JO  - International Journal of Sensors and Sensor Networks
    SP  - 1
    EP  - 12
    PB  - Science Publishing Group
    SN  - 2329-1788
    UR  - https://doi.org/10.11648/j.ijssn.20241201.11
    VL  - 12
    IS  - 1
    ER  - 


Author Information
  • School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China

  • School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China

  • Xi'an Key Laboratory of Advanced Control and Intelligent Process, School of Automation, Xi’an University of Posts & Telecommunications, Xi’an, China

  • Xi'an Key Laboratory of Advanced Control and Intelligent Process, School of Automation, Xi’an University of Posts & Telecommunications, Xi’an, China

  • Political Science and Law, Xinjiang University, Tumushuke, China; Department of Computer Science, City University of Hong Kong, Hong Kong, China; School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China
