Machine Learning Research

| Peer-Reviewed |

Performance Evaluation of Cooperative RL Algorithms for Dynamic Decision Making in Retail Shop Application

Received: 27 September 2017    Accepted: 20 October 2017    Published: 12 December 2017

Abstract

This paper proposes Expertise-based Multi-agent Cooperative Reinforcement Learning Algorithms (EMCRLA) for dynamic decision making in a retail application and compares their performance with that of standard Cooperative Reinforcement Learning Algorithms. Several cooperation schemes for multi-agent cooperative reinforcement learning are proposed: EQ-learning, the EGroup scheme, the EDynamic scheme, and the EGoal-driven scheme. The implementation results demonstrate that the proposed cooperation schemes can speed up the convergence of groups of agents toward good action policies. The approach is developed for three retailer stores in a retail marketplace. The retailers can assist one another and profit from shared knowledge while learning their own strategies, which reflect their individual aims and interests. The vendors act as learning agents that use cooperative learning to train jointly in the shared environment. Under reasonable assumptions about each vendor's stocking policy, restocking period, and customer arrival process, the problem is modeled as a Markov decision process, which makes it possible to design learning algorithms. The proposed algorithms effectively learn dynamic customer behavior. The paper first presents results of the Cooperative Reinforcement Learning Algorithms for three shop agents over a one-year sales period, and then presents results of the proposed approach for the same three agents over the same period. The results obtained with the proposed expertise-based cooperation approach show that such methods lead to faster convergence of agents in a dynamic environment.
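
To make the idea concrete, the following is a minimal sketch (in Python) of expertise-weighted cooperative Q-learning for shop agents. It is an illustration only, not the authors' implementation: the ShopAgent class, the use of cumulative reward as the expertise measure, and the Q-table blending rule are assumptions made for this example.

# Minimal sketch of expertise-weighted cooperative Q-learning among shop agents.
# Assumptions (not taken from the paper): a shared discrete state/action space,
# cumulative reward as the expertise measure, and an expertise-weighted average
# of Q-tables as the cooperation step.
import random
from collections import defaultdict

class ShopAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.Q = defaultdict(float)      # Q[(state, action)] -> estimated value
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.expertise = 0.0             # cumulative reward, used as an expertise proxy

    def choose_action(self, state):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.Q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.Q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.Q[(state, action)] += self.alpha * (target - self.Q[(state, action)])
        self.expertise += reward

def share_expertise(agents):
    # Cooperation step: every agent adopts an expertise-weighted average of all
    # agents' Q-tables, so more successful shops influence the blend more strongly.
    total = sum(a.expertise for a in agents) or 1.0
    weights = [a.expertise / total for a in agents]
    keys = {k for a in agents for k in a.Q}
    blended = {k: sum(w * a.Q[k] for w, a in zip(weights, agents)) for k in keys}
    for a in agents:
        a.Q = defaultdict(float, blended)

In such a sketch, share_expertise would be called periodically (for example, every few episodes) so that shops with higher accumulated reward contribute more to the shared estimate; the actual weighting and cooperation schedule used by the EQ, EGroup, EDynamic, and EGoal-driven schemes are defined in the paper.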

DOI 10.11648/j.mlr.20170204.14
Published in Machine Learning Research (Volume 2, Issue 4, December 2017)
Page(s) 133-147
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2017. Published by Science Publishing Group

Keywords

Cooperation Schemes, Multi-Agent Learning, Reinforcement Learning

Author Information
  • Deepak Annasaheb Vidhate, Department of Computer Engineering, College of Engineering, Pune, India

  • Parag Arun Kulkarni, iKnowlation Research Labs Pvt. Ltd., Pune, India

Cite This Article
  • APA Style

    Vidhate, D. A., & Kulkarni, P. A. (2017). Performance Evaluation of Cooperative RL Algorithms for Dynamic Decision Making in Retail Shop Application. Machine Learning Research, 2(4), 133-147. https://doi.org/10.11648/j.mlr.20170204.14


    ACS Style

    Vidhate, D. A.; Kulkarni, P. A. Performance Evaluation of Cooperative RL Algorithms for Dynamic Decision Making in Retail Shop Application. Mach. Learn. Res. 2017, 2(4), 133-147. doi: 10.11648/j.mlr.20170204.14


    AMA Style

    Vidhate DA, Kulkarni PA. Performance Evaluation of Cooperative RL Algorithms for Dynamic Decision Making in Retail Shop Application. Mach Learn Res. 2017;2(4):133-147. doi: 10.11648/j.mlr.20170204.14


  • @article{10.11648/j.mlr.20170204.14,
      author = {Deepak Annasaheb Vidhate and Parag Arun Kulkarni},
      title = {Performance Evaluation of Cooperative RL Algorithms for Dynamic Decision Making in Retail Shop Application},
      journal = {Machine Learning Research},
      volume = {2},
      number = {4},
      pages = {133-147},
      doi = {10.11648/j.mlr.20170204.14},
      url = {https://doi.org/10.11648/j.mlr.20170204.14},
      eprint = {https://download.sciencepg.com/pdf/10.11648.j.mlr.20170204.14},
      abstract = {This paper proposes Expertise-based Multi-agent Cooperative Reinforcement Learning Algorithms (EMCRLA) for dynamic decision making in a retail application and compares their performance with that of standard Cooperative Reinforcement Learning Algorithms. Several cooperation schemes for multi-agent cooperative reinforcement learning are proposed: EQ-learning, the EGroup scheme, the EDynamic scheme, and the EGoal-driven scheme. The implementation results demonstrate that the proposed cooperation schemes can speed up the convergence of groups of agents toward good action policies. The approach is developed for three retailer stores in a retail marketplace. The retailers can assist one another and profit from shared knowledge while learning their own strategies, which reflect their individual aims and interests. The vendors act as learning agents that use cooperative learning to train jointly in the shared environment. Under reasonable assumptions about each vendor's stocking policy, restocking period, and customer arrival process, the problem is modeled as a Markov decision process, which makes it possible to design learning algorithms. The proposed algorithms effectively learn dynamic customer behavior. The paper first presents results of the Cooperative Reinforcement Learning Algorithms for three shop agents over a one-year sales period, and then presents results of the proposed approach for the same three agents over the same period. The results obtained with the proposed expertise-based cooperation approach show that such methods lead to faster convergence of agents in a dynamic environment.},
      year = {2017}
    }
    


  • TY  - JOUR
    T1  - Performance Evaluation of Cooperative RL Algorithms for Dynamic Decision Making in Retail Shop Application
    AU  - Deepak Annasaheb Vidhate
    AU  - Parag Arun Kulkarni
    Y1  - 2017/12/12
    PY  - 2017
    N1  - https://doi.org/10.11648/j.mlr.20170204.14
    DO  - 10.11648/j.mlr.20170204.14
    T2  - Machine Learning Research
    JF  - Machine Learning Research
    JO  - Machine Learning Research
    SP  - 133
    EP  - 147
    PB  - Science Publishing Group
    SN  - 2637-5680
    UR  - https://doi.org/10.11648/j.mlr.20170204.14
    AB  - This paper proposes Expertise-based Multi-agent Cooperative Reinforcement Learning Algorithms (EMCRLA) for dynamic decision making in a retail application and compares their performance with that of standard Cooperative Reinforcement Learning Algorithms. Several cooperation schemes for multi-agent cooperative reinforcement learning are proposed: EQ-learning, the EGroup scheme, the EDynamic scheme, and the EGoal-driven scheme. The implementation results demonstrate that the proposed cooperation schemes can speed up the convergence of groups of agents toward good action policies. The approach is developed for three retailer stores in a retail marketplace. The retailers can assist one another and profit from shared knowledge while learning their own strategies, which reflect their individual aims and interests. The vendors act as learning agents that use cooperative learning to train jointly in the shared environment. Under reasonable assumptions about each vendor's stocking policy, restocking period, and customer arrival process, the problem is modeled as a Markov decision process, which makes it possible to design learning algorithms. The proposed algorithms effectively learn dynamic customer behavior. The paper first presents results of the Cooperative Reinforcement Learning Algorithms for three shop agents over a one-year sales period, and then presents results of the proposed approach for the same three agents over the same period. The results obtained with the proposed expertise-based cooperation approach show that such methods lead to faster convergence of agents in a dynamic environment.
    VL  - 2
    IS  - 4
    ER  - 

