Peer-Reviewed

Global Optimization with Descending Region Algorithm

Received: 8 April 2017    Accepted: 10 April 2017    Published: 9 June 2017
Abstract

Global optimization is necessary whenever we want to achieve the best possible solution, or when we need a new solution that is better than the existing one. However, global optimization is a hard problem. The gradient descent method is a well-known technique for finding a local optimizer, whereas the approximation solution approach aims to simplify how the global optimization problem is solved. In order to find the global optimizer in the most practical way, I propose a so-called descending region (DR) algorithm, which combines the gradient descent method with the approximation solution approach. The idea of the DR algorithm is that, given a known local minimizer, a better minimizer is searched for only in a so-called descending region below that local minimizer. The descending region begins at a so-called descending point, which is the main subject of the DR algorithm. The descending point, in turn, is the solution of an intersection equation (A). Finally, I prove and provide a simpler linear equation system (B) derived from (A). System (B) is the most important result of this research, because (A) is solved by solving (B) sufficiently many times. In other words, the DR algorithm is refined repeatedly so as to produce such a system (B) for searching for the global optimizer. I propose a so-called simulated Newton–Raphson (SNR) algorithm, a simulation of the Newton–Raphson method, to solve (B). The starting point is crucial for the SNR algorithm to converge. Therefore, I also propose a so-called RTP algorithm, a refined and probabilistic process, which partitions the solution space and generates random testing points in order to estimate the starting point of the SNR algorithm. In general, I combine the three algorithms DR, SNR, and RTP to solve the hard problem of global optimization. Although the approach follows a divide-and-conquer methodology, in which global optimization is split into local optimization, equation solving, and partitioning, the solution is a synthesis in which DR is the backbone connecting itself with SNR and RTP.
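
To make the idea above concrete, the following minimal sketch illustrates the descending-region loop in broad strokes: gradient descent produces a local minimizer, random testing points (in the spirit of RTP) are drawn over the solution space, and any point whose objective value falls below the current minimum, i.e. a descending point inside the descending region, restarts the descent. The function names, the uniform sampling, and the numerical gradient are illustrative assumptions; the paper's actual machinery, namely the intersection equation (A), the linear system (B), and the SNR solver, is not reproduced here.

```python
import numpy as np

# Minimal illustrative sketch of the descending-region idea (not the author's
# actual formulation): equation (A), system (B), and the SNR solver are replaced
# by a numerical gradient, plain gradient descent, and uniform random testing points.

def numerical_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def gradient_descent(f, x0, lr=0.01, tol=1e-8, max_iter=10000):
    """Descend from x0 to a nearby local minimizer."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = numerical_gradient(f, x)
        if np.linalg.norm(g) < tol:
            break
        x = x - lr * g
    return x

def descending_region_search(f, x0, bounds, n_test_points=200, max_restarts=20, seed=0):
    """Alternate between local descent and a search for a descending point:
    a random testing point whose value lies strictly below the current minimum."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    best = gradient_descent(f, x0)
    for _ in range(max_restarts):
        # Uniform random testing points stand in for the RTP partitioning step.
        candidates = rng.uniform(low, high, size=(n_test_points, len(low)))
        descending = [c for c in candidates if f(c) < f(best)]
        if not descending:
            break  # no descending point found; keep the current minimizer
        # Restart the descent from the best descending point found so far.
        best = gradient_descent(f, min(descending, key=f))
    return best

if __name__ == "__main__":
    # Two local minima; the global one is near x = -2, while descent from x0 = 3
    # alone would stop at the non-global minimizer near x = 2.
    f = lambda x: (x[0] ** 2 - 4) ** 2 + x[0]
    x_star = descending_region_search(f, x0=[3.0], bounds=([-4.0], [4.0]))
    print(x_star, f(x_star))
```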

Published in Applied and Computational Mathematics (Volume 6, Issue 4-1)

This article belongs to the Special Issue Some Novel Algorithms for Global Optimization and Relevant Subjects

DOI 10.11648/j.acm.s.2017060401.17
Page(s) 72-82
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2017. Published by Science Publishing Group

Keywords

Global Optimization, Gradient Descent Method, Descending Region, Descending Point

Cite This Article
  • APA Style

    Loc Nguyen. (2017). Global Optimization with Descending Region Algorithm. Applied and Computational Mathematics, 6(4-1), 72-82. https://doi.org/10.11648/j.acm.s.2017060401.17

    ACS Style

    Loc Nguyen. Global Optimization with Descending Region Algorithm. Appl. Comput. Math. 2017, 6(4-1), 72-82. doi: 10.11648/j.acm.s.2017060401.17

    AMA Style

    Loc Nguyen. Global Optimization with Descending Region Algorithm. Appl Comput Math. 2017;6(4-1):72-82. doi: 10.11648/j.acm.s.2017060401.17

  • @article{10.11648/j.acm.s.2017060401.17,
      author = {Loc Nguyen},
      title = {Global Optimization with Descending Region Algorithm},
      journal = {Applied and Computational Mathematics},
      volume = {6},
      number = {4-1},
      pages = {72-82},
      doi = {10.11648/j.acm.s.2017060401.17},
      url = {https://doi.org/10.11648/j.acm.s.2017060401.17},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.acm.s.2017060401.17},
      year = {2017}
    }
    

  • TY  - JOUR
    T1  - Global Optimization with Descending Region Algorithm
    AU  - Loc Nguyen
    Y1  - 2017/06/09
    PY  - 2017
    N1  - https://doi.org/10.11648/j.acm.s.2017060401.17
    DO  - 10.11648/j.acm.s.2017060401.17
    T2  - Applied and Computational Mathematics
    JF  - Applied and Computational Mathematics
    JO  - Applied and Computational Mathematics
    SP  - 72
    EP  - 82
    PB  - Science Publishing Group
    SN  - 2328-5613
    UR  - https://doi.org/10.11648/j.acm.s.2017060401.17
    VL  - 6
    IS  - 4-1
    ER  - 

Author Information
  • Loc Nguyen, Vietnam Institute of Mathematics, Hanoi, Vietnam
