Peer-Reviewed

Asymptotic Behaviour of Gradient Learning Algorithms in Neural Network Models for the Identification of Nonlinear Systems

Received: 14 June 2015    Accepted: 28 July 2015    Published: 29 July 2015
Abstract

This paper studies the asymptotic properties of multilayer neural network models used for the adaptive identification of a wide class of nonlinearly parameterized systems in a stochastic environment. To adjust the neural network’s weights, standard online gradient-type learning algorithms are employed. The learning set is assumed to be infinite but bounded. A Lyapunov-like tool is utilized to analyze the ultimate behaviour of the learning processes in the presence of stochastic input variables. New sufficient conditions guaranteeing the global convergence of these algorithms in the stochastic framework are derived. Their main feature is that no penalty term is needed to achieve the boundedness of the weight sequence. To demonstrate the asymptotic behaviour of the learning algorithms and to support the theoretical studies, some simulation examples are also given.
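For readers who want a concrete picture of the setting, the sketch below (in Python) is a minimal illustration of an online gradient-type weight update for a one-hidden-layer sigmoid network identifying a noisy nonlinear map from a bounded stochastic input stream. It does not reproduce the paper's algorithms or convergence conditions: the network size, learning rate, target function, and noise level are illustrative assumptions only.

# Illustrative sketch only: a generic online (per-sample) gradient update for a
# one-hidden-layer sigmoid network identifying a nonlinear map from noisy data.
# The network size, learning rate, and target function are assumed for
# demonstration; they are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Unknown nonlinearly parameterized system to be identified (hypothetical example).
def system(x):
    return np.sin(2.0 * x) + 0.3 * x**2

n_hidden = 10
W = rng.normal(scale=0.5, size=(n_hidden, 2))   # hidden-layer weights (incl. bias column)
v = rng.normal(scale=0.5, size=n_hidden + 1)    # output weights (incl. bias)
eta = 0.05                                      # constant learning rate (assumed)

errors = []
for t in range(20000):
    x = rng.uniform(-1.0, 1.0)                  # bounded stochastic input
    y = system(x) + 0.05 * rng.normal()         # noisy observation of the system output

    # Forward pass of the network model.
    h = sigmoid(W @ np.array([x, 1.0]))
    y_hat = v @ np.append(h, 1.0)
    e = y_hat - y                               # instantaneous identification error

    # Online gradient (backpropagation) step on the instantaneous squared error.
    grad_v = e * np.append(h, 1.0)
    grad_W = np.outer(e * v[:n_hidden] * h * (1.0 - h), np.array([x, 1.0]))
    v -= eta * grad_v
    W -= eta * grad_W

    errors.append(e**2)

print("mean squared error, first 1000 steps:", np.mean(errors[:1000]))
print("mean squared error, last 1000 steps: ", np.mean(errors[-1000:]))

The final two print statements compare the early and late mean squared identification error, the quantity whose ultimate behaviour the paper analyzes; note that this generic sketch uses no penalty term to keep the weight sequence bounded.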

Published in American Journal of Neural Networks and Applications (Volume 1, Issue 1)
DOI 10.11648/j.ajnna.20150101.11
Page(s) 1-10
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2015. Published by Science Publishing Group

Keywords

Neural Network, Nonlinear Model, Gradient Learning Algorithm, Stochastic Environment, Convergence

Cite This Article
  • APA Style

    Valerii N. Azarskov, Dmytro P. Kucherov, Sergii A. Nikolaienko, Leonid S. Zhiteckii. (2015). Asymptotic Behaviour of Gradient Learning Algorithms in Neural Network Models for the Identification of Nonlinear Systems. American Journal of Neural Networks and Applications, 1(1), 1-10. https://doi.org/10.11648/j.ajnna.20150101.11


    ACS Style

    Valerii N. Azarskov; Dmytro P. Kucherov; Sergii A. Nikolaienko; Leonid S. Zhiteckii. Asymptotic Behaviour of Gradient Learning Algorithms in Neural Network Models for the Identification of Nonlinear Systems. Am. J. Neural Netw. Appl. 2015, 1(1), 1-10. doi: 10.11648/j.ajnna.20150101.11


    AMA Style

    Valerii N. Azarskov, Dmytro P. Kucherov, Sergii A. Nikolaienko, Leonid S. Zhiteckii. Asymptotic Behaviour of Gradient Learning Algorithms in Neural Network Models for the Identification of Nonlinear Systems. Am J Neural Netw Appl. 2015;1(1):1-10. doi: 10.11648/j.ajnna.20150101.11


Author Information
  • Faculty of Computer Science, National Aviation University, Kiev, Ukraine

  • Faculty of Computer Science, National Aviation University, Kiev, Ukraine

  • Cybernetics Centre, Dept. of Automated Data Processing Systems, Kiev, Ukraine

  • Cybernetics Centre, Dept. of Automated Data Processing Systems, Kiev, Ukraine
