The biases and weights of a neural network are calculated, or trained, layer by layer using the stochastic random steepest descent method. To increase efficiency and performance, a perturbation scheme is introduced for fine-tuning these calculations; the aim is to bring perturbation techniques into the training of artificial neural networks. Perturbation methods obtain approximate solutions by expanding about a small parameter ε. The perturbation technique can be combined with other training methods to reduce the amount of data used, the training time, and the energy consumed. The perturbation parameter ε can be selected according to the nature of the training data, and its value can be determined through several trials. Applying the stochastic random steepest descent method on its own lengthens training time and increases energy use; combining it properly with the perturbation technique shortens training time. Both methods are widely used individually, but their combined use leads to optimal solutions. A suitable cost function can be employed for the optimal use of the perturbation parameter ε. Shortening the training time also makes it possible to identify the dominant inputs of the output values. Energy consumption, one of the essential problems of training, is reduced by using such hybrid training methods.
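The abstract describes the scheme only in outline. As a minimal illustration, one can write the trained weights as a perturbation expansion w(ε) = w₀ + ε·w₁ + O(ε²), where w₀ comes from an ordinary stochastic-descent run and ε is chosen by trial against the cost function. The following Python sketch is one hypothetical reading of that idea, assuming a single linear layer, a mean-squared-error cost, and the negative full-batch gradient at w₀ as the correction direction w₁; none of these specific choices is taken from the paper.

```python
# Hypothetical sketch of the hybrid scheme outlined in the abstract: a base
# solution w0 is trained by stochastic descent, then fine-tuned with a
# first-order perturbation w = w0 + eps * w1, where eps is picked by trial
# against the cost function. All names and choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def cost(w, X, y):
    """Mean-squared-error cost for a single linear layer."""
    return np.mean((X @ w - y) ** 2)

def sgd(w, X, y, lr=0.01, epochs=20):
    """Plain stochastic descent over single samples in random order."""
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            grad = 2 * (X[i] @ w - y[i]) * X[i]
            w = w - lr * grad
    return w

# Toy data: y = X @ w_true plus a little noise.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=200)

# Zeroth-order solution w0 from a short stochastic-descent run.
w0 = sgd(np.zeros(5), X, y)

# First-order correction direction w1; as an assumption, take the negative
# full-batch gradient of the cost at w0.
w1 = -2 * X.T @ (X @ w0 - y) / len(X)

# Select the small parameter eps by trial, as the abstract suggests.
epsilons = [0.0, 0.001, 0.01, 0.05, 0.1]
best_eps = min(epsilons, key=lambda e: cost(w0 + e * w1, X, y))
w = w0 + best_eps * w1

print(f"chosen eps = {best_eps}, cost before = {cost(w0, X, y):.6f}, "
      f"after = {cost(w, X, y):.6f}")
```

In this sketch, the search over a small grid of ε values plays the role of the "several trials" mentioned in the abstract; a different cost function or correction direction could be substituted without changing the structure.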
Published in: American Journal of Artificial Intelligence, Volume 9, Issue 2
DOI: 10.11648/j.ajai.20250902.11
Page(s): 107-109
License: This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is properly cited.
Copyright: © The Author(s), 2025. Published by Science Publishing Group.
Keywords: Neural Networks, Stochastic Random Steepest Descent, Perturbation Techniques
APA Style
Cekirge, H. M. (2025). Tuning the Training of Neural Networks by Using the Perturbation Technique. American Journal of Artificial Intelligence, 9(2), 107-109. https://doi.org/10.11648/j.ajai.20250902.11