MODIFIED CONJUGATE GRADIENT METHOD FOR TRAINING NEURAL NETWORKS BASED ON LOGISTIC MAPPING

  • ALAA LUQMAN IBRAHIM Dept. of Mathematics, College of Science, University of Duhok, Duhok, Kurdistan Region-Iraq
  • SALAH GAZI SHAREEF Dept. of Mathematics, Faculty of Science, University of Zakho, Zakho, Kurdistan Region-Iraq
Keywords: artificial neural networks, conjugate gradient, global convergence, descent and sufficient descent conditions

Abstract

In this paper, we suggest a modified conjugate gradient method for training neural networks which guarantees the descent and sufficient descent conditions. The global convergence of the proposed method is established. Finally, the test results show that, in general, the modified method is superior to and more efficient than other standard conjugate gradient methods.
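The abstract does not reproduce the paper's modified beta formula, so the following is only an illustrative sketch of the general idea: a nonlinear conjugate-gradient training loop in which a standard Fletcher–Reeves beta is scaled by a chaotic sequence generated by the logistic map x_{k+1} = 4·x_k·(1 − x_k). The fixed step size, the choice of Fletcher–Reeves as the base beta, and all function names are assumptions for the example, not the authors' actual method.

```python
import numpy as np

def logistic_map(x):
    """One iteration of the logistic map in the fully chaotic regime (r = 4)."""
    return 4.0 * x * (1.0 - x)

def cg_train(grad, w, iters=500, lr=1e-3, x0=0.7):
    """Generic nonlinear conjugate-gradient descent on a training loss.

    `grad(w)` must return the gradient of the loss at `w`. Purely for
    illustration, the Fletcher-Reeves beta is scaled by a logistic-map
    value in (0, 1); the paper's modified beta is not reproduced here.
    A fixed step `lr` replaces the line search used in practice.
    """
    g = grad(w)
    d = -g                 # initial direction: steepest descent
    x = x0                 # state of the chaotic sequence
    for _ in range(iters):
        w = w + lr * d
        g_new = grad(w)
        x = logistic_map(x)
        # Fletcher-Reeves beta scaled by the chaotic term (illustrative only).
        beta = x * (g_new @ g_new) / (g @ g)
        d = -g_new + beta * d
        g = g_new
    return w

# Toy "training" problem: linear least squares, min_w ||A w - b||^2.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
grad = lambda w: 2.0 * A.T @ (A @ w - b)
w_star = cg_train(grad, np.zeros(5))
```

With beta scaled into (0, 1), the direction update behaves like a damped momentum term, which is one simple way a chaotic sequence can perturb the conjugacy parameter while keeping the iterates bounded.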


References

[1] M. Al-Baali, Descent property and global convergence of the Fletcher–Reeves method with inexact line search, IMA J. Numer. Anal. 5, (1985), 121–124.
[2] A. Hmich, A. Badri, A. Sahel, Automatic speaker identification by using the neural network, in: IEEE 2011 International Conference on Multimedia Computing and Systems (ICMCS), (2011), 1–5.
[3] B. Lerner, H. Guterman, M. Aladjem, I. Dinstein, A comparative study of neural network based feature extraction paradigms, Pattern Recognition Letters 20 (1), (1999), 7–14.
[4] C. Charalambous, Conjugate gradient algorithm for efficient training of artificial neural networks, IEEE Proceedings 139 (3), (1992), 301–310.
[5] C. C. Peng, G. D. Magoulas, Adaptive nonmonotone conjugate gradient training algorithm for recurrent neural networks, in: 19th IEEE International Conference on Tools with Artificial Intelligence, (2008), 374–381.
[6] C. H. Wu, H. L. Chen, S. C. Chen, Gene classification artificial neural system, International Journal on Artificial Intelligence Tools 4 (4), (1995), 501–510.
[7] C. M. Bishop, Neural Networks for Pattern Recognition, Oxford, (1995).
[8] D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning internal representations by error propagation, in: D. E. Rumelhart, J. L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Cambridge, MA, (1986), 318–362.
[9] E. Polak, G. Ribiere, Note sur la convergence de directions conjuguees, Rev. Francaise Informat Recherche Operationelle 3, (1969), 35–43.
[10] G. Zoutendijk, Nonlinear programming computational methods, in: J. Abadie (Ed.), Integer and Nonlinear Programming, North-Holland, Amsterdam, (1970), 37–86.
[11] I. Jusoh, M. Mamat and M. Rivaie, A new edition of conjugate gradient methods for large-scale unconstrained optimization, International Journal of Mathematical Analysis, Vol. 8, No. 46, (2014), 2277–2291.
[12] I. E. Livieris, P. Pintelas, An improved spectral conjugate gradient neural network training algorithm, International Journal on Artificial Intelligence Tools 21 (1), (2012).
[13] J. Sun, J. Zhang, Convergence of conjugate gradient methods without line search, Annals of Operations Research 103, (2001), 161–173.
[14] J. Wang, W. Wu, M. Zurada, Deterministic convergence of conjugate gradient method for feedforward neural networks, Neurocomputing 74, (2011), 2368–2376.
[15] K. Sugiki, Y. Narushima, and H. Yabe, Globally convergent three–term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization, J. Optim. Theory Appl. 153, (2012), 733–757.
[16] H. Lu, H. Zhang, L. Ma, A new optimization algorithm based on chaos, Zhejiang University, Hangzhou 310027, China, (2005).
[17] M. F. Møller, A scaled conjugate gradient algorithm for fast supervised learning, Neural Networks 6, (1993), 525–533.
[18] M. R. Hestenes, E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards 49, (1952), 409–436.
[19] R. Fletcher, C. Reeves, Function minimization by conjugate gradients, Comput. J. 7, (1964), 149–154.
[20] R. Fletcher, Practical Methods of Optimization, Unconstrained Optimization, Vol. 1, John Wiley & Sons, New York, (1987).
[21] Y. H. Dai, Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim. 10, (1999), 177–182.
[22] Y. Liu, C. Storey, Efficient generalized conjugate gradient algorithms, part 1: Theory, J. Comput. Appl. Math. 69, (1992), 129–137.
Published
2019-10-29
How to Cite
LUQMAN IBRAHIM, A., & GAZI SHAREEF, S. (2019). MODIFIED CONJUGATE GRADIENT METHOD FOR TRAINING NEURAL NETWORKS BASED ON LOGISTIC MAPPING. Journal of Duhok University, 22(1), 45-51. https://doi.org/10.26682/sjuod.2019.22.1.7
Section
Pure and Engineering Sciences