
Author Grippo, L.
Issue date 1994
Description In this paper the problem of neural network training is formulated as the unconstrained minimization of a sum of differentiable error terms on the output space. For problems of this form we consider solution algorithms of the backpropagation-type, where the gradient evaluation is split into different steps, and we state sufficient convergence conditions that exploit the special structure of the objective function. Then we define a globally convergent algorithm that uses the knowledge of the overall error function for the computation of the learning rates. Potential advantages and possible shortcomings of this approach, in comparison with alternative approaches, are discussed.
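The scheme the abstract describes (per-term gradient steps, with learning rates controlled via the overall error) can be illustrated by a minimal sketch. This is not the paper's actual algorithm: the quadratic error terms, the single scalar weight, and the rate-halving rule below are illustrative assumptions only.

```python
# Minimal sketch: incremental gradient descent on a sum of differentiable
# error terms E(w) = sum_p e_p(w), with the learning rate adapted using the
# overall error. Illustrative stand-in, not the algorithm from the paper.

def make_terms():
    # Assumed toy terms: e_p(w) = 0.5 * (a_p * w - b_p)**2,
    # with gradient a_p * (a_p * w - b_p).
    data = [(1.0, 2.0), (2.0, 3.0), (0.5, 1.0)]
    terms = [lambda w, a=a, b=b: 0.5 * (a * w - b) ** 2 for a, b in data]
    grads = [lambda w, a=a, b=b: a * (a * w - b) for a, b in data]
    return terms, grads

def total_error(terms, w):
    return sum(e(w) for e in terms)

def train(w=0.0, lr=0.1, epochs=200):
    terms, grads = make_terms()
    err = total_error(terms, w)
    for _ in range(epochs):
        w_new = w
        for g in grads:            # one backpropagation-type sweep:
            w_new -= lr * g(w_new)  # gradient evaluation split per term
        err_new = total_error(terms, w_new)
        if err_new > err:          # crude globalization: reject the sweep
            lr *= 0.5              # and shrink the rate if overall error rose
        else:
            w, err = w_new, err_new
    return w, err

w_star, e_star = train()
```

Checking the overall error after each sweep is what distinguishes this from a plain online gradient method: the per-term steps alone need not decrease the total error, so the composite step is accepted or rejected against it.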
Format application/pdf
Publisher Gordon and Breach Science Publishers
Copyright Taylor and Francis Group, LLC
Subject Neural networks, training algorithms, unconstrained minimization
Title A class of unconstrained minimization methods for neural network training
Type research-article
DOI 10.1080/10556789408805583
Electronic ISSN 1029-4937
Print ISSN 1055-6788
Journal Optimization Methods and Software
Volume 4
First page 135
Last page 150
Affiliation Grippo, L.; Dipartimento di Informatica e Sistemistica, Università di Roma “La Sapienza”
Issue 2
Reference Charalambous, C. 1992. Conjugate gradient algorithm for efficient training of artificial neural networks. IEE Proceedings, Part G, 139: 301–310.
Reference Cichocki, A. and Unbehauen, R. 1993. Neural Networks for Optimization and Signal Processing, John Wiley & Sons.
Reference De Leone, R., Gaudioso, M. and Grippo, L. 1984. Stopping criteria for linesearch methods without derivatives. Mathematical Programming, 30: 285–300.
Reference Dixon, L.C.W. and Price, R.C. 1989. The truncated Newton method for sparse unconstrained optimization using automatic differentiation. Journal of Optimization Theory and Applications, 60: 261–275.
Reference Gaivoronski, A. A. Convergence analysis of parallel back-propagation algorithm for neural networks. This volume, 117–134.
Reference Grippo, L., Lampariello, F. and Lucidi, S. 1988. Global convergence and stabilization of unconstrained minimization methods without derivatives. Journal of Optimization Theory and Applications, 56: 385–406.
Reference Grippo, L., Lampariello, F. and Lucidi, S. 1986. A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis, 23: 707–716.
Reference Grippo, L. 1993. A class of unconstrained minimization methods for neural network training. Technical Report 09.93, DIS, Università di Roma “La Sapienza”, Rome.
Reference Hertz, J., Krogh, A. and Palmer, R.G. 1991. Introduction to the Theory of Neural Computation, Redwood City, California: Addison-Wesley.
Reference Kibardin, V.M. 1980. Decomposition into functions in the minimization problems. Automation and Remote Control, 40: 1311–1323.
Reference Luo, Z.-Q. and Tseng, P. Analysis of an approximate gradient projection method with applications to the backpropagation algorithm. This volume, 85–102.
Reference Mangasarian, O.L. 1993. Mathematical programming in neural networks. ORSA Journal on Computing, 5: 347–3.
Reference Mangasarian, O.L. and Solodov, M. V. Serial and parallel backpropagation convergence via nonmonotone perturbed minimization. This volume, 103–116.
Reference Nocedal, J. 1992. Theory of algorithms for unconstrained optimization. Acta Numerica, 1: 199–242.
Reference Ortega, J.M. and Rheinboldt, W.C. 1970. Iterative Solution of Nonlinear Equations in Several Variables, New York: Academic Press.
Reference Rumelhart, D.E., Hinton, G.E. and Williams, R.J. 1986. "Learning internal representations by error propagation." In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Edited by: Rumelhart and McClelland. 318–362. Cambridge: MIT Press.
Reference Saarinen, S., Bramley, R.B. and Cybenko, G. 1991. "Neural networks, backpropagation and automatic differentiation." In Automatic Differentiation of Algorithms, Edited by: Griewank, A. and Corliss, G.F. 31–42. Philadelphia: SIAM.
Reference Vogl, T.P., Mangis, J.K., Rigler, A.K., Zink, W.T. and Alkon, D.L. 1988. Accelerating the convergence of the back-propagation method. Biological Cybernetics, 59: 257–263.
Reference White, H. 1989. Some asymptotic results for learning in single hidden-layer feedforward network models. Journal of the American Statistical Association, 84: 1003–1013.
