Comparison of the Artificial Neural Network's Approximations for the Levenberg-Marquardt Algorithm and the Gradient Descent Optimization on Datasets

Authors

  • Michael S. Osigbemeh Department of Electrical and Electronics Engineering, Alex Ekwueme Federal University Ndufu-Alike, Nigeria
  • Chimereze Osuji Department of Electrical and Electronics Engineering, Alex Ekwueme Federal University Ndufu-Alike, Nigeria
  • Moses O. Onyesolu Department of Computer Science, Nnamdi Azikiwe University, Awka, Nigeria https://orcid.org/0000-0003-3357-4847
  • Uche P. Onochie Department of Mechanical Engineering, Delta State University of Science & Technology, Nigeria

DOI:

https://doi.org/10.37256/aie.5120243781

Keywords:

gradient descent optimization, Levenberg-Marquardt optimization, artificial neural network, back-propagation algorithm, iteration algorithm, receiver operating characteristic

Abstract

The approximations obtained by gradient descent optimization on a set of datasets were compared with the results obtained with the Levenberg-Marquardt Optimization Method (LMOM) on the same datasets. The datasets, comprising three orthogonal databases drawn from those that accompany MATLAB's Neural Network Toolbox, were normalized and loaded serially into the artificial neural network Graphical User Interface (GUI) designed by the researchers. The GUI, built with Visual Studio Programming Language (VSPL), implements a gradient descent optimization scheme of the back-propagation algorithm. The characteristics of each database used to determine the termination criteria were approximated with the feature-extractive iteration algorithm developed in this work. Revalidation sessions of the LMOM on the sampled datasets showed significant spuriousness in the output results compared with those of gradient descent optimization, which, although slow to converge, produced results that could be closely replicated. Analysis of the F-statistics and the Receiver Operating Characteristics (ROC) for both methods' results on the sampled datasets likewise showed that the gradient descent method achieved significant accuracy and parsimony in approximating the nonlinear solutions in the datasets relative to LMOM processing. Additionally, an algorithm for deducing and producing the ROC of analyzed Artificial Neural Network (ANN) sessions was developed and implemented in VSPL.
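The two optimizers contrasted in the abstract differ only in their parameter-update rule: gradient descent steps along the negative gradient of the squared-error loss, while Levenberg-Marquardt solves a damped Gauss-Newton system at each iteration. The following sketch illustrates that contrast on a small nonlinear least-squares fit; it is not the authors' VSPL GUI or their datasets, and the model, learning rate, and damping factor are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): fit y = a*(1 - exp(-b*x))
# by least squares, comparing a plain gradient-descent step with a
# Levenberg-Marquardt (damped Gauss-Newton) step.

def model(p, x):
    a, b = p
    return a * (1.0 - np.exp(-b * x))

def residuals(p, x, y):
    return model(p, x) - y

def jacobian(p, x):
    a, b = p
    da = 1.0 - np.exp(-b * x)      # d(residual)/d(a)
    db = a * x * np.exp(-b * x)    # d(residual)/d(b)
    return np.column_stack([da, db])

def gd_step(p, x, y, lr=1e-2):
    # Gradient of 0.5*||r||^2 is J^T r; step against it with a fixed rate.
    r, J = residuals(p, x, y), jacobian(p, x)
    return p - lr * (J.T @ r)

def lm_step(p, x, y, lam=1e-2):
    # Solve the damped normal equations (J^T J + lam*I) dp = J^T r.
    r, J = residuals(p, x, y), jacobian(p, x)
    A = J.T @ J + lam * np.eye(len(p))
    return p - np.linalg.solve(A, J.T @ r)

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 50)
y = model(np.array([2.0, 1.5]), x) + 0.01 * rng.standard_normal(x.size)

p_gd = np.array([1.0, 1.0])
p_lm = np.array([1.0, 1.0])
for _ in range(200):           # gradient descent: many cheap steps
    p_gd = gd_step(p_gd, x, y)
for _ in range(20):            # Levenberg-Marquardt: few costlier steps
    p_lm = lm_step(p_lm, x, y)
```

The iteration counts hint at the trade-off the abstract reports: the gradient-descent update is cheaper per step but slow to converge, while each Levenberg-Marquardt step requires forming and solving a linear system but typically converges in far fewer iterations.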

Published

2024-03-11

How to Cite

Osigbemeh MS, Osuji C, Onyesolu MO, Onochie UP. Comparison of the Artificial Neural Network’s Approximations for the Levenberg-Marquardt Algorithm and the Gradient Descent Optimization on Datasets. Artificial Intelligence Evolution [Internet]. 2024 Mar. 11 [cited 2024 Dec. 31];5(1):24-38. Available from: https://ojs.wiserpub.com/index.php/AIE/article/view/3781