A Dynamically Adaptive Error-Based Varying Gain for Recurrent Neural Networks to Solve Linear Time-Varying Equations
Keywords:
convergence performance, error-based varying gain, time-varying, linear matrix equation, Zeroing Neural Network (ZNN)

Abstract
Convergence performance is a key index for Recurrent Neural Networks (RNNs). The network model structure, the activation function, and the learning rate (also called the gain) are the general means of improving convergence performance, among which designing a suitable learning rate is a common and effective method. In particular, recent work has presented varying-learning-rate schemes that achieve superior convergence. However, these schemes bear no relationship to the error function of the problem being solved: the learning rate does not change along with the error function, so it must be tuned without a clear guiding signal. To address this issue, we present a dynamically and adaptively error-based varying gain for the Zeroing Neural Network (ZNN) to solve the linear time-varying equation, together with a theoretical analysis of its convergence performance. The theoretical and experimental results show that the error-based varying gain accelerates convergence and achieves superior convergence performance for ZNN models.
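The idea described in the abstract can be sketched as follows. A ZNN for a time-varying linear equation A(t)x(t) = b(t) defines the error E(t) = A(t)x(t) - b(t) and imposes the design formula dE/dt = -γ·E; an error-based varying gain makes γ a function of E itself. This is a minimal illustrative sketch in Python/NumPy, not the paper's actual model: the gain law γ(E) = γ₀ + ‖E‖, the matrices A(t) and b(t), and all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative time-varying coefficient matrix A(t), its derivative,
# right-hand side b(t), and its derivative (all assumed for this sketch).
def A(t):
    return np.array([[2.0 + np.sin(t), 0.5],
                     [0.5, 2.0 + np.cos(t)]])

def dA(t):
    return np.array([[np.cos(t), 0.0],
                     [0.0, -np.sin(t)]])

def b(t):
    return np.array([np.sin(t), np.cos(t)])

def db(t):
    return np.array([np.cos(t), -np.sin(t)])

def znn_solve(T=10.0, h=1e-3, gamma0=5.0):
    """Integrate the ZNN dynamics with a simple error-based varying gain.

    Design formula: dE/dt = -gamma(E) * E with E = A x - b, which yields
    A xdot = db/dt - dA/dt x - gamma * E  =>  solve for xdot each step.
    """
    x = np.zeros(2)  # arbitrary initial state; ZNN converges from any start
    for k in range(int(T / h)):
        t = k * h
        e = A(t) @ x - b(t)
        # Error-based varying gain: grows with the residual norm, so large
        # errors are driven down faster (gain law is an assumption here).
        gamma = gamma0 + np.linalg.norm(e)
        xdot = np.linalg.solve(A(t), db(t) - dA(t) @ x - gamma * e)
        x = x + h * xdot  # forward-Euler discretization of the dynamics
    residual = np.linalg.norm(A(T) @ x - b(T))
    return x, residual
```

Because the continuous-time error obeys dE/dt = -γ(E)·E with γ(E) ≥ γ₀ > 0, ‖E(t)‖ decays at least exponentially, and the larger gain at large errors is what the abstract credits for the accelerated convergence.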
License
Copyright (c) 2026 Chengli Sun, et al.

This work is licensed under a Creative Commons Attribution 4.0 International License.
