Abstract: When training on large-scale datasets, gradient descent, a commonly used optimization method, tends to slow down once it has fitted part of the data well, and may therefore fail to reach an optimum over the whole dataset. To address this problem, we propose an innovative solution built on existing networks and inspired by the cortico-basal ganglia circuit and the excitation/inhibition (E/I) perturbation phenomenon in biological neural networks: the dynamic disturbance attenuation network and the dynamic disturbance attenuation gradient descent method. The network adds a gradually decaying disturbance layer to the input layer of an existing network; as the number of training epochs increases, the disturbance applied to the input approaches zero. This method accelerates the convergence of gradient descent in the early stage of training and, over the whole training process, helps avoid local optima and overfitting, thereby improving network performance. The effectiveness of the proposed network and algorithm was validated with different networks and optimizers on the MNIST, CIFAR-10, and CIFAR-100 datasets. Compared with the original networks trained with the Adam and SGDM algorithms, the dynamic disturbance attenuation method improved test accuracy by 0.16% to 1.4% and 0.39% to 1.38%, respectively, while also converging faster.
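The core mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify the disturbance distribution or the decay schedule, so an exponential decay applied to zero-mean Gaussian input noise is assumed here, demonstrated on a toy least-squares problem.

```python
import numpy as np

def disturbance_scale(epoch, sigma0=0.5, decay=0.9):
    """Disturbance magnitude that shrinks toward zero as training proceeds
    (hypothetical exponential schedule; the exact form is an assumption)."""
    return sigma0 * decay ** epoch

def perturbed_batch(x, epoch, rng, sigma0=0.5, decay=0.9):
    """Apply the decaying disturbance layer to the network inputs."""
    sigma = disturbance_scale(epoch, sigma0, decay)
    return x + sigma * rng.standard_normal(x.shape)

# Toy demonstration: gradient descent on least squares with a decaying
# input disturbance; early epochs see noisy inputs, late epochs clean ones.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
true_w = rng.standard_normal(5)
y = X @ true_w

w = np.zeros(5)
lr = 0.1
for epoch in range(200):
    Xp = perturbed_batch(X, epoch, rng)      # disturbed inputs
    grad = Xp.T @ (Xp @ w - y) / len(y)      # MSE gradient on the perturbed batch
    w -= lr * grad
```

Because the disturbance vanishes as training progresses, the late-stage updates are computed on the clean data, so the iterates still converge to the underlying solution while the early noise perturbs the trajectory away from poor basins.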