Abstract: Traditional training methods for deep learning models can no longer meet the training demands of diverse data classes, resulting in low training efficiency and poor convergence. To address this, a data-parallel training optimization method for deep learning models based on reservoir computing is studied. The method first balances and partitions the training data. A reservoir is then constructed, and the moth algorithm is used to obtain optimal values for the connection density and weight parameters of the inter-neuron connection matrix. Taking the balanced, partitioned training data as input, the constructed reservoir is used to optimize the deep learning model and carry out parallel training. The results show that the proposed method achieves a lower model loss and its loss curve stabilizes earlier, indicating that the model is more stable and converges to a better solution faster. In addition, CPU utilization during training remains consistently high with smaller fluctuations, and the F1 score is significantly higher, indicating that the method makes more effective use of CPU resources and generalizes well, demonstrating the effectiveness of the proposed training approach.
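The sketch below illustrates, under stated assumptions, the pipeline the abstract describes: the training data is split into balanced shards, a random reservoir connection matrix is built, and a small moth-flame-style search tunes its connection density and weight scale. It is a minimal, NumPy-only approximation; the names `build_reservoir`, `reservoir_loss`, and `moth_flame_search`, the fitness definition, and all parameter values are illustrative and not taken from the paper.

```python
# Minimal sketch (assumptions: NumPy only; simplified moth-flame-style search over
# reservoir connection density and weight scale; single-step reservoir states).
import numpy as np

rng = np.random.default_rng(0)

def build_reservoir(n_neurons, density, weight_scale):
    """Random reservoir connection matrix with the given density and weight scale."""
    mask = rng.random((n_neurons, n_neurons)) < density
    W = rng.uniform(-weight_scale, weight_scale, (n_neurons, n_neurons)) * mask
    # Rescale so the spectral radius stays below 1 (echo-state property heuristic).
    radius = max(abs(np.linalg.eigvals(W)))
    return W / radius * 0.9 if radius > 0 else W

def reservoir_loss(params, X, y):
    """Fitness used by the search: ridge/least-squares readout error on one data shard."""
    density, weight_scale = params
    W = build_reservoir(100, density, weight_scale)
    W_in = rng.uniform(-1, 1, (100, X.shape[1]))
    states = np.tanh(X @ W_in.T)        # project inputs into the reservoir
    states = np.tanh(states @ W.T)      # one pass through the reservoir (no recurrence, for brevity)
    readout, *_ = np.linalg.lstsq(states, y, rcond=None)
    return np.mean((states @ readout - y) ** 2)

def moth_flame_search(fitness, bounds, n_moths=10, n_iter=20):
    """Tiny moth-flame-style optimizer: moths spiral toward the best solutions (flames)."""
    low, high = np.array(bounds).T
    moths = rng.uniform(low, high, (n_moths, len(bounds)))
    for _ in range(n_iter):
        scores = np.array([fitness(m) for m in moths])
        flames = moths[np.argsort(scores)]          # best candidates act as flames
        t = rng.uniform(-1, 1, moths.shape)
        dist = np.abs(flames - moths)
        # Logarithmic spiral update around the flames, clipped to the search bounds.
        moths = np.clip(dist * np.exp(t) * np.cos(2 * np.pi * t) + flames, low, high)
    scores = np.array([fitness(m) for m in moths])
    return moths[np.argmin(scores)]

# Balanced split of the training data into shards for parallel training (toy data).
X = rng.standard_normal((400, 8))
y = rng.standard_normal((400, 1))
shards = np.array_split(np.arange(len(X)), 4)

best = moth_flame_search(lambda p: reservoir_loss(p, X[shards[0]], y[shards[0]]),
                         bounds=[(0.05, 0.5), (0.1, 1.0)])
print("optimal connection density and weight scale:", best)
```

In a data-parallel setting, each worker would evaluate the fitness on its own shard and the resulting reservoir would then drive the optimization of the deep learning model, as described in the abstract.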