Abstract: To enable efficient deployment of deep learning models on mobile devices and embedded systems, this study optimizes the MobileNetV3 model. It analyzes how pruning can reduce a model's computation and parameter count, improving its efficiency in resource-constrained environments. A strategy combining coarse-grained channel pruning with fine-grained unstructured pruning is employed, significantly reducing both the parameter count and the computational overhead. To compensate for the accuracy degradation caused by pruning, a depth enhancement strategy restores performance by increasing model depth. The technical novelty lies in the combination of coarse-grained and fine-grained pruning, which effectively balances model accuracy against computational efficiency. Experiments on the CIFAR-10 and CIFAR-100 datasets validate the method: the optimized model significantly reduces computational cost while maintaining high classification accuracy, with accuracy improvements of 8.1% on CIFAR-100 and 2.08% on CIFAR-10. The method is therefore well suited to resource-constrained devices, meeting practical requirements for low computational cost and high accuracy.
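The abstract does not specify implementation details, but the two pruning granularities it combines can be illustrated with a minimal NumPy sketch: coarse-grained channel pruning removes whole output channels with the smallest L1 norm, while fine-grained unstructured pruning zeroes individual low-magnitude weights. The function names, the keep ratio, and the sparsity level below are hypothetical choices for illustration, not the paper's actual settings.

```python
import numpy as np

def channel_prune(weight, keep_ratio):
    # Coarse-grained (structured) pruning: drop whole output channels.
    # weight has shape (out_channels, in_channels, kh, kw); channels with
    # the smallest L1 norm are removed, shrinking the layer itself.
    norms = np.abs(weight).sum(axis=(1, 2, 3))
    k = max(1, int(weight.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(norms)[-k:])
    return weight[keep]

def unstructured_prune(weight, sparsity):
    # Fine-grained (unstructured) pruning: zero individual weights whose
    # magnitude falls below the given sparsity quantile; the tensor shape
    # is unchanged, only its density drops.
    thresh = np.quantile(np.abs(weight).ravel(), sparsity)
    return np.where(np.abs(weight) >= thresh, weight, 0.0)

# Illustrative use on a random conv weight (hypothetical ratios).
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 8, 3, 3))
w = channel_prune(w, keep_ratio=0.5)      # 16 output channels -> 8
w = unstructured_prune(w, sparsity=0.5)   # zero roughly half the rest
```

In practice both steps would be followed by fine-tuning (and, per the abstract, a depth enhancement step) to recover the accuracy lost to pruning.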