Abstract: The performance of deep learning models depends heavily on large volumes of high-quality annotated data. Traditional manual annotation, however, is inefficient and error-prone, making it difficult to construct large-scale, accurate datasets quickly. To improve annotation efficiency for drone RF signal detection and identification, this paper proposes an automatic annotation method based on contour detection. The method exploits the difference in pixel values between signal regions and the background in time-frequency spectrograms, using contour detection to automatically separate signals and locate their region coordinates; it then applies the K-Means clustering algorithm to distinguish different target signals and assign category labels. Experimental results show that the method achieves strong detection performance across a range of signal-to-noise ratio (SNR) conditions; when the SNR reaches 8 dB, the detection rate stably approaches 100%. The annotated data were used to train a YOLOv11 network, which achieved high-precision identification of both drone signal presence and the corresponding aircraft type, with high classification accuracy across different drone signals. By improving both the efficiency and accuracy of data annotation, the proposed contour-detection-based automatic annotation method offers significant value for enhancing deep learning model performance and addressing practical engineering problems.
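The pipeline described above (threshold the spectrogram, extract signal regions, cluster them into classes) can be sketched in miniature. This is not the paper's implementation: it substitutes connected-component labeling for contour detection, uses a toy 1-D K-Means on box width as the clustering feature, and all data, thresholds, and function names are hypothetical, chosen only to make the sketch self-contained.

```python
# Illustrative sketch (not the paper's code): locate bright "signal" regions in
# a synthetic time-frequency spectrogram by thresholding pixel values, group
# hot pixels into connected components (a stand-in for contour detection),
# record each component's bounding box, then assign category labels with a
# minimal 1-D K-Means on box width. All values here are hypothetical.
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
spec = rng.normal(0.0, 1.0, (64, 64))   # background noise
spec[10:14, 5:45] += 12.0               # wide burst ("drone type A")
spec[40:52, 20:26] += 12.0              # narrow burst ("drone type B")

thr = 6.0                               # assumed fixed threshold for this toy
mask = spec > thr                       # signal/background separation

def bounding_boxes(mask):
    """Flood-fill 4-connected components; return (r0, c0, r1, c1) boxes."""
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for r, c in zip(*np.nonzero(mask)):
        if seen[r, c]:
            continue
        q, comp = deque([(r, c)]), []
        seen[r, c] = True
        while q:
            y, x = q.popleft()
            comp.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        ys, xs = zip(*comp)
        boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

boxes = bounding_boxes(mask)
widths = np.array([[x1 - x0] for _, x0, _, x1 in boxes], dtype=float)

def kmeans_1d(data, k=2, iters=20):
    """Minimal K-Means on a 1-D feature to assign category labels."""
    centers = data[np.argsort(data[:, 0])[[0, -1]]].copy()  # spread init
    for _ in range(iters):
        labels = np.argmin(np.abs(data - centers.T), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels

labels = kmeans_1d(widths)
print(boxes, labels)
```

In the paper's actual setting, the bounding boxes and cluster labels would be written out in a detector-ready annotation format (e.g., YOLO-style class and box coordinates) rather than printed.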