Abstract: To address the low target segmentation accuracy caused by defect interference, contamination noise, and lens blur in complex scenarios, an Attention-Guided Shape-Aware Bilateral Segmentation Network was proposed. To mitigate the precision loss caused by semantic propagation errors from deep to shallow layers, a semantic flow alignment module was designed to learn the offset between feature maps and assist information alignment. An attention-guided self-selective fusion module was introduced, which combines the complementary characteristics of deep and shallow features to guide more accurate segmentation. To handle noise and object adhesion, a shape-aware loss function was designed to direct the network's attention toward difficult boundary regions, further improving segmentation performance. Comprehensive experiments on a self-built chip dataset showed that this method improved feature representation and segmentation performance, with mIoU reaching 94.4% (an improvement of 2.1%) and inference speed reaching 48.86 FPS (an improvement of 21%), achieving a balance between accuracy and speed and meeting the needs of practical industrial applications. On the CamVid dataset, mIoU reached 65.1% (an improvement of 3.0% over the baseline network) while the number of parameters was reduced by 4.6%, demonstrating the generality of the proposed algorithm.
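The central mechanism of the semantic flow alignment module, warping an upsampled deep feature map toward a shallow one using a learned per-pixel offset field, can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation: the module name FlowAlignSketch, the single 3×3 convolution for offset prediction, and all layer sizes are assumptions chosen only to show the flow-warping idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowAlignSketch(nn.Module):
    """Illustrative sketch: align an upsampled deep feature map to a shallow
    one by warping it with a learned 2-channel offset (flow) field.
    All layer sizes are hypothetical, not the paper's actual design."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict per-pixel (dx, dy) offsets from the concatenated features.
        self.flow = nn.Conv2d(channels * 2, 2, kernel_size=3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        n, _, h, w = shallow.shape
        # Bring the deep feature up to the shallow feature's resolution.
        deep_up = F.interpolate(deep, size=(h, w), mode='bilinear',
                                align_corners=True)
        offset = self.flow(torch.cat([shallow, deep_up], dim=1))  # (n, 2, h, w)

        # Base sampling grid in normalized [-1, 1] coordinates, (x, y) order.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=shallow.device),
            torch.linspace(-1, 1, w, device=shallow.device),
            indexing='ij')
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, h, w, 2)

        # Convert pixel offsets to normalized units and warp the deep feature.
        scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)],
                             device=shallow.device)
        warped_grid = grid + offset.permute(0, 2, 3, 1) * scale
        return F.grid_sample(deep_up, warped_grid, mode='bilinear',
                             align_corners=True)

# Shape check: align a 1/4-resolution deep feature to a shallow feature.
shallow = torch.randn(2, 64, 32, 32)
deep = torch.randn(2, 64, 8, 8)
print(FlowAlignSketch(64)(shallow, deep).shape)  # torch.Size([2, 64, 32, 32])
```

Because the offset prediction is differentiable and the warp uses bilinear sampling, the alignment is learned end-to-end with the rest of the network, which is what lets the module correct deep-to-shallow semantic propagation errors rather than relying on naive upsampling.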