Title:
Accelerating weed detection for smart agricultural sprayers using a Neural Processing Unit.
Authors:
Herterich, Nils (nils.herterich@de.bosch.com); Liu, Kai; Stein, Anthony
Source:
Computers & Electronics in Agriculture. Oct 2025, Vol. 237, Part B.
Database:
Academic Search Index

Smart spraying technology represents a breakthrough in precision agriculture, enabling innovative weed control methods and increasing crop protection effectiveness through the use of deep neural networks (DNNs). This advancement brings its own challenges, particularly the inference latency of the neural networks. In large sprayers, weed detection is performed by multiple sensors, each focused on a distinct region of interest. While existing research has mainly concentrated on powerful GPUs, the shift to practical applications requires energy-efficient embedded systems to enable intelligent spraying in the field. In this study, the effects of different backbone and neck architectures on an object detection model are analyzed in terms of inference time and compatibility with a Neural Processing Unit (NPU), focusing on efficiency and accuracy. Considering real-time operating conditions, the neural network architectures were optimized for a maximum inference time of 50 ms and a model size of less than 10 MB. To achieve this, the models were validated for operational compatibility during deployment and quantized to an 8-bit integer data format. A dataset of 12 weed species was used to test a variety of novel lightweight CNN and hybrid transformer backbones and necks of an object detection model at different image resolutions. The results show that these core components have different effects on inference time and accuracy, with certain configurations meeting the required specifications. The speedup factor, i.e. the ratio of CPU to NPU inference time on the target hardware, significantly influences efficiency. Experimental tests with different backbones showed distinct advantages in weed detection, and further efficiency improvements could be achieved by replacing the neck. While the required quantization led to a reduction in accuracy, the proposed combination of components can nevertheless provide reliable localization and classification of weed species of various sizes while maintaining the required inference time.

• Analysis of several weed detection models for embedded systems under 50 ms and 10 MB.
• The NPU accelerates inference when it aligns with the model architecture: backbone, neck, head.
• Trade-offs between mAP and inference time in weed detection depend on image resolution.
• Changing the neck improves model accuracy while slightly increasing inference time.
• Backbone architectures differ in how well they detect weeds of different sizes and species.

[ABSTRACT FROM AUTHOR]
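
The deployment constraints described in the abstract (8-bit integer quantization, a 50 ms inference budget, a model size below 10 MB, and a CPU-vs-NPU speedup factor) can be illustrated with a short sketch. The example below is not taken from the paper: it assumes a TensorFlow Lite workflow, a placeholder SavedModel directory (weed_detector_savedmodel), a hypothetical 512 x 512 input resolution, and a vendor NPU delegate library (libvx_delegate.so) whose actual name depends on the target hardware's SDK.

# Minimal sketch (assumptions as noted above): post-training INT8 quantization
# of a detector with TensorFlow Lite, followed by checks against the stated
# budgets (<= 50 ms inference time, < 10 MB model size) and the speedup factor.
import os
import time

import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Stand-in calibration data; in practice, preprocessed field images
    # shaped like the detector input (here assumed 1 x 512 x 512 x 3, float32).
    for _ in range(100):
        yield [np.random.rand(1, 512, 512, 3).astype(np.float32)]

# Convert a trained detector (placeholder path) to a fully integer INT8 model.
converter = tf.lite.TFLiteConverter.from_saved_model("weed_detector_savedmodel")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

with open("weed_detector_int8.tflite", "wb") as f:
    f.write(tflite_model)

# Budget check: model size below 10 MB.
size_mb = os.path.getsize("weed_detector_int8.tflite") / 1e6
print(f"model size: {size_mb:.1f} MB (budget: 10 MB)")

def mean_latency_ms(interpreter, runs=50):
    # Average single-image inference time in milliseconds.
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()  # warm-up
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], dummy)
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000.0

# CPU baseline on the target board.
cpu_ms = mean_latency_ms(tf.lite.Interpreter(model_path="weed_detector_int8.tflite"))

# NPU run via a vendor delegate; the library name is an assumption.
npu_ms = mean_latency_ms(tf.lite.Interpreter(
    model_path="weed_detector_int8.tflite",
    experimental_delegates=[tf.lite.experimental.load_delegate("libvx_delegate.so")],
))

# Speedup factor in the sense used in the abstract: CPU vs. NPU inference time.
print(f"CPU: {cpu_ms:.1f} ms, NPU: {npu_ms:.1f} ms, speedup: {cpu_ms / npu_ms:.1f}x")
print(f"meets 50 ms budget on NPU: {npu_ms <= 50.0}")

A harness of this kind would let each backbone/neck combination be quantized and timed on the target board before comparing it against the 50 ms and 10 MB requirements discussed in the abstract.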