Result: Automatic Image Processing Algorithm for Light Environment Optimization Based on Multimodal Neural Network Model.
Further Information
In this paper, we conduct an in-depth study and analysis of an automatic image processing algorithm for light environment optimization based on a multimodal Recurrent Neural Network (m-RNN). By analyzing the structure of the m-RNN and drawing on current research frontiers in image processing and natural language processing, we identify why the m-RNN generates ineffective descriptions for some images, examining both the image feature extraction stage and the processing of text sequence data. Unlike traditional automatic image processing algorithms, this algorithm requires no manually added complex rules; instead, it evaluates and filters the training image collection and finally generates an automatic image processing model through the m-RNN. An image semantic segmentation algorithm based on multimodal attention and adaptive feature fusion is proposed. The main idea of the algorithm is to combine adaptive feature fusion with multimodal attention, which extracts the relative importance of the images, and to introduce data augmentation for small-scale multimodal light environment datasets. The proposed model can bridge the semantic differences between modalities and construct feature relationships across them, achieving an inferable, interpretable, and scalable feature representation of multimodal data. Compared with traditional algorithms, the automatic processing of light environment images using multimodal neural networks eliminates manual work and greatly reduces the time and effort of image processing.
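The abstract names the m-RNN architecture without detailing it; for concreteness, the following is a minimal sketch of an m-RNN-style captioning layer, assuming PyTorch. The class and parameter names (MRNNSketch, multimodal_dim, and so on) are illustrative and not taken from the paper.

```python
# A minimal sketch of an m-RNN-style captioning model, assuming PyTorch.
# All names here are illustrative, not from the paper.
import torch
import torch.nn as nn

class MRNNSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256,
                 image_dim=2048, multimodal_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Project the pooled CNN image feature and the RNN state into a
        # shared multimodal space, in the spirit of the m-RNN architecture.
        self.img_proj = nn.Linear(image_dim, multimodal_dim)
        self.txt_proj = nn.Linear(hidden_dim, multimodal_dim)
        self.out = nn.Linear(multimodal_dim, vocab_size)

    def forward(self, image_feat, tokens):
        # image_feat: (B, image_dim) pooled CNN feature
        # tokens:     (B, T) word indices of the partial caption
        h, _ = self.rnn(self.embed(tokens))           # (B, T, hidden_dim)
        img = self.img_proj(image_feat).unsqueeze(1)  # (B, 1, multimodal_dim)
        fused = torch.tanh(self.txt_proj(h) + img)    # broadcast over time
        return self.out(fused)                        # (B, T, vocab_size) logits
```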
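Likewise, the paper's multimodal attention and adaptive feature fusion can be illustrated with a small module that combines two same-sized modality feature maps, for example an RGB stream and an illumination stream. This is a sketch under that assumption, not the paper's implementation.

```python
# A minimal sketch of multimodal attention with adaptive feature fusion,
# assuming two modality feature maps of identical shape. Names are illustrative.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Channel attention per modality (squeeze-and-excitation style),
        # estimating the importance of each channel.
        self.attn_a = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.attn_b = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        # Adaptive gate: predicts a per-pixel mixing weight from both modalities.
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, 1, 1), nn.Sigmoid())

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (B, C, H, W) features from the two modalities.
        a = feat_a * self.attn_a(feat_a)          # reweight channels by importance
        b = feat_b * self.attn_b(feat_b)
        g = self.gate(torch.cat([a, b], dim=1))   # (B, 1, H, W) mixing weight
        return g * a + (1 - g) * b                # adaptively fused feature map
```

The learned gate lets the network lean on whichever modality is more informative at each spatial location, which is one plausible reading of "adaptive feature fusion" in the abstract.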
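For the data augmentation on small-scale multimodal datasets, a common requirement is that paired modalities receive identical geometric transforms so pixel correspondence is preserved. The helper below (augment_pair is a hypothetical name) sketches that idea with torchvision.

```python
# A minimal sketch of paired augmentation for a small multimodal dataset,
# assuming both modalities must undergo the same geometric transform.
import random
import torchvision.transforms.functional as TF

def augment_pair(img_a, img_b):
    # Apply the same random flip and rotation to both modalities so that
    # pixel correspondence between them is preserved.
    if random.random() < 0.5:
        img_a, img_b = TF.hflip(img_a), TF.hflip(img_b)
    angle = random.uniform(-10, 10)
    return TF.rotate(img_a, angle), TF.rotate(img_b, angle)
```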
(Copyright © 2022 Mujun Chen.)
The author declares that there are no conflicts of interest.