Title:
Generating image captions based on deep learning and natural language processing.
Authors:
Prodduturu, Rohini1 (AUTHOR) proddutururohini563@gmail.com, Muthappagari, Bhargavi1 (AUTHOR), Gorantla, Jasmin1 (AUTHOR), Penikalapati, Manjusha1 (AUTHOR), Ayyannagari, Anulekha Sai1 (AUTHOR)
Source:
AIP Conference Proceedings. 2025, Vol. 3237 Issue 1, p1-10. 10p.
Database:
Academic Search Index

Humans and computers increasingly need to communicate, because almost everything in today's society depends on systems such as computers and mobile phones; this is the motivation behind our project. Generating image captions can benefit people with visual impairments. Computers cannot distinguish objects, things, or activities as easily as humans can; they require training to recognize them, and the proposed method is used to identify such activities and items. We present several deep-neural-network models for generating image captions, with particular emphasis on CNNs, which extract features from the image. RNNs built with LSTM units then generate captions from these image features, and we examine how they affect sentence construction. An encoder-decoder architecture links visual information, such as image features, with descriptions from natural language processing: the encoder extracts the features, while the decoder generates the caption's word sequence. To determine which combination of feature extractor and encoder model produces the best results and accuracy, we generated captions for sample photos and compared the models with one another. We also incorporate Deep Voice, a production-quality text-to-speech system built entirely from deep neural networks, to voice the captions generated from the visual attributes. Our project is evaluated using several machine learning methods and Python. [ABSTRACT FROM AUTHOR]
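The encoder-decoder pipeline the abstract describes can be sketched as follows. This is a minimal, illustrative NumPy example, not the authors' implementation: a random projection stands in for the CNN encoder, a plain recurrent cell stands in for the LSTM decoder, and the weights are untrained, so the "caption" it emits is meaningless. All names (`encode_image`, `generate_caption`, the toy vocabulary) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<start>", "<end>", "a", "dog", "runs"]  # toy vocabulary
FEAT_DIM, HID_DIM, EMB_DIM = 8, 16, 6

def encode_image(image):
    """Stand-in for a CNN encoder: map an image to a feature vector."""
    W = rng.standard_normal((FEAT_DIM, image.size))
    return np.tanh(W @ image.ravel())

# Decoder parameters: a single recurrent cell (a plain RNN here; the
# paper uses an LSTM) plus an output projection over the vocabulary.
W_h = rng.standard_normal((HID_DIM, HID_DIM)) * 0.1
W_x = rng.standard_normal((HID_DIM, EMB_DIM)) * 0.1
W_f = rng.standard_normal((HID_DIM, FEAT_DIM)) * 0.1
W_out = rng.standard_normal((len(VOCAB), HID_DIM))
embed = rng.standard_normal((len(VOCAB), EMB_DIM))

def generate_caption(features, max_len=5):
    """Greedy decoding: condition the hidden state on the image
    features, then emit one word per step until <end> or max_len."""
    h = np.tanh(W_f @ features)           # initialize state from image features
    word = VOCAB.index("<start>")
    caption = []
    for _ in range(max_len):
        h = np.tanh(W_h @ h + W_x @ embed[word])
        word = int(np.argmax(W_out @ h))  # pick the most likely next word
        if VOCAB[word] == "<end>":
            break
        caption.append(VOCAB[word])
    return caption

image = rng.standard_normal((4, 4))       # stand-in for a real image
print(generate_caption(encode_image(image)))
```

In a real system the encoder would be a pretrained CNN (e.g. from `torchvision.models`), the decoder a trained LSTM, and decoding might use beam search instead of the greedy argmax shown here; the structure of the loop, however, is the same.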