
Result: Explainable Artificial Intelligence Approach towards Classifying Educational Android App Reviews Using Deep Learning

Title:
Explainable Artificial Intelligence Approach towards Classifying Educational Android App Reviews Using Deep Learning
Language:
English
Authors:
Kanwal Zahoor (ORCID 0000-0002-8000-7151), Narmeen Zakaria Bawany (ORCID 0000-0003-2975-6824)
Source:
Interactive Learning Environments. 2024 32(9):5227-5252.
Availability:
Routledge. Available from: Taylor & Francis, Ltd. 530 Walnut Street Suite 850, Philadelphia, PA 19106. Tel: 800-354-1420; Tel: 215-625-8900; Fax: 215-207-0050; Web site: http://www.tandf.co.uk/journals
Peer Reviewed:
Y
Page Count:
26
Publication Date:
2024
Document Type:
Academic Journal; Journal Articles; Reports - Research
DOI:
10.1080/10494820.2023.2212708
ISSN:
1049-4820
1744-5191
Entry Date:
2024
Accession Number:
EJ1449691
Database:
ERIC

Further Information

Mobile application developers rely heavily on user reviews to identify issues in mobile applications and to meet users' expectations. User reviews are unstructured, unorganized, and highly informal, and the sheer number of reviews makes it difficult to identify and classify issues by extracting the required information from them. To automate the classification of reviews, many researchers have adopted machine learning approaches. In view of the rising demand for educational applications, especially during COVID-19, this research aims to automate the classification and sentiment analysis of educational Android application reviews using natural language processing and machine learning techniques. A baseline corpus comprising 13,000 records has been built by collecting reviews of more than 20 educational applications. The reviews were then manually labelled with respect to the sentiment and issue types mentioned in each review. User reviews are classified into eight categories, and various machine learning algorithms are applied to classify users' sentiments and application issues. The results demonstrate that our proposed framework achieved an accuracy of 97% for sentiment identification and an accuracy of 94% in classifying the most significant issues. Moreover, the interpretability of the model is verified using the explainable artificial intelligence technique of local interpretable model-agnostic explanations (LIME).

As Provided
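
Illustration (editor's sketch, not from the article): the abstract describes a pipeline of review classification with machine learning plus LIME explanations. The following minimal Python sketch, assuming scikit-learn and the lime package, shows that kind of pipeline on a few hypothetical review texts with a TF-IDF plus logistic regression classifier. The sample reviews, labels, and model choice are illustrative assumptions and are not the authors' dataset or implementation.

# Minimal illustrative sketch (not the authors' code): a TF-IDF + logistic
# regression sentiment classifier for app reviews, explained with LIME.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled reviews (1 = positive, 0 = negative sentiment);
# a real corpus such as the paper's 13,000 labelled reviews would replace this.
reviews = [
    "Great app, my kids learn so much every day",
    "Lessons are clear and the quizzes are fun",
    "App keeps crashing after the latest update",
    "Too many ads and the login never works",
]
labels = [1, 1, 0, 0]

# Pipeline: raw review text -> TF-IDF features -> classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# LIME perturbs the input text and fits a local surrogate model to show
# which words drove the prediction for this single review.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "The app crashes whenever I open a lesson",
    model.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # word-level weights for the local explanation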