
Title:
Evaluating Sentence-BERT-Powered Learning Analytics for Automated Assessment of Students' Causal Diagrams
Language:
English
Authors:
Héctor J. Pijeira-Díaz (ORCID 0000-0003-4580-8997), Shashank Subramanya (ORCID 0009-0005-2485-681X), Janneke van de Pol (ORCID 0000-0003-2275-6397), Anique de Bruin (ORCID 0000-0001-5178-0287)
Source:
Journal of Computer Assisted Learning. 2024 40(6):2667-2680.
Availability:
Wiley. Available from: John Wiley & Sons, Inc. 111 River Street, Hoboken, NJ 07030. Tel: 800-835-6770; e-mail: cs-journals@wiley.com; Web site: https://www.wiley.com/en-us
Peer Reviewed:
Y
Page Count:
14
Publication Date:
2024
Document Type:
Journal Articles; Reports - Research
DOI:
10.1111/jcal.12992
ISSN:
0266-4909
1365-2729
Entry Date:
2024
Accession Number:
EJ1448407
Database:
ERIC

Further Information

Background: When learning causal relations, completing causal diagrams enhances students' comprehension judgements to some extent. To potentially boost this effect, advances in natural language processing (NLP) enable real-time formative feedback based on the automated assessment of students' diagrams, which can involve the correctness of both the responses and their position in the causal chain. However, the responsible adoption and effectiveness of automated diagram assessment depend on its reliability.

Objectives: In this study, we compare two Dutch pre-trained models (i.e., based on RobBERT and BERTje) in combination with two machine-learning classifiers, Support Vector Machine (SVM) and Neural Networks (NN), in terms of different indicators of automated diagram assessment reliability. We also contrast two techniques (i.e., semantic similarity and machine learning) for estimating the correct position of a student diagram response in the causal chain.

Methods: For training and evaluation of the models, we capitalize on a human-labelled dataset containing 2900+ causal diagrams completed by 700+ secondary school students, accumulated from previous diagramming experiments.

Results and Conclusions: In predicting correct responses, 86% accuracy and a Cohen's κ of 0.69 were reached, with combinations using SVM being roughly three times faster (important for real-time applications) than their NN counterparts. In predicting the response position in the causal diagrams, 92% accuracy and a Cohen's κ of 0.89 were reached.

Implications: Taken together, these evaluation figures equip educational designers for decision-making on when these NLP-powered learning analytics are warranted for automated formative feedback in causal relation learning, thereby potentially enabling real-time feedback for learners and reducing teachers' workload.
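The two assessment steps the abstract describes can be sketched as follows: an SVM classifier over sentence embeddings for response correctness, and cosine similarity against per-slot reference embeddings for causal-chain position. This is a minimal illustration, not the authors' pipeline; the embeddings here are random stand-ins, whereas the study encodes Dutch student responses with Sentence-BERT models based on RobBERT and BERTje. All data shapes and the four-slot chain are assumptions for the sketch.

```python
# Sketch of the two assessment steps: (1) classify response correctness
# from sentence embeddings with an SVM; (2) assign a response to a
# causal-chain position via cosine similarity to reference embeddings.
# Random vectors stand in for real Sentence-BERT embeddings.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
dim = 768  # typical BERT-family embedding size

# (1) Correctness: train an SVM on human-labelled (embedding, correct?) pairs.
X_train = rng.normal(size=(200, dim))
y_train = rng.integers(0, 2, size=200)
clf = SVC(kernel="rbf").fit(X_train, y_train)

X_test = rng.normal(size=(50, dim))
y_test = rng.integers(0, 2, size=50)
pred = clf.predict(X_test)
acc = accuracy_score(y_test, pred)
kappa = cohen_kappa_score(y_test, pred)  # chance-corrected agreement

# (2) Position: choose the chain slot whose reference embedding is most
# similar (cosine) to the student's response embedding.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

slot_refs = rng.normal(size=(4, dim))  # one reference vector per chain slot
response = rng.normal(size=dim)
best_slot = max(range(len(slot_refs)),
                key=lambda i: cosine(slot_refs[i], response))
```

Cohen's κ is reported alongside accuracy because it corrects for chance agreement, which matters when the correct/incorrect classes are imbalanced.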

As Provided