Title:
The Sensitivity of Value-Added Estimates to Test Scoring Decisions. EdWorkingPaper No. 25-1226
Language:
English
Authors:
Joshua B. Gilbert (ORCID 0000-0003-3496-2710), James G. Soland (ORCID 0000-0001-8895-2871), Benjamin W. Domingue (ORCID 0000-0002-3894-9049), Annenberg Institute for School Reform at Brown University
Source:
Annenberg Institute for School Reform at Brown University. 2025.
Availability:
Annenberg Institute for School Reform at Brown University. Brown University Box 1985, Providence, RI 02912. Tel: 401-863-7990; Fax: 401-863-1290; e-mail: annenberg@brown.edu; Web site: https://annenberg.brown.edu/
Peer Reviewed:
N
Page Count:
31
Publication Date:
2025
Document Type:
Reports - Research
Entry Date:
2025
Accession Number:
ED674118
Database:
ERIC

Further Information

Value-Added Models (VAMs) are both common and controversial in education policy and accountability research. While the sensitivity of VAMs to model specification and covariate selection is well documented, the extent to which test scoring methods (e.g., mean scores vs. IRT-based scores) may affect VA estimates is less studied. We examine the sensitivity of VA estimates to scoring method using empirical item response data from 23 education datasets. We show that VA estimates are frequently highly sensitive to the scoring method, holding students and items constant. While the various test scores are highly correlated, different scoring approaches produce VA percentile ranks that vary, on average, by more than 20 points, and over 50% of teachers or schools are ranked in more than one quartile of the VA distribution. Dispersion in VA ranks is reduced when item response data are complete and when correlations between baseline and endline scores are more consistent across scoring methods. We conclude that consideration of both measurement error and model uncertainty is necessary for appropriate interpretation of VAMs.

As Provided
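
The abstract contrasts mean scores with IRT-based scores as inputs to value-added models. The sketch below is not drawn from the paper; the simulated data, parameter values, crude Rasch approximation, and the simple fixed-effects VA regression are all illustrative assumptions. It shows one way such a scoring-method comparison could be set up: score the same item responses two ways, fit the same VA-style model to each, and compare the resulting teacher rankings.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
J, n, n_items = 30, 25, 20              # teachers, students per teacher, items per test
teacher = np.repeat(np.arange(J), n)
va_true = rng.normal(0, 0.3, J)         # hypothetical "true" teacher effects

theta0 = rng.normal(0, 1, J * n)                                  # baseline ability
theta1 = theta0 + va_true[teacher] + rng.normal(0, 0.5, J * n)    # endline ability

def simulate(theta, diffs):
    """Dichotomous item responses under a Rasch model."""
    p = 1 / (1 + np.exp(-(theta[:, None] - diffs[None, :])))
    return (rng.uniform(size=p.shape) < p).astype(float)

def mean_score(resp):
    """Scoring method A: proportion correct."""
    return resp.mean(axis=1)

def rasch_score(resp):
    """Scoring method B (crude IRT): item difficulties from centered logits of
    item p-values, then bounded ML estimates of each person's ability."""
    p_item = resp.mean(axis=0).clip(0.01, 0.99)
    diffs = -np.log(p_item / (1 - p_item))
    diffs -= diffs.mean()
    def neg_ll(th, x):
        p = 1 / (1 + np.exp(-(th - diffs)))
        return -(x * np.log(p) + (1 - x) * np.log(1 - p)).sum()
    return np.array([minimize_scalar(neg_ll, args=(x,), bounds=(-4, 4),
                                     method="bounded").x for x in resp])

def va_ranks(score0, score1):
    """VA-style model: regress endline score on baseline score plus teacher
    dummies; rank teachers by their estimated fixed effects."""
    D = np.eye(J)[teacher]                      # teacher dummies (no intercept)
    X = np.column_stack([score0, D])
    beta, *_ = np.linalg.lstsq(X, score1, rcond=None)
    return beta[1:].argsort().argsort()         # ranks 0..J-1

diffs0, diffs1 = rng.normal(0, 1, n_items), rng.normal(0, 1, n_items)
resp0, resp1 = simulate(theta0, diffs0), simulate(theta1, diffs1)

ranks_mean = va_ranks(mean_score(resp0), mean_score(resp1))
ranks_irt = va_ranks(rasch_score(resp0), rasch_score(resp1))
print("mean |rank shift| across scoring methods:",
      np.abs(ranks_mean - ranks_irt).mean())

Under this setup, the printed value summarizes how far the same teachers move in the VA ranking when only the scoring method changes; the paper's own analyses use empirical item response data and report shifts in percentile ranks and quartile placement rather than this simplified statistic.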