Psicología Educativa - Educational Psychology
Vol. 20, No. 2, Pages 109-115 (December 2014)
Open Access
Empirical recovery of argumentation learning progressions in scenario-based assessments of English language arts
Peter W. van Rijn a,* (corresponding author: pvanrijn@etsglobal.org), E. Aurora Graf b, Paul Deane b
a Educational Testing Service Global, Amsterdam, The Netherlands
b Educational Testing Service, Princeton, NJ, U.S.A.
Under a Creative Commons license
Abstract

We investigate methods for studying learning progressions in English language arts using data from scenario-based assessments. In particular, we are interested in the empirical recovery of learning progressions in argumentation for middle school students. We collected data on three parallel assessment forms consisting of scenario-based task sets with multiple item formats; each student was randomly assigned two of the three forms. We fitted several item response theory models and used model-based measures to classify students into levels of the argumentation learning progression. Although there were some differences in difficulty between parallel tasks, we found good agreement among the classifications based on the parallel forms. Overall, we were able to empirically recover the order of the levels of the argumentation learning progression as they were assigned to the assessment tasks by the theoretical framework.
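
The abstract does not spell out the model or the classification machinery, so what follows is a brief illustrative sketch rather than the authors' actual procedure. For mixed-format, polytomously scored task sets, one standard IRT model is Muraki's (1992) generalized partial credit model, under which the probability of score k on item j for a student with ability \theta is

P(X_j = k \mid \theta) = \exp\Bigl(\sum_{v=1}^{k} a_j(\theta - b_{jv})\Bigr) \Big/ \sum_{c=0}^{m_j} \exp\Bigl(\sum_{v=1}^{c} a_j(\theta - b_{jv})\Bigr),

where a_j is a discrimination parameter, the b_{jv} are step difficulties, and the empty sum for c = 0 is taken to be zero. Given ability estimates from such a model, classifying students into learning-progression levels via cut scores and checking agreement between parallel forms can be done along the following lines in Python; all numbers, cut scores, and function names here are hypothetical, not taken from the study.

import numpy as np

def classify(theta, cuts):
    # Map ability estimates to learning-progression levels 0..len(cuts)
    # using hypothetical cut scores on the theta scale.
    return np.searchsorted(cuts, theta)

def cohen_kappa(a, b, n_levels):
    # Chance-corrected agreement between two level classifications.
    observed = np.mean(a == b)
    pa = np.bincount(a, minlength=n_levels) / len(a)
    pb = np.bincount(b, minlength=n_levels) / len(b)
    expected = np.sum(pa * pb)
    return (observed - expected) / (1.0 - expected)

# Hypothetical ability estimates for the same five students on two parallel forms.
theta_form_1 = np.array([-1.2, 0.3, 0.8, 1.5, -0.4])
theta_form_2 = np.array([-0.9, 0.1, 1.1, 1.3, -0.6])

# Hypothetical cut scores separating four learning-progression levels.
cuts = np.array([-0.5, 0.5, 1.0])

levels_1 = classify(theta_form_1, cuts)  # -> [0, 1, 2, 3, 1]
levels_2 = classify(theta_form_2, cuts)  # -> [0, 1, 3, 3, 0]
print(cohen_kappa(levels_1, levels_2, n_levels=len(cuts) + 1))  # ~0.47

Cohen's kappa is only one standard agreement measure; the statistics actually reported, and the standard-setting procedure behind the cut scores, are described in the full text.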

Keywords:
Learning progressions
Writing assessment
Item response theory
Parallel forms
The full text is available in PDF.

Correspondence concerning this article should be addressed to Peter W. van Rijn, ETS Global, Strawinskylaan 929, 1077 XX Amsterdam, The Netherlands.
