Evaluating the validity and applicability of automated essay scoring in two massive open online courses

Authors

  • Erin Dawna Reilly, University of Texas at Austin
  • Rose Eleanore Stafford, University of Texas at Austin
  • Kyle Marie Williams, University of Texas at Austin
  • Stephanie Brooks Corliss, University of Texas at Austin

DOI

https://doi.org/10.19173/irrodl.v15i5.1857

Keywords

massive open online courses, assessment, automated essay scoring systems

Abstract

The use of massive open online courses (MOOCs) to expand students’ access to higher education has raised questions regarding the extent to which this course model can provide and assess authentic, higher-level student learning. In response, MOOC platforms have begun utilizing automated essay scoring (AES) systems that allow students to engage in critical writing and free-response activities. However, there is a lack of research investigating the validity of such systems in MOOCs. This research examined the effectiveness of an AES tool for scoring writing assignments in two MOOCs. Results indicated that some significant differences existed among instructor grading, AES-Holistic scores, and AES-Rubric Total scores in both MOOCs. However, an AES system may still be useful depending on instructors’ assessment needs and intent. Findings from this research have implications for instructional technology administrators, educational designers, and instructors implementing AES learning activities in MOOCs.

Published

2014-10-03

How to Cite

Reilly, E. D., Stafford, R. E., Williams, K. M., & Corliss, S. B. (2014). Evaluating the validity and applicability of automated essay scoring in two massive open online courses. The International Review of Research in Open and Distributed Learning, 15(5). https://doi.org/10.19173/irrodl.v15i5.1857