Evaluating the use of automated writing evaluation programs as tools for formative feedback for English as a Second Language postgraduate students
Santana, Ana Isabel Hibert
Automated Writing Evaluation (AWE) programs use natural language processing techniques to analyse texts and score them according to pre-defined parameters. Most of these programs also provide feedback on various aspects of writing. They have been increasingly used in English as a second language (ESL) and English as a foreign language (EFL) classrooms to deliver formative feedback, especially in the context of academic writing. However, existing research on AWE programs has focused largely on the end product of revision and on whether their use yields quantifiable gains in error reduction and holistic scores. Little research has investigated how these programs can be integrated into the writing process, or which pedagogical approaches allow them to be incorporated into the classroom in ways that help students develop their writing skills. This leaves two major gaps in the current literature on AWE programs: 1) there is little information about how students engage with the feedback they receive and how they decide which feedback to use, and 2) scores and error rates give only a superficial, post-hoc understanding of the effects of AWE on revision and tell little about the depth of the changes made. The research presented in this thesis addresses these gaps through two studies designed to improve our understanding of how students use AWE programs. In the first study, screen captures and think-aloud protocols were used to record the interactions of 11 ESL postgraduate students with an AWE program over four revision sessions. The recordings were analysed to identify the self-regulation and decision-making strategies students used when engaging with the AWE feedback.
In the second study, a web program was created to collect texts before and after revision with an AWE program, yielding a total of 30 texts suitable for analysis. The pre- and post-revision versions were compared to gauge the extent of the changes made, analysing uptake rates and linguistic markers of proficiency to quantify the effects of AWE feedback on revision. Results from both studies suggest that students are selective in their use of AWE feedback, drawing on a variety of sources of prior knowledge about English grammar and academic writing conventions to decide whether to accept or reject it; this selectiveness results in low feedback uptake and few changes to the texts. These sources include feedback from teachers and mentors, previous exposure to academic texts, and knowledge gained from earlier English and composition classes. Successful integration of AWE programs into ESL/EFL classrooms should therefore take into account how students engage with AWE feedback, and use that knowledge in pedagogical strategies that scaffold student use of AWE programs and help them develop the cognitive and metacognitive skills needed to navigate the feedback they receive successfully.