Mobile-assisted pronunciation learning with feedback from peers and/or automatic speech recognition: a mixed-methods study
Abstract
Although social networking apps and dictation-based automatic speech recognition (ASR) are now widely available on mobile phones, relatively little is known about whether and how these technological affordances can contribute to EFL pronunciation learning. This study investigates the effectiveness of feedback from peers and/or ASR in mobile-assisted pronunciation learning. Eighty-four Chinese EFL university students were assigned to one of three conditions, using WeChat (a multi-purpose mobile app) for autonomous ASR feedback (the Auto-ASR group), peer feedback (the Co-non-ASR group), or peer plus ASR feedback (the Co-ASR group). Quantitative data comprised a pronunciation pretest, posttest, and delayed posttest, together with perception questionnaires, while qualitative data came from student interviews. The main findings are: (a) all three groups improved their pronunciation, but the Co-non-ASR and Co-ASR groups outperformed the Auto-ASR group; (b) the three groups showed no significant differences in their questionnaire responses; and (c) the interviews revealed common and unique technical, social/psychological, and educational affordances of, and concerns about, the three mobile-assisted learning conditions.