
CRITERIA FOR ASSESSING WRITTEN COMPETENCE IN PRE-SERVICE ENGLISH TEACHERS: A RUBRIC-BASED FORMATIVE AND SUMMATIVE ASSESSMENT FRAMEWORK ALIGNED WITH THE CEFR

Authors

  • Guzal Turaeva

    Namangan State University, Doctoral student

Keywords:

written competence; writing assessment; rubric-based assessment; formative assessment; summative assessment; educational evaluation

Abstract

This paper presents an empirical synthesis of peer-reviewed research examining criteria for assessing students’ written competence within rubric-based formative and summative assessment frameworks. Drawing on systematic reviews, meta-analyses, and empirical studies in educational assessment and writing research, the paper identifies core, evidence-based criteria commonly used to operationalize written competence: task fulfillment, organization and coherence, content development, language use, vocabulary, and mechanics. The synthesis further examines how these criteria are embedded in analytic rubrics and how their use differs in formative versus summative assessment contexts. Findings indicate that rubrics are most effective when criteria are clearly defined, aligned with instructional goals, and integrated into formative feedback cycles that support revision and self-regulation. In summative contexts, rubric effectiveness is strongly associated with rater training, calibration procedures, and alignment with instruction, which collectively enhance reliability and validity. The paper argues for a hybrid assessment approach in which the same rubric-based criteria support both learning-oriented formative assessment and accountability-driven summative assessment. By aligning assessment criteria with instructional practices, educators can improve the fairness, transparency, and educational impact of writing assessment. The study contributes to assessment theory by clarifying how written competence can be systematically and meaningfully evaluated in educational settings, and it offers practical implications for educators, researchers, and assessment designers.



Published

2026-01-29