Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/11417
Title: Reliability in the assessment of program quality by teaching assistants during code reviews
Authors: Scott, MJ
Ghinea, G
Keywords: Programming;Code review;Code inspection;Grading;Quality;Assessment;Reliability;Agreement;Consistency
Issue Date: 2015
Publisher: ACM
Citation: Proceedings of the 2015 ACM Conference on Innovation and Technology in Computer Science Education, p. 346, 2015
Abstract: Formative feedback must be meaningful if it is to drive student learning. Achieving this, however, relies on a clear and constructively aligned model of quality being applied consistently across submissions. This poster presentation raises concerns about the inter-rater reliability of code reviews conducted by teaching assistants in the absence of such a model. Five teaching assistants each reviewed 12 purposively selected programs submitted by introductory programming students. An analysis of their ratings revealed that while each teaching assistant was self-consistent, they assessed code quality in different ways. This suggests a need for standard models of program quality and rubrics, alongside supporting technology, to be used during code reviews to improve the reliability of formative feedback.
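
Note: the abstract does not state which reliability statistic was applied. Purely as an illustration of the kind of inter-rater analysis described, the following is a minimal Python sketch computing Fleiss' kappa for the setup in the abstract (5 raters, 12 programs); the grading scale, the fleiss_kappa helper, and the randomly generated grades are all hypothetical and not taken from the paper.

    import numpy as np

    def fleiss_kappa(counts: np.ndarray) -> float:
        # counts[i, j] = number of raters who assigned item i to category j.
        # Every item must be rated by the same number of raters.
        n_raters = counts.sum(axis=1)
        assert np.all(n_raters == n_raters[0]), "each item needs the same rater count"
        N = n_raters[0]

        # Per-item agreement: proportion of rater pairs agreeing on the item.
        P_i = (np.sum(counts ** 2, axis=1) - N) / (N * (N - 1))
        P_bar = P_i.mean()

        # Chance agreement from the marginal category proportions.
        p_j = counts.sum(axis=0) / counts.sum()
        P_e = np.sum(p_j ** 2)

        return (P_bar - P_e) / (1 - P_e)

    # Hypothetical data: 12 programs, 5 TAs, each TA assigns one of 4 quality grades.
    rng = np.random.default_rng(0)
    grades = rng.integers(0, 4, size=(12, 5))   # grades[i, r] = TA r's grade for program i
    counts = np.stack([np.bincount(row, minlength=4) for row in grades])
    print(f"Fleiss' kappa: {fleiss_kappa(counts):.3f}")  # near 0: agreement no better than chance

A kappa near zero on real ratings would match the poster's finding that the assistants, although individually self-consistent, did not share a common model of program quality.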
URI: http://bura.brunel.ac.uk/handle/2438/11417
DOI: https://doi.org/10.1145/2729094.2754844
Appears in Collections:Dept of Computer Science Research Papers

Files in This Item:
File: Fulltext.pdf
Size: 140.85 kB
Format: Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.