Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/11417
Full metadata record
DC Field | Value | Language
dc.contributor.author | Scott, MJ | -
dc.contributor.author | Ghinea, G | -
dc.coverage.spatial | Vilnius, Lithuania | -
dc.date.accessioned | 2015-09-28T15:56:56Z | -
dc.date.available | 2015-07-06 | -
dc.date.available | 2015-09-28T15:56:56Z | -
dc.date.issued | 2015 | -
dc.identifier.citation | Proceedings of the 2015 ACM Conference on Innovation and Technology in Computer Science Education, 346-346 (2015) | en_US
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/11417 | -
dc.description.abstract | It is of paramount importance that formative feedback is meaningful in order to drive student learning. Achieving this, however, relies upon a clear and constructively aligned model of quality being applied consistently across submissions. This poster presentation raises concerns about the inter-rater reliability of code reviews conducted by teaching assistants in the absence of such a model. Five teaching assistants each reviewed 12 purposely selected programs submitted by introductory programming students. An analysis of their reliability revealed that while teaching assistants were self-consistent, they each assessed code quality in different ways. This suggests a need for standard models of program quality and rubrics, alongside supporting technology, to be used during code reviews to improve the reliability of formative feedback. | en_US
dc.language.iso | en | en_US
dc.publisher | ACM | en_US
dc.source | Proceedings of the 20th Annual ACM Conference on Innovation and Technology in Computer Science Education | -
dc.subject | Programming | en_US
dc.subject | Code review | en_US
dc.subject | Code inspection | en_US
dc.subject | Grading | en_US
dc.subject | Quality | en_US
dc.subject | Assessment | en_US
dc.subject | Reliability | en_US
dc.subject | Agreement | en_US
dc.subject | Consistency | en_US
dc.title | Reliability in the assessment of program quality by teaching assistants during code reviews | en_US
dc.type | Conference Paper | en_US
dc.identifier.doi | http://dx.doi.org/10.1145/2729094.2754844 | -
pubs.publication-status | Published | -
Appears in Collections:Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
Fulltext.pdf | | 140.85 kB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.