Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/11417
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Scott, MJ | - |
dc.contributor.author | Ghinea, G | - |
dc.coverage.spatial | Vilnius, Lithuania | - |
dc.date.accessioned | 2015-09-28T15:56:56Z | - |
dc.date.available | 2015-07-06 | - |
dc.date.available | 2015-09-28T15:56:56Z | - |
dc.date.issued | 2015 | - |
dc.identifier.citation | Proceedings of the 2015 ACM Conference on Innovation and Technology in Computer Science Education, 346-346, (2015) | en_US |
dc.identifier.issn | https://www.academia.edu/11693860/Reliability_in_the_Assessment_of_Program_Quality_by_Teaching_Assistants_During_Code_Reviews | - |
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/11417 | - |
dc.description.abstract | It is of paramount importance that formative feedback is meaningful in order to drive student learning. Achieving this, however, relies upon a clear and constructively aligned model of quality being applied consistently across submissions. This poster presentation raises concerns about the inter-rater reliability of code reviews conducted by teaching assistants in the absence of such a model. Five teaching assistants each reviewed 12 purposely selected programs submitted by introductory programming students. An analysis of their reliability revealed that while teaching assistants were self-consistent, they each assessed code quality in different ways. This suggests a need for standard models of program quality and rubrics, alongside supporting technology, to be used during code reviews to improve the reliability of formative feedback. | en_US |
dc.language.iso | en | en_US |
dc.publisher | ACM | en_US |
dc.source | Proceedings of the 20th Annual ACM Conference on Innovation and Technology in Computer Science Education | - |
dc.subject | Programming | en_US |
dc.subject | Code review | en_US |
dc.subject | Code inspection | en_US |
dc.subject | Grading | en_US |
dc.subject | Quality | en_US |
dc.subject | Assessment | en_US |
dc.subject | Reliability | en_US |
dc.subject | Agreement | en_US |
dc.subject | Consistency | en_US |
dc.title | Reliability in the assessment of program quality by teaching assistants during code reviews | en_US |
dc.type | Conference Paper | en_US |
dc.identifier.doi | http://dx.doi.org/10.1145/2729094.2754844 | - |
pubs.publication-status | Published | - |
Appears in Collections: | Dept of Computer Science Research Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Fulltext.pdf | | 140.85 kB | Adobe PDF | View/Open |
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.