Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/11593
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sigweni, B | -
dc.contributor.author | Shepperd, M | -
dc.coverage.spatial | Nanjing | -
dc.date.accessioned | 2015-11-13T12:40:58Z | -
dc.date.available | 2015-11-13T12:40:58Z | -
dc.date.issued | 2015 | -
dc.identifier.citation | EASE '15 Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering, 32, Nanjing, China (April 27-29, 2015) | en_US
dc.identifier.isbn | 978-1-4503-3350-4 | -
dc.identifier.uri | http://dl.acm.org/citation.cfm?id=2745832 | -
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/11593 | -
dc.description.abstract | Context: In recent years there has been growing concern about conflicting experimental results in empirical software engineering. This has been paralleled by awareness of how bias can impact research results. Objective: To explore the practicalities of blind analysis of experimental results to reduce bias. Method: We apply blind analysis to a real software engineering experiment that compares three feature weighting approaches with a naïve benchmark (sample mean) to the Finnish software effort data set. We use this experiment as an example to explore blind analysis as a method to reduce researcher bias. Results: Our experience shows that blinding can be a relatively straightforward procedure. We also highlight various statistical analysis decisions which ought not be guided by the hunt for statistical significance and show that results can be inverted merely through a seemingly inconsequential statistical nicety (i.e., the degree of trimming). Conclusion: Whilst there are minor challenges and some limits to the degree of blinding possible, blind analysis is a very practical and easy to implement method that supports more objective analysis of experimental results. Therefore we argue that blind analysis should be the norm for analysing software engineering experiments. | en_US
dc.language.iso | en | en_US
dc.publisher | ACM | en_US
dc.source | Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering | -
dc.subject | Researcher Bias | en_US
dc.subject | Blind analysis | en_US
dc.subject | Software engineering experimentation | en_US
dc.subject | Software effort estimation | en_US
dc.title | Using blind analysis for software engineering experiments | en_US
dc.type | Conference Paper | en_US
dc.identifier.doi | http://dx.doi.org/10.1145/2745802.2745832 | -
pubs.publication-status | Published | -
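The abstract above describes blinding the labels of the compared estimation techniques before analysis, and notes that a seemingly inconsequential choice such as the degree of trimming can invert the results. The following is a minimal illustrative sketch of such a blinding step; the technique names, synthetic residuals, and use of scipy.stats.trim_mean are assumptions for illustration and do not reproduce the authors' actual analysis.

```python
# Illustrative sketch of blind analysis (assumption: not the authors' actual code).
# Each technique's absolute residuals are relabelled with neutral codes before any
# summary statistics are chosen, so the analyst cannot "hunt" for significance on
# behalf of a favoured technique.
import random
import numpy as np
from scipy.stats import trim_mean

rng = random.Random(42)

# Hypothetical absolute residuals (|actual - predicted| effort) per technique.
results = {
    "feature_weighting_A": np.array([12.0, 30.5, 8.2, 95.0, 14.1]),
    "feature_weighting_B": np.array([11.4, 28.9, 9.7, 120.3, 13.5]),
    "feature_weighting_C": np.array([13.2, 27.1, 7.9, 88.6, 15.0]),
    "sample_mean_benchmark": np.array([20.1, 35.4, 18.3, 60.2, 22.7]),
}

# Blinding step: a third party shuffles and masks the labels, keeping the key sealed.
names = list(results)
rng.shuffle(names)
key = {f"Technique {chr(65 + i)}": name for i, name in enumerate(names)}
blinded = {masked: results[real] for masked, real in key.items()}

# Analysis step: performed only on the masked labels. The abstract's point about
# trimming is that the ranking of techniques can change with the trim level chosen.
for trim in (0.0, 0.1, 0.2):
    summary = {label: trim_mean(res, trim) for label, res in blinded.items()}
    ranking = sorted(summary, key=summary.get)
    print(f"trim={trim:.1f}: best -> worst = {ranking}")

# Only once the analysis decisions are fixed is the key opened to reveal which
# masked label corresponds to which technique.
print("Unblinding key:", key)
```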
Appears in Collections: Computer Science
Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
Fulltext.pdf |  | 263.8 kB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.