Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/1853
Title: Reliability and validity in comparative studies of software prediction models
Authors: Myrtveit, I
Stensrud, E
Shepperd, MJ
Keywords: Software metrics;Cost estimation;Cross-validation;Empirical methods;Arbitrary function approximators;Machine learning
Issue Date: 2005
Publisher: IEEE
Citation: IEEE Transactions on Software Engineering 31(5): 380-391, May 2005
Abstract: Empirical studies of software prediction models do not converge on an answer to the question "which prediction model is best?", and the reason for this lack of convergence is poorly understood. In this simulation study, we examined a frequently used research procedure comprising three main ingredients: a single data sample, an accuracy indicator, and cross-validation. Typically, empirical studies apply this procedure to compare a machine learning model with a regression model; we used simulation to perform the same comparison. The results suggest that it is the research procedure itself that is unreliable, and this unreliability may contribute strongly to the lack of convergence. Our findings therefore cast doubt on the conclusions of any study of competing software prediction models that used this research procedure as its basis of model comparison. We need to develop more reliable research procedures before we can have confidence in the conclusions of such comparative studies.
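
The procedure the abstract questions (a single data sample, an accuracy indicator, and cross-validation) is easy to reproduce. The Python sketch below is a hypothetical illustration, not the authors' simulation: the data-generating process, the choice of MMRE as the accuracy indicator, and the two competing models (ordinary least squares regression versus a k-nearest-neighbours learner) are assumptions made here only to show the shape of the procedure.

    # Illustrative sketch of the research procedure under scrutiny; the
    # data-generating process, models, and MMRE indicator are assumptions,
    # not details taken from the paper.
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.linear_model import LinearRegression
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)

    # Hypothetical single sample: effort grows with size, with log-normal noise.
    n = 100
    size = rng.uniform(10, 1000, n)                    # e.g. project size
    effort = 2.0 * size**0.9 * rng.lognormal(0, 0.4, n)
    X = size.reshape(-1, 1)

    def mmre(y_true, y_pred):
        """Mean Magnitude of Relative Error, a common accuracy indicator."""
        return np.mean(np.abs(y_true - y_pred) / y_true)

    models = {
        "regression (OLS)": LinearRegression(),
        "machine learning (k-NN)": KNeighborsRegressor(n_neighbors=5),
    }

    # Cross-validate each model on the one sample and compare mean MMRE.
    for name, model in models.items():
        errors = []
        kf = KFold(n_splits=5, shuffle=True, random_state=1)
        for train_idx, test_idx in kf.split(X):
            model.fit(X[train_idx], effort[train_idx])
            errors.append(mmre(effort[test_idx], model.predict(X[test_idx])))
        print(f"{name}: MMRE = {np.mean(errors):.3f}")

    # Rerunning with a different seed (i.e. a different single sample) can
    # reverse the ranking, which is the unreliability the study highlights.
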
URI: http://bura.brunel.ac.uk/handle/2438/1853
DOI: http://dx.doi.org/10.1109/TSE.2005.58
ISSN: 0098-5589
Appears in Collections: Computer Science
Dept of Computer Science Research Papers

Files in This Item:
File: 01438374.pdf  |  Size: 561.03 kB  |  Format: Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.