Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/1854
Title: Making inferences with small numbers of training sets
Authors: Kirsopp, C
Shepperd, MJ
Keywords: software development management
Issue Date: 2002
Publisher: IEEE
Citation: IEE Proceedings - Software, 149(5): 123-130
Abstract: A potential methodological problem with empirical studies that assess project effort prediction systems is discussed. Frequently, a hold-out strategy is deployed, so that the data set is split into a training set and a validation set. Inferences are then made concerning the relative accuracy of the different prediction techniques under examination. This is typically done on very small numbers of sampled training sets. It is shown that such studies can lead to almost random results (particularly where relatively small effects are being studied). To illustrate this problem, two data sets are analysed using a configuration problem for case-based prediction, with results generated from 100 training sets. This enables results to be produced with quantified confidence limits. From this it is concluded that, in both cases, using fewer than five training sets leads to untrustworthy results, and that ideally more than 20 sets should be deployed. Unfortunately, this casts doubt on a number of published empirical validations of prediction techniques, and so it is suggested that further research is needed as a matter of urgency.
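The methodology the abstract describes can be illustrated with a minimal sketch: repeatedly sample random hold-out (training/validation) splits, score a case-based predictor on each, and report how the estimated accuracy stabilises as the number of sampled training sets grows. The synthetic project data, the simple k-nearest-neighbour predictor standing in for case-based prediction, and the MMRE accuracy measure below are all illustrative assumptions, not the paper's actual data sets or configuration.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic project data (size feature -> effort); not the paper's data sets.
n_projects = 60
size = rng.uniform(10, 500, n_projects)
effort = 3.0 * size * rng.lognormal(0.0, 0.4, n_projects)
X = size.reshape(-1, 1)
y = effort

def knn_predict(X_train, y_train, X_test, k=3):
    """Simple case-based (k-nearest-neighbour) effort prediction."""
    preds = []
    for x in X_test:
        dist = np.abs(X_train[:, 0] - x[0])
        nearest = np.argsort(dist)[:k]
        preds.append(y_train[nearest].mean())
    return np.array(preds)

def mmre(actual, predicted):
    """Mean magnitude of relative error, a common effort-prediction accuracy measure."""
    return np.mean(np.abs(actual - predicted) / actual)

def holdout_accuracy(n_splits, train_fraction=0.67):
    """Sample n_splits random train/validation splits and return the MMRE of each."""
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(n_projects)
        cut = int(train_fraction * n_projects)
        tr, va = idx[:cut], idx[cut:]
        scores.append(mmre(y[va], knn_predict(X[tr], y[tr], X[va])))
    return np.array(scores)

for n_splits in (1, 5, 20, 100):
    s = holdout_accuracy(n_splits)
    half_width = 1.96 * s.std(ddof=1) / np.sqrt(n_splits) if n_splits > 1 else float("nan")
    print(f"{n_splits:3d} splits: mean MMRE = {s.mean():.3f} +/- {half_width:.3f} (95% CI half-width)")

Running the sketch shows the point the paper quantifies properly: with one or a handful of splits the estimated accuracy swings widely (and no confidence interval can be attached to a single split), whereas with 20 or more splits the mean stabilises and the confidence interval narrows enough to support comparisons between techniques.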
URI: http://bura.brunel.ac.uk/handle/2438/1854
DOI: http://dx.doi.org/10.1049/ip-sen:20020695
ISSN: 1462-5970
Appears in Collections: Computer Science
Dept of Computer Science Research Papers

Files in This Item:
File: 01049201.pdf   Size: 825.74 kB   Format: Adobe PDF

