Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/4376
Full metadata record
dc.contributor.author: Westerman, SJ
dc.contributor.author: Cribbin, T
dc.contributor.author: Collins, J
dc.date.accessioned: 2010-05-27T09:01:45Z
dc.date.available: 2010-05-27T09:01:45Z
dc.date.issued: 2010
dc.identifier.citation (en): Journal of the American Society for Information Science and Technology, 61(8): 1535-1542, Aug 2010
dc.identifier.issn: 1532-2882
dc.identifier.uri: http://bura.brunel.ac.uk/handle/2438/4376
dc.identifier.uri (en): http://onlinelibrary.wiley.com/doi/10.1002/asi.21361/abstract
dc.description.abstract (en): Two studies are reported that examined the reliability of human assessments of document similarity and the association between human ratings and the results of n-gram automatic text analysis (ATA). Human interassessor reliability (IAR) was moderate to poor. However, correlations between average human ratings and n-gram solutions were strong. The average correlation between ATA and individual human solutions was greater than IAR. N-gram length influenced the strength of association, but optimum string length depended on the nature of the text (technical vs. nontechnical). We conclude that the methodology applied in previous studies may have led to overoptimistic views on human reliability, but that an optimal n-gram solution can provide a good approximation of the average human assessment of document similarity, a result that has important implications for future development of document visualization systems.
dc.language.iso (en): en
dc.publisher (en): Wiley-Blackwell
dc.title (en): Human assessments of document similarity
dc.type (en): Research Paper
dc.identifier.doi: http://dx.doi.org/10.1002/asi.21361
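The abstract above refers to n-gram automatic text analysis (ATA) of document similarity. As an illustration only, the Python sketch below scores the similarity of two documents from their overlapping character n-grams. The paper's actual pipeline is not given on this page, so the specific choices here (character rather than word n-grams, raw frequency vectors, cosine similarity) are assumptions, not the authors' method.

    # Illustrative sketch of n-gram document similarity (not the paper's
    # published method). Assumptions: character n-grams, raw frequency
    # vectors, cosine similarity.
    from collections import Counter
    from math import sqrt

    def ngrams(text: str, n: int) -> Counter:
        """Count overlapping character n-grams in a lowercased text."""
        text = text.lower()
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def cosine_similarity(a: Counter, b: Counter) -> float:
        """Cosine similarity between two sparse n-gram frequency vectors."""
        dot = sum(count * b[gram] for gram, count in a.items())
        norm_a = sqrt(sum(c * c for c in a.values()))
        norm_b = sqrt(sum(c * c for c in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def document_similarity(doc1: str, doc2: str, n: int = 3) -> float:
        """Similarity of two documents under an n-gram representation."""
        return cosine_similarity(ngrams(doc1, n), ngrams(doc2, n))

    if __name__ == "__main__":
        a = "Human assessments of document similarity are only moderately reliable."
        b = "Automatic n-gram analysis approximates average human similarity ratings."
        # Sweeping n mirrors the abstract's point that the optimum string
        # length depends on the nature of the text.
        for n in (2, 3, 4, 5):
            print(f"n={n}: similarity = {document_similarity(a, b, n):.3f}")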
Appears in Collections:Computer Science
Dept of Computer Science Research Papers

Files in This Item:
File           Description   Size        Format
Fulltext.pdf                 251.64 kB   Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.