Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/28947
Title: Machine Impostors Can Avoid Human Detection and Interrupt the Formation of Stable Conventions by Imitating Past Interactions: A Minimal Turing Test
Authors: Müller, TF
Brinkmann, L
Winters, J
Pescetelli, N
Keywords: Turing test;experimental semiotics;convention;reciprocity;interaction history;language evolution;communication;human–machine interaction
Issue Date: 24-Apr-2023
Publisher: Wiley on behalf of Cognitive Science Society (CSS)
Citation: Müller, T.F. et al. (2023) 'Machine Impostors Can Avoid Human Detection and Interrupt the Formation of Stable Conventions by Imitating Past Interactions: A Minimal Turing Test', Cognitive Science, 47 (4), e13288, pp. 1 - 24. doi: 10.1111/cogs.13288.
Abstract: Interactions between humans and bots are increasingly common online, prompting some legislators to pass laws that require bots to disclose their identity. The Turing test is a classic thought experiment testing humans’ ability to distinguish a bot impostor from a real human based on exchanged text messages. In the current study, we propose a minimal Turing test that avoids natural language, thus allowing us to study the foundations of human communication. In particular, we investigate the relative roles of conventions and reciprocal interaction in determining successful communication. Participants in our task could communicate only by moving an abstract shape in a 2D space. We asked participants to categorize their online social interaction as being with a human partner or a bot impostor. The main hypotheses were that access to the interaction history of a pair would make a bot impostor more deceptive and interrupt the formation of novel conventions between the human participants. By copying a pair’s previous interactions, a bot prevents humans from communicating successfully simply by repeating what worked before. By comparing bots that imitate behavior from the same or a different dyad, we find that impostors are harder to detect when they copy the participants’ own partners, leading to less conventional interactions. We also show that reciprocity is beneficial for communicative success when the bot impostor prevents conventionality. We conclude that machine impostors can avoid detection and interrupt the formation of stable conventions by imitating past interactions, and that both reciprocity and conventionality are adaptive strategies under the right circumstances. Our results provide new insights into the emergence of communication and suggest that online bots mining personal information, for example, on social media, may more easily become indistinguishable from humans.
Description: Supporting Information is available online at: https://onlinelibrary.wiley.com/doi/10.1111/cogs.13288#support-information-section.
URI: https://bura.brunel.ac.uk/handle/2438/28947
DOI: https://doi.org/10.1111/cogs.13288
ISSN: 0364-0213
Other Identifiers: ORCiD: James Winters https://orcid.org/0000-0003-2982-2991
Article number: e13288
Appears in Collections:Dept of Life Sciences Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2023 The Authors. Cognitive Science published by Wiley Periodicals LLC on behalf of Cognitive Science Society (CSS). This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.
Size: 1.21 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.