Human Behavior and Emerging Technologies
We’re thrilled to share that our manuscript “A (Mid)Journey Through Reality: Assessing Accuracy, Impostor Bias, and Automation Bias in Human Detection of AI-Generated Images” has been accepted for publication in Human Behavior and Emerging Technologies (IF: 3; AR: 16%; Q1 in Psychology, 96th p.; Q1 in Computer Science, 88th p.).
This interdisciplinary project bridges psychology and computer science to explore how (and whether) people detect AI-generated images, with important implications for media literacy, policy, and AI safety. Specifically, it builds on and extends our earlier work “GenAI Mirage: The Impostor Bias and the Deepfake Detection Challenge” (https://doi.org/10.1016/j.fsidi.2024.301795), following up those findings with a larger empirical study designed primarily to validate the impostor bias: the tendency for people to systematically distrust (or misattribute) the authenticity of images in the age of generative AI.
(As such, this paper both deepens and empirically tests the hypotheses we proposed in the first study.)
We began structuring this research almost two years ago, and I’m delighted to finally see it come to light.
Last but not least, a big thank you to all the participants!
Authors: Mirko Casu, Luca Guarnera, Ignazio Zangara, Pasquale Caponnetto, Sebastiano Battiato.
