“We aim to mitigate the risks associated with false information on individual and public health,” Germani said. “The emergence of AI models like GPT-3 sparked our interest in exploring how AI influences the information landscape and how people perceive and interact with information and misinformation.”
Researchers investigated 11 topics they deemed susceptible to disinformation – including climate change, COVID-19, vaccine safety and 5G technology. For each topic, the study authors collected AI-generated tweets containing both false and true information, along with samples of real tweets on the same subjects.
Researchers then conducted a survey using these tweets. Respondents were asked to determine whether each tweet contained accurate information and whether it was written by a human or by AI. The experiment found that respondents were better at identifying disinformation in “organic false” tweets – those written by humans but containing false information – than in “synthetic false” tweets, the ones generated by GPT-3.