Despite recent advances in understanding the capabilities and limits of generative artificial intelligence (GenAI) models, we are only beginning to understand their capacity to assess and reason about the veracity of content. We evaluate multiple GenAI models on tasks that involve rating, and reasoning about, the credibility of information. The information in our experiments is drawn from content that subnational U.S. politicians post to Facebook. We find that GPT-4o outperforms the other models, but all models exhibit only moderate agreement with human coders.
Elected officials occupy privileged positions in public communication about important topics, and these roles extend to the digital world. Just as public officials stand to lead constructive online dialogue, they also hold the potential to accelerate the dissemination of low-factual and harmful content. This study explores and explains the sharing of low-factual content by examining nearly 500,000 Facebook posts by U.S. state legislators from 2020 to 2021. We validate a low-factual content detection approach widely used in misinformation studies and apply the measure to all of the posts we collect.
We study the role of elected officials in the dissemination of misinformation on Twitter. This is a particularly salient online population, since elected officials serve as primary sources of information for many stakeholders across the public, media, government, and industry. We analyze the content of tweets posted from the accounts of over 3,000 U.S. state lawmakers throughout 2020 and 2021. Specifically, we identify the dissemination of URLs linking to unreliable content.