In Generative AI We Trust

dc.bibliographicCitation.issue: 1
dc.bibliographicCitation.volume: 8
dc.contributor.author: Kuznetsova, Elizaveta
dc.contributor.author: Makhortykh, Mykola
dc.contributor.author: Vziatysheva, Victoria
dc.contributor.author: Stolze, Martha
dc.contributor.author: Baghumyan, Ani
dc.contributor.author: Urman, Aleksandra
dc.date.accessioned: 2025-02-21T13:17:11Z
dc.date.available: 2025-02-21T13:17:11Z
dc.date.issued: 2024
dc.date.updated: 2025-02-18T04:28:05Z
dc.description.abstract: This article presents a comparative analysis of the potential of two large language model (LLM)-based chatbots—ChatGPT and Bing Chat (recently rebranded as Microsoft Copilot)—to detect the veracity of political information. We use AI auditing methodology to investigate how chatbots evaluate true, false, and borderline statements on five topics: COVID-19, Russian aggression against Ukraine, the Holocaust, climate change, and LGBTQ+-related debates. We compare how the chatbots respond in high- and low-resource languages by using prompts in English, Russian, and Ukrainian. Furthermore, we explore chatbots’ ability to evaluate statements according to the political communication concepts of disinformation, misinformation, and conspiracy theory, using definition-oriented prompts. We also systematically test how such evaluations are influenced by source attribution. The results show the high potential of ChatGPT for the baseline veracity evaluation task, with 72% of the cases evaluated in accordance with the baseline on average across languages without pre-training. Bing Chat evaluated 67% of the cases in accordance with the baseline. We observe significant disparities in how the chatbots evaluate prompts in high- and low-resource languages and how they adapt their evaluations to political communication concepts, with ChatGPT providing more nuanced outputs than Bing Chat. These findings highlight the potential of LLM-based chatbots in tackling different forms of false information in online environments, but also point to the substantial variation in how such potential is realized due to specific factors (e.g. the language of the prompt or the topic).
dc.description.sponsorship: Open Access funding enabled and organized by Projekt DEAL.
dc.description.sponsorship: Bundesministerium für Bildung und Forschung (http://dx.doi.org/10.13039/501100002347)
dc.description.sponsorship: Weizenbaum-Institut e.V. (1789)
dc.identifier.doi: 10.1007/s42001-024-00338-8
dc.identifier.uri: http://resolver.sub.uni-goettingen.de/purl?fidaac-11858/3292
dc.language.iso: eng
dc.relation.issn: 2432-2717
dc.relation.journal: Journal of Computational Social Science
dc.rights: CC BY 4.0
dc.subject.ddc: ddc:320
dc.subject.ddc: ddc:400
dc.subject.field: linguistics
dc.subject.field: political science
dc.subject.field: digital humanities
dc.title: In Generative AI We Trust
dc.title.alternative: Can Chatbots Effectively Verify Political Information?
dc.type: article
dc.type.version: publishedVersion
dspace.entity.type: Publication

Files

Original bundle
Name: 42001_2024_Article_338.pdf
Size: 2.03 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 5.84 KB
Format: Item-specific license agreed upon submission