
News Collective

Complete New Zealand News World

Misinformation generated by ChatGPT may be more convincing than human misinformation


This content was published on Jun 28, 2023 – 18:44


Science Desk, June 28 (EFE). GPT-3, the model behind the ChatGPT chatbot, and other generative AI tools can inform and mislead social network users more effectively than humans, according to a study published today in Science Advances.

A team led by the University of Zurich conducted a study with 697 participants using GPT-3, which revealed that participants had trouble distinguishing tweets written by humans from those generated by the chatbot.

They also had trouble determining which AI-generated messages were accurate and which were not.

Since its launch in November 2022, widespread use of ChatGPT has raised public concern about the potential spread of misinformation online, especially on social media platforms, the authors recall.

Since these types of tools are relatively new to the public domain, the team decided to dig deeper into the different aspects of using them.

They recruited 697 English-speaking people from the United States, the United Kingdom, Canada, Australia and Ireland, between the ages of 26 and 76, for the study.

The task was to evaluate human-written and GPT-3-generated tweets containing accurate and inaccurate information on topics such as vaccines, autism, 5G technology, covid-19, climate change and evolution, which are often the subject of public misconceptions.

For each topic, the researchers collected human-written Twitter messages and instructed the GPT-3 model to generate others, some containing accurate information and others false.

Study participants had to judge whether the messages were true or false and whether they were generated by a human or GPT-3.


The results, summarized in the publication, indicated that participants were better at identifying misinformation written by humans and at recognizing the accuracy of truthful tweets generated by GPT-3.

However, they were also more likely to consider the misinformation generated by GPT-3 to be accurate.

The authors conclude, “Our findings raise important questions about the potential uses and abuses of GPT-3 and other advanced AI text generators and the implications for information dissemination in the digital age.” EFE


© EFE 2023. The retransmission and redistribution of all or part of the contents of EFE's services is expressly prohibited without the prior and express consent of Agencia EFE S.A.