Research from Zurich shows: This is why AI-generated fake news is extremely dangerous

Researchers from the University of Zurich have investigated the opportunities and risks of ChatGPT. Alarming conclusion: due to the danger of AI-generated disinformation campaigns, politicians must act.

ChatGPT has a knack for producing convincing disinformation. In a new study from the University of Zurich (UZH), participants had more difficulty recognizing fake news in AI-generated tweets than in human-written tweets.

At the same time, the participants understood tweets from ChatGPT better, according to the study published Wednesday evening in the journal Science Advances.

The study participants were unable to reliably distinguish which tweets were written by ChatGPT and which were written by humans. “GPT-3 is a double-edged sword,” the study says.

For the study, researchers at the University of Zurich asked 697 participants between the ages of 26 and 76 to guess whether a tweet was written by a real Twitter user or by ChatGPT (GPT-3). In addition, the participants were asked whether the tweets were true or not.

Topics covered included the coronavirus pandemic and vaccine safety, climate change, and homeopathic cancer treatments.

The participants recognized misinformation written by humans with 92 percent accuracy, but misinformation generated by ChatGPT with only 89 percent accuracy.

In addition, participants took longer on average to determine whether human-written tweets were accurate or not. According to the researchers, this shows that GPT-3 informs “more efficiently” than humans.

These results indicate that information campaigns created by GPT-3, but evaluated by trained humans, would be more effective in crisis situations, for example, the UZH writes in a statement on the study.

However, the findings also revealed the risks of AI-generated content. The researchers therefore recommend that policymakers respond with strict and ethical regulation to counter the potential threat posed by these disruptive technologies.

“The results show how crucial proactive regulation is to avert potential harm from AI-driven disinformation campaigns.”

Recognizing the risks of generative AI is crucial “to protect public health and maintain a robust and reliable information ecosystem in the digital age.”

(dsc/sda)

source: watson
