
Research from Zurich shows: This is why AI-generated fake news is extremely dangerous

Researchers from the University of Zurich have investigated the opportunities and risks of ChatGPT. Alarming conclusion: due to the danger of AI-generated disinformation campaigns, politicians must act.

ChatGPT has a knack for producing convincing disinformation. In a new study from the University of Zurich (UZH), participants had more difficulty recognizing fake news in AI-generated tweets than in human-written ones.

At the same time, participants understood tweets from ChatGPT better, according to the study, published Wednesday evening in the journal Science Advances.

The study participants were unable to reliably distinguish which tweets were written by ChatGPT and which were written by humans. “GPT-3 is a double-edged sword,” the study says.

For the study, researchers at the University of Zurich asked 697 participants between the ages of 26 and 76 to guess whether a tweet was written by a real Twitter user or by ChatGPT (GPT-3). In addition, the participants were asked whether the tweets were true or not.

Topics covered included the coronavirus pandemic and vaccine safety, climate change, and homeopathic cancer treatments.

Participants correctly identified misinformation written by humans 92 percent of the time, but misinformation generated by ChatGPT only 89 percent of the time.

In addition, participants took longer on average to judge whether human-written tweets were accurate. According to the researchers, this shows that GPT-3 informs “more efficiently” than humans.

These results suggest that information campaigns created by GPT-3 and evaluated by trained people would be more effective in crisis situations, for example, as the UZH writes in a statement about the study.

However, the findings also reveal the risks of AI-generated content. The researchers therefore recommend that policymakers respond with strict, ethics-based regulation to counter the potential threat posed by these disruptive technologies.

“The results show how crucial proactive regulation is to avert potential harm from AI-driven disinformation campaigns.”

Recognizing the risks of generative AI is crucial “to protect public health and maintain a robust and reliable information ecosystem in the digital age”.

(dsc/sda)

source: watson

Published by
Maxine
