Zurich scientist warns against AI – this is how “ChatGPT” falsifies quotes and sources

The popular OpenAI chatbot turns out to be a scientific fake news catapult – and ultimately a threat to democracy?
Author: Daniel Schurter

A data scientist from Zurich caused a stir on Twitter in recent days. The reason: she warns about the risks and side effects of ChatGPT. This article summarizes the main findings.

“If we can’t agree on the facts, how are we supposed to make decisions and do politics together?”

What happened?

Anyone who uses ChatGPT not just for entertainment but for serious purposes must expect some nasty surprises. This is evident from the research of Teresa Kubacka.

She holds a PhD in physics from ETH and works independently as a data scientist, based in Zurich.

The scientist pointed out disturbing findings on Twitter last week.

“Today I asked ChatGPT about the topic I wrote my PhD on. There were reasonable-sounding statements and reasonable-looking quotes. So far so good – until I checked the quotes for accuracy. And it got scary when I asked about a physical phenomenon that doesn’t exist.”

What exactly is wrong with ChatGPT?

Almost everything that should not go wrong when writing a scientific text:

  • Artificial intelligence (AI) can fabricate scientific sources almost perfectly.
  • It does this so convincingly that even experts in the field in question have trouble recognizing the associated misinformation as such.
  • The AI’s fabrications, known as “data hallucinations”, can be traced back partly, but not only, to the questions people ask.

Teresa Kubacka says she left the chat with the AI “with an intense feeling of creepiness”:

“I just experienced a parallel universe of plausible-sounding, non-existent phenomena, confidently supported by citations from non-existent research.”

The data scientist’s conclusion and “the moral of the story”, as she herself writes:

“Please DO NOT ask ChatGPT to give you factual, scientific information. It will produce an incredibly plausible-sounding hallucination. And even a qualified expert will have a hard time figuring out what’s wrong.”

How could that happen?

Apparently, the AI has already learned to “cheat”, writes a colleague at futurezone.at.

To understand this, one needs to know how Teresa Kubacka went about exposing the fantasizing chatbot.

“She had ChatGPT write an essay on the topic and then used a trick to get the chatbot to name its sources (she had to tell the chatbot to pretend to be a scientist). Kubacka then took a close look at the sources the program subsequently spat out – and had to realize that the references apparently didn’t exist.”

When the data scientist analyzed ChatGPT’s text output, she noticed several fabrications. In one case, the researcher who was supposed to have written the scientific “paper” cited by the AI did exist, but the cited work did not.

In another case there was a researcher with a similar name at a similar university, but he was doing research in a completely different field. And for further references given by the AI, it turned out that neither the researchers nor the cited works existed.

Kubacka then repeated the experiment with a similar, “slightly more specific” topic and found that “everything ChatGPT spat out as scientific sources was bogus”.
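Such checks can also be partially automated. The sketch below, a minimal illustration rather than Kubacka’s actual workflow, queries the public Crossref database for a bibliographic match; the title and author in the example are hypothetical stand-ins for a chatbot-generated citation.

```python
import requests

def reference_exists(title: str, author: str) -> bool:
    """Ask the public Crossref API whether a record roughly matching
    the given title and author is on file."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "query.author": author, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    wanted = title.lower()
    # Crude plausibility check: does any hit's title overlap the query?
    for item in items:
        found = " ".join(item.get("title", [])).lower()
        if found and (wanted in found or found in wanted):
            return True
    return False

# Hypothetical, chatbot-style citation -- almost certainly not on file:
print(reference_exists("Inverse electromagnon resonance cascades", "J. Doe"))
```

A miss on Crossref is not proof that a paper does not exist – not every publication carries a DOI – but it is the kind of cheap first filter that Kubacka applied by hand.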

It should be noted that the ChatGPT developers transparently point out the risks: a note about possible misinformation can be found on the OpenAI website itself – only that does not reassure the AI critics. OpenAI CEO Sam Altman tweeted:

“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.

It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”

Why is this highly dangerous not just for students?

Austrian journalist Barbara Wimmer notes:

“I wonder if humanity is ready for such AI systems. When even experts can’t judge at first glance whether something the AI spits out is really plausible.”

As is well known, the credibility of the (journalistic) media has suffered from the spread of fake news via social media, says Wimmer. She fears that something similar could now take hold in science as well.

Data scientist Teresa Kubacka writes that she is deeply concerned about what this chatbot misbehavior means for our society:

“Scientists may be careful enough not to use such a tool, or at least to fact-check its output on the spot. But even if you are an expert: no one can know everything. We are all ignorant in most areas, with only a few exceptions.”

And she draws the following scenario:

“People will use ChatGPT to ask about things beyond their expertise, simply because they can. Because they are curious and need an answer in an accessible form, not locked behind paywalls or difficult language.

We will be fed hallucinations indistinguishable from the truth, written without grammatical errors, supported by hallucinated evidence, and passing a first critical check. With similar models in use, how will we be able to distinguish a real pop-science article from a fake one?”

In fact, there are increasing critical voices from the US, the home country of ChatGPT and Co., pointing to a much greater danger from AI: it could be used as a weapon to destroy trust in the rule of law and ultimately in democracy.

American investor Paul Kedrosky tweeted:

“I’m so disturbed by what I’m suddenly seeing everywhere with ChatGPT these past few days. College and high school essays, college applications, legal documents, coercion, threats, programming, etc.: all fake, all highly believable.”

The investor believes that ChatGPT and related tools should be withdrawn immediately. And if the technology is ever reintroduced, then only with strict restrictions, for example on text length. It would also be conceivable, he writes, to use “some kind of watermark” to make AI-generated content recognizable – an idea sketched below.

“As I’ve said elsewhere, it’s a disgrace to OpenAI to unleash this pocket nuke on an unprepared society with no restrictions.”
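The “watermark” idea can be made concrete. One scheme discussed publicly around that time is to bias the model’s word choices toward a pseudorandom “green list” derived from the preceding token; a detector that knows the seeding rule can then measure that bias. The toy sketch below illustrates only the detection principle – it is not OpenAI’s actual scheme, and using whole words as tokens is a simplification.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    # Pseudo-randomly split the vocabulary in half, seeded by the
    # previous token; a watermarking generator would favour this half.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: len(shuffled) // 2])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detector side: count how often a token falls in the green list
    # implied by its predecessor. Around 0.5 looks human; well above
    # 0.5 suggests watermarked output.
    hits = sum(tok in green_list(prev, vocab) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Real proposals operate on model tokens and probabilities rather than words, but the statistical idea – and its fragility under paraphrasing – is the same.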

The American journalist Luke Zaleski then asks rhetorically:

“Is anyone else witnessing the ongoing destruction of the rule of law, truth, reality and democracy in America as one continuous, interconnected phenomenon that unfolds daily as a complex attack, but in fact amounts to a hostile takeover of the US in order to install an oligarchy in its place?”

And that brings us to the man who played a key role in funding the development of ChatGPT and other AI applications at OpenAI: Elon Musk.

Critics of the right-wing multibillionaire and new Twitter boss suspect he is working to destabilize democratic societies.

PS: Is Microsoft pulling the emergency brake?

Microsoft has invested $1 billion in OpenAI to boost the development of artificial intelligence. In concrete terms, this concerns the commercialization of AI applications that run on Microsoft’s Azure platform.

The technical background: the AI applications require enormous computing power, which a large cloud operator such as Microsoft can provide worldwide – for a fee.

For the time being, ChatGPT is still available free of charge to interested parties as a beta version. That is likely to change at a later date, because Microsoft is a profit-oriented company and wants to earn money with it.

Vicki Boykis, an engineer specializing in machine learning, argues that this is exactly why ChatGPT will not turn society upside down: Microsoft will hide the technology behind a paid programming interface (API). The beta version is a way for the Windows group to hedge against risk – by making OpenAI the responsible actor whenever the AI does something strange or unethical. And it is “a fantastic opportunity to collect training data.”
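What hiding the technology behind a paid API looks like in practice can be sketched with the OpenAI Python client as it existed at the time (the pre-1.0 “openai” package); the model name, prompt and key below are illustrative placeholders, and every such call is metered and billed per token:

```python
import openai  # pip install "openai<1.0"

openai.api_key = "sk-..."  # placeholder; real requests are billed per token

# A GPT-3-family completion call of the kind Microsoft and OpenAI monetize:
response = openai.Completion.create(
    model="text-davinci-003",  # illustrative GPT-3.5-era model
    prompt="Explain in two sentences why language models hallucinate citations.",
    max_tokens=120,
)
print(response["choices"][0]["text"].strip())
```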

Source: Watson
