A data scientist from Zurich caused a stir on Twitter in recent days. The reason: she warns about the risks and side effects of ChatGPT. This article summarizes her main findings.
Anyone who uses ChatGPT not for entertainment but for serious purposes should expect some nasty surprises. This is evident from research by Teresa Kubacka.
She holds a PhD in physics from ETH and works as an independent data scientist based in Zurich.
The scientist pointed out disturbing findings on Twitter last week.
Almost everything went wrong that should not go wrong when writing a scientific text:
Teresa Kubacka describes leaving the chat with the AI “with an intense feeling of creepiness”:
The data scientist’s conclusion and “the moral of the story”, as she herself writes:
Apparently, the AI has already learned to “cheat,” writes a colleague on futurezone.at.
To understand her findings, you need to know how Teresa Kubacka went about exposing the hallucinating chatbot.
When the data scientist analyzed ChatGPT’s output, she noticed several fabrications. In one case, the researcher who supposedly wrote the scientific “paper” cited by the AI existed, but the cited work did not.
In another case, a researcher with a similar name did work at a university, but in a completely different field. And with further references produced by the AI, it turned out that neither the researchers nor the cited works existed.
Kubacka then repeated the experiment on a similar but “slightly more specific” topic and found that “everything ChatGPT spat out from scientific sources was bogus.”
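The article does not say which tools, if any, Kubacka used for her checks. Conceptually, though, the verification she performed by hand can be sketched in a few lines of code. Here is a minimal, illustrative Python sketch that looks up a cited title in Crossref’s public bibliographic API; the matching heuristic and the example title are assumptions, not her method:

```python
import requests

def citation_exists(title: str) -> bool:
    """Rough check: does a cited paper title match a real Crossref record?"""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    top_title = (items[0].get("title") or [""])[0].lower()
    # Crude comparison; a serious check would also verify authors, year, and DOI.
    return title.lower() in top_title or top_title in title.lower()

# A reference invented by a chatbot will usually fail this kind of lookup.
print(citation_exists("Attention Is All You Need"))
```

A fluent-sounding but fabricated reference is exactly the case this kind of lookup catches: the query returns either nothing or a paper that clearly does not match.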
To be fair, the ChatGPT developers point out the risks transparently: a note about possible misinformation also appears on the OpenAI website – only that does not reassure the AI critics.
“It is a mistake to rely on it for anything important. It’s a taste of progress; we still have a lot of work to do in terms of robustness and veracity.”
Austrian journalist Barbara Wimmer notes:
As is well known, the credibility of (journalistic) media has declined due to the spread of fake news via social media, says Wimmer. She fears that something similar could take hold in science.
Data scientist Teresa Kubacka writes that she is deeply concerned about what this chatbot misbehavior means for our society. She continues:
And she draws the following scenario:
“We are fed hallucinations indistinguishable from the truth, written without grammatical errors, backed by hallucinated evidence, and passing a first round of scrutiny. With similar models in use, how can we distinguish a real popular science article from a fake one?”
In fact, critical voices are increasing in the US, the home country of ChatGPT and its makers, pointing to a much greater danger from AI: it could be used as a weapon to destroy trust in the rule of law and, ultimately, in democracy.
American investor Paul Kedrosky tweeted:
The American journalist Luke Zaleski writes that he believes ChatGPT and related applications should be withdrawn immediately. If the technology is ever reintroduced, it should come only with strict restrictions, such as limits on text length. It would also be conceivable to use “some kind of watermark” to make AI-generated content recognizable.
Then the American journalist asks rhetorically:
And that brings us to the man who played a key role in funding the development of ChatGPT and other AI applications at OpenAI: Elon Musk.
Critics of the right-wing multibillionaire and new Twitter boss suspect he is working to destabilize democratic societies.
Microsoft has invested $1 billion in OpenAI to boost the development of artificial intelligence. In concrete terms, this concerns the commercialization of AI applications that run on Microsoft’s Azure platform.
The technical background: AI applications require enormous computing power, which a large cloud operator such as Microsoft can provide worldwide – for a fee.
For the time being, ChatGPT – as a beta version – remains free for interested users. That should change at a later date: Microsoft is a profit-oriented company and wants to monetize the technology.
Vicki Boykis, an engineer specializing in machine learning, argues that this is exactly why ChatGPT will not turn society upside down. Microsoft will hide the technology behind a paid programming interface (API). The beta version is a way for the Windows maker to hedge against risk by making OpenAI the responsible actor when the AI does something strange or unethical. And it is “a fantastic opportunity to collect training data.”
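To make Boykis’s point about a paid API concrete: this is roughly what metered access to such a model looks like through OpenAI’s published Python client as it exists today. The model name, prompt, and key are illustrative placeholders, and the article’s scenario is not tied to this exact interface:

```python
from openai import OpenAI

# Placeholder key; real access requires an account, and usage is billed per token.
client = OpenAI(api_key="sk-...")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user",
               "content": "Name three peer-reviewed papers on multiferroics."}],
)

# The reply is fluent either way; nothing in it signals whether
# the cited papers actually exist.
print(response.choices[0].message.content)
```

Behind such an endpoint, every request is logged and billed, which is what makes the free beta both a liability shield and, as Boykis puts it, a way to collect training data.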
Source: Watson