This online tool is supposed to recognize AI-generated texts, but the developers warn that it is far from reliable

Man with a magnifying glass examines pages.
Letting ChatGPT do your homework for you? OpenAI, the company behind the hyped AI chatbot, is making a free tool available that is supposed to recognize computer-generated text.

Despite all the enthusiasm, there are many reservations about the text robot ChatGPT. For example, it could help with cheating at school or university. A new tool is now supposed to distinguish AI-generated texts from content written by humans.

The makers of the writing software ChatGPT are trying to get a grip on the consequences of their invention. The developer company OpenAI has released a program that is supposed to tell whether a text was written by a human or a computer.

How does it work?

The «AI Text Classification» tool can be accessed here; it requires prior registration on the OpenAI website.

The developers point out the tool's limitations.

After checking, a text is classified as “very unlikely”, “unlikely”, “unclear if it is”, “possibly” or “likely” AI-generated.
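For illustration only, here is a minimal Python sketch of how such a graded verdict could be derived from a model's estimated probability that a text is AI-generated. The threshold values below are hypothetical and are not taken from OpenAI.

```python
def verdict(ai_probability: float) -> str:
    """Map an estimated probability that a text is AI-generated to the
    five labels mentioned in the article. The cut-off values here are
    hypothetical; they only illustrate the idea of a graded verdict."""
    if ai_probability < 0.10:
        return "very unlikely"
    if ai_probability < 0.45:
        return "unlikely"
    if ai_probability < 0.90:
        return "unclear if it is"
    if ai_probability < 0.98:
        return "possibly"
    return "likely"

print(verdict(0.30))  # -> "unlikely"
```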

When it believes it has found AI-generated text, the online tool displays a corresponding message.

The detection still works poorly, as OpenAI admitted in a blog post on Tuesday. In test runs, the software correctly identified texts written by a computer in only 26 percent of cases.

At the same time, however, nine percent of human-written texts were wrongly attributed to a machine. OpenAI therefore recommends not relying primarily on the classifier's verdict when judging a text.
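A small back-of-the-envelope calculation shows why these two figures make the verdict hard to rely on. The sketch below plugs the published rates (26 percent detection, 9 percent false alarms) into a hypothetical set of 100 essays; the assumed share of AI-written essays is invented for the example.

```python
# Hypothetical scenario: a teacher checks 100 essays, 20 of which were
# actually written with AI (the 20% share is an assumption, not a source figure).
total = 100
ai_written = 20
human_written = total - ai_written

true_positive_rate = 0.26   # 26% of AI texts correctly flagged (per OpenAI)
false_positive_rate = 0.09  # 9% of human texts wrongly flagged (per OpenAI)

flagged_ai = ai_written * true_positive_rate         # 5.2 essays
flagged_human = human_written * false_positive_rate  # 7.2 essays

precision = flagged_ai / (flagged_ai + flagged_human)
print(f"Flagged essays that are really AI-written: {precision:.0%}")  # ~42%
```

In this made-up scenario, more than half of the flagged essays would be false alarms, which is exactly why the verdict should not be the only piece of evidence.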

Can the tool be used to detect fraud?

ChatGPT is so good at mimicking human language that there are concerns it could be used, among other things, to cheat on schoolwork or to create large-scale disinformation campaigns.

In education in particular, there is debate about how text produced by the chatbot can be exposed. One thing seems certain: a classic plagiarism scanner, normally an effective way to check the authenticity of texts, will not get you very far. These scanners only check whether a text, or parts of it, already exists in other sources. ChatGPT, however, produces unique texts that have never been worded in exactly that way before.
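As a rough illustration of that limitation, the following sketch mimics what a classic plagiarism check does: it counts how many word sequences of a candidate text already appear verbatim in known sources. The corpus, the sample texts and the window length are made up for this example; freshly worded AI output would simply score zero overlap.

```python
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Split a text into overlapping n-word windows."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_share(candidate: str, known_sources: list[str], n: int = 5) -> float:
    """Fraction of the candidate's n-grams that appear verbatim in any known source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    known = set().union(*(ngrams(src, n) for src in known_sources))
    return len(cand & known) / len(cand)

corpus = ["the quick brown fox jumps over the lazy dog near the river bank"]
copied = "the quick brown fox jumps over the lazy dog"
rewritten = "a fast auburn fox leaps across a sleepy hound"

print(overlap_share(copied, corpus))     # 1.0: the passage exists verbatim elsewhere
print(overlap_share(rewritten, corpus))  # 0.0: unique wording goes undetected
```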

OpenAI's developers point out that their online tool is not suitable for unequivocally identifying texts generated by the AI. Accordingly, it is not suitable for exposing cheating on homework and the like.

“The results can be useful in deciding whether a document was created using AI, but should not be the only piece of evidence. The model was trained on human-written text from a variety of sources, which are not necessarily representative of all types of text written by humans.”

The AI text classification is intended to stimulate discussion about distinguishing human-written from AI-generated content, the developers said.

And what about texts written jointly by human and machine? The tool's effectiveness on such texts has not yet been studied in detail.

American journalists (BleepingComputer) have already tested the AI text classification and concluded that the results are largely inconclusive.

The success rate should increase as the tool is trained with even more data. It is currently not a reliable tool for detecting AI-generated content.

What about privacy?

The American company states that anyone who pastes a (copied) text into the OpenAI website for automatic checking, thereby sending it to the company's servers, agrees to its terms of use and data protection rules.

Why should AI-generated texts be treated with caution?

ChatGPT is AI-based software that has been trained on massive amounts of text and data to mimic human language. OpenAI made ChatGPT publicly available last year, sparking both admiration for the software's capabilities and concerns about fraud.

The system impresses above all with the linguistic quality of its answers. At the same time, users cannot rely on ChatGPT to answer truthfully and get the facts right. Critics are also bothered that the AI system cannot cite sources for its statements.

At OpenAI, there is also talk of a kind of digital watermark for ChatGPT's output that would be invisible to the human eye. Special verification software would then signal whether a text is AI-generated or not.
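The article does not explain how such a watermark would work, and OpenAI has not published its method. The toy sketch below therefore only illustrates one idea discussed in research, not OpenAI's approach: if a text generator secretly preferred words from a key-dependent “green list”, a verifier holding the same secret key could measure how unusually often words land on that list. The key and all values here are hypothetical.

```python
import hashlib

SECRET_KEY = "shared-secret"  # hypothetical key known to generator and verifier

def is_green(previous_word: str, word: str) -> bool:
    """Pseudo-randomly assign roughly half of all words to a 'green list'
    that depends on the previous word and the secret key."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{previous_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Share of word pairs that land on the green list; unwatermarked text
    should score near 0.5, watermarked text noticeably higher."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / (len(words) - 1)

print(green_fraction("some ordinary human sentence about the weather today"))
```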

Where’s Google?

The hype around ChatGPT is now also scaring the competition. Google's parent company Alphabet in particular sees it as a serious threat to its own business model. Google, too, has been developing software for years that can write and speak like a human, but has not yet released it.

Now the internet company is having employees test a chatbot that works similarly to ChatGPT, broadcaster CNBC reported on Wednesday evening. According to an internal email, finding an answer to ChatGPT is a priority. Google is also experimenting with a question-and-answer version of its internet search engine.

Sources

(dsc/sda/awp/dpa)


Source: Watson
