
This is one of the biggest dangers of AI chatbots that you need to be aware of

Custom chatbots based on ChatGPT are all the rage. But there are significant risks in ‘GPTs’ created by third parties, as a California IT security expert warns.
Daniel Schurter

An informal, admittedly non-representative survey of friends shows that many people are now experimenting with chatbots of their own, for both personal and professional purposes.

The appeal is obvious: such a chatbot can free users from tedious “busy work” and produce more or less creative texts and images.

Where is the problem?

Recently, OpenAI went one step further: there is now a kind of app store for chatbots.

The GPT Store allows paying customers with a ChatGPT Plus subscription to use chatbots created by other users and to publish their own. These bots, which build on ChatGPT’s capabilities, are called “GPTs”, and the third-party developers who create them are “builders”.

However, just because the GPTs are accessible on OpenAI’s platform does not mean that they provide the same level of security and data protection as ChatGPT itself.

The good news: Not all data entered during chats is accessible to third-party developers. OpenAI states in its data protection FAQ that the chat content itself is largely protected.

The OpenAI website says:

“For now, the developers do not have access to the specific conversations with their GPTs to protect user privacy. However, OpenAI is considering future features that will provide builders with analysis and feedback mechanisms to improve their GPTs without compromising privacy.”

In addition, builders can connect their own application programming interfaces (APIs) to their bots to extend their functionality. On this point, OpenAI explicitly cautions: “Here, portions of your chats are shared with the third-party API provider, which is not subject to OpenAI’s data protection and security obligations.”

OpenAI points out that it does not independently review the privacy and security practices of these developers, and it gives users a clear warning:

“Only use APIs if you trust the provider.”
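
To make concrete what “connecting an API” means: the GPT forwards parts of the conversation as an ordinary HTTP request to a server the builder controls. The following minimal sketch, written in Python with only the standard library, illustrates this. The endpoint, port, payload fields and log file are hypothetical illustrations, not taken from any real GPT; the point is how little it takes for a builder to retain whatever the GPT sends.

    # Hypothetical sketch of a third-party "action" backend run by a builder.
    # Path, port and log file are illustrative, not a real API.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ActionHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the JSON payload the GPT sends when it calls the action.
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")

            # Nothing stops the operator from keeping everything that arrives.
            with open("received_chat_data.log", "a") as log:
                log.write(json.dumps(payload) + "\n")

            # Reply so the GPT can continue the conversation normally.
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ActionHandler).serve_forever()

Whether such a payload is merely processed, logged or passed on is entirely up to the operator of that server. That gap is exactly what OpenAI’s warning refers to.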

IT security expert Alastair Paterson, CEO of the American company Harmonic Security, published a blog post on the subject last week. In its very title, he warns of a wolf in sheep’s clothing. He writes:

“If I were an attacker, I could create an app that asks you to upload documents, presentations, code, or PDFs and it might look relatively harmless. The app could even encourage you to post customer data, intellectual property or other sensitive material, which could then be used against employees or companies.”
About the expert
Alastair Paterson is a California-based IT entrepreneur in the cybersecurity sector. He is currently CEO of Harmonic Security, which he co-founded in 2023 “to help companies accelerate AI adoption without worrying about the security and privacy of their sensitive data.” Before that, in 2011, he co-founded the cybersecurity company Digital Shadows, whose “digital risk protection” software is designed to help companies mitigate reputation-damaging content on the internet and other cyber risks.

In a statement to the American outlet Dark Reading, an OpenAI spokesperson sought to reassure users:

“To ensure GPTs meet our guidelines, we have introduced a new review system in addition to the existing safety measures built into our products. The review process includes both human and automated review. Users can also report GPTs.”

Of course, OpenAI is far from the first company with an app store. However, the question is whether the controls are as strict as those of Apple, Google and others.

In the two months since OpenAI launched customizable GPTs, more than 3 million new bots have been created. Publishing an app appears to require only “a very simple verification process”, Paterson points out.

The fact that users must report problematic chatbots to OpenAI so they can be monitored and removed if necessary illustrates a common attitude among Silicon Valley tech companies: protecting privacy and the security of user data is not given the same weight as rapid growth.

OpenAI consciously accepts the risk that its ChatGPT technology will be misused, and we users are the guinea pigs. Google, Meta and co. have successfully run the same playbook before.

AI-generated disinformation
Cyberattacks, misinformation and disinformation are increasingly causing problems for the economy and are developing into a global challenge. This is evident from this year’s “Global Risks Report 2024”, which the World Economic Forum (WEF) published together with Zurich Insurance. The report is based on a survey of approximately 1,400 experts as well as political and industry leaders.

In this context, IT security expert Adenike Cosgrove from the company Proofpoint points out the specific dangers of AI-generated false information: “Cybercriminals can use AI tools to create convincing phishing emails and fake images and to make fraudulent phone calls, which means they are even better at misleading their potential victims with social engineering attacks.”

Sources

  • Harmonic Security: Wolf in Sheep’s Clothing? Security Implications of OpenAI’s GPT Store (January 11)
  • darkreading.com: OpenAI’s new GPT Store may pose data security risks


Source: Watson
