A non-representative survey among friends shows that many people are now experimenting with their own chatbots, both personally and professionally.
It sounds tempting: such a chatbot can free users from tedious busy work and produce reasonably creative texts and images.
Where is the problem?
Recently, OpenAI went one step further: there is now a kind of app store for chatbots.
The GPT Store allows paying customers with a ChatGPT Plus subscription to use chatbots from other users and to publish their own. These bots, which build on ChatGPT’s capabilities, are called “GPTs,” and the third-party developers who create them are the “builders.”
However, just because the GPTs are accessible on OpenAI’s platform does not mean that they provide the same level of security and data protection as ChatGPT itself.
The good news: not all data entered during chats is accessible to third-party developers. In its data protection FAQ, OpenAI states that the chat content itself is largely protected.
In addition, builders can connect their own application programming interfaces (APIs) to their bots to extend functionality. OpenAI explicitly cautions: “Here, portions of your chats are shared with the third-party API provider, which is not subject to OpenAI’s data protection and security obligations.”
OpenAI also points out that it does not independently review the privacy and security practices of these developers, and it warns users clearly about this.
IT security expert Alastair Paterson, CEO of the American company Harmonic Security, published a blog post on the subject last week; the title alone warns of a wolf in sheep’s clothing.
In a statement to the American outlet Dark Reading, an OpenAI spokesperson sought to reassure users.
Of course, OpenAI is far from the first company with an app store. However, the question is whether the controls are as strict as those of Apple, Google and others.
In the two months since OpenAI launched customizable GPTs, more than 3 million new bots have been created. Publishing one appears to involve only “a very simple verification process,” Paterson points out.
The fact that users must report problematic chatbots to OpenAI so they can be monitored and removed if necessary illustrates a common attitude among Silicon Valley tech companies: protecting privacy and the security of user data is not given the same weight as rapid growth.
OpenAI consciously accepts the risk that ChatGPT technology will be misused. And we users are the guinea pigs, just as Google, Meta, and co. have successfully demonstrated before.
In this context, IT security expert Adenike Cosgrove from the company Proofpoint points out the specific dangers of AI-generated false information: “Cybercriminals can use AI tools to create convincing phishing emails and fake images and make fraudulent phone calls, which means they are even better at misleading their potential victims with social engineering attacks.”
Sources
- Harmonic Security: Wolf in Sheep’s Clothing? Security Implications of OpenAI’s GPT Store (January 11)
- darkreading.com: OpenAI’s new GPT Store may pose data security risks
Source: Watson

I’m Ella Sammie, an author specializing in the technology sector. I have been writing for 24 Instant News since 2020 and am passionate about staying up to date with the latest developments in this ever-changing industry.