ChatGPT will soon know if your daughter likes jellyfish – and remember your user data

The chatbot ChatGPT will be able to remember information about its users in the future. Developer OpenAI announced on Tuesday that the function will initially be tested with a small group.

For example, the software could remember that you have a daughter who likes jellyfish, OpenAI explained. If you then ask ChatGPT to design a birthday card for the child, a jellyfish wearing a party hat may appear in the picture. However, it will take some time before everyone can benefit from the new function, as it is initially available only to the small test group.

If you want ChatGPT to remember information for future conversations, you can ask the chatbot to do so. The software can also try on its own to pick up details about the user from conversations. “ChatGPT’s memory improves the more you use it,” OpenAI emphasizes.

Precautions planned

At the same time, the feature could raise new fears that the software knows too much about its users. OpenAI plans to take precautions: ChatGPT will not automatically remember sensitive information relating to health, for example, but only at the user’s request.

Users can also ask what the software knows about them and delete individual entries or everything at once. For conversations without personalization, there are temporary chats whose contents are not used for further training of the software. The memory function, which is intended to make the chatbot more useful, can also be turned off completely.

Benefits for companies

OpenAI also sees benefits for the memory function in business use. The software could remember, for example, the format in which you prefer to receive summaries of meetings at work. It could also memorize the style in which you write your texts and use it to make suggestions for wording.

ChatGPT is the AI chatbot that set off the artificial intelligence hype a year ago, with expectations ranging from a digital land of milk and honey for all to fears of humanity being wiped out. AI chatbots such as ChatGPT are trained on enormous amounts of information and can formulate texts at the language level of a human. They work by estimating, word by word, how a sentence should continue. One drawback is that the software can sometimes give completely wrong answers, even if it was trained only on correct information. (yam/sda/dpa)

Source: Watson

I'm Ella Sammie, an author specializing in the technology sector. I have been writing for 24 Instant News since 2020 and am passionate about staying up to date with the latest developments in this ever-changing industry.
