In the US, major companies are blocking ChatGPT: now Novartis, Swiss Post and Co. are reacting too.

Is the AI service that writes entire texts a welcome relief, or a trap that could siphon off sensitive company information? A survey shows how large Swiss companies are dealing with it.
Benjamin Weinmann / ch media

There are tasks like these: “Draft a new employment contract for this manager, including a bonus of 20,000 francs.” Or: “Write a response to this internal sexual harassment complaint.” Or: “Write the termination notice for this employee due to poor performance.”

The ChatGPT service, which has quickly become as famous as it is notorious, takes on any writing task assigned to it, with sometimes better, sometimes worse results. Despite the still uneven quality, the artificial intelligence from the company OpenAI is a godsend in everyday work for people who struggle to write texts.

US banks block the service

The question is: can you entrust everything to the online program, including internal trade secrets and sensitive personnel information? And can the resulting texts even be taken seriously? In the US, several well-known large companies answer these questions with a no. As the news agency Bloomberg reports, citing an analysis by the market research firm Gartner, several Wall Street firms such as Bank of America, Goldman Sachs, Wells Fargo and Citigroup have already blocked the chatbot. The fear that employees looking to save themselves some writing could leak sensitive information seems too great.


According to the Gartner survey, only 3 percent of the companies polled have taken such a restrictive stance. However, more than half of them are in the process of drawing up new guidelines for dealing with ChatGPT.

And in Switzerland? CH Media asked several major Swiss companies how they are responding to the intelligent application, which, according to a survey by the social media platform Fishbowl, is already being used by 40 percent of employees in the US to write emails or reports – often without telling their superiors.

ZKB does not want to use the chatbot

The result: the topic is on the table at the companies contacted – from Coop and Migros to Novartis, Roche, Swisscom, SBB, Swiss Post and Swiss. In some cases, measures have already been taken.


The Zürcher Kantonalbank is taking a restrictive line. The bank does not disclose which apps and websites it blocks, says spokesman Marco Metzler. But: “ChatGPT is not used for business purposes at the bank and is not available for that purpose.” The service is therefore not accessible to ZKB employees.

Coop also seems skeptical. “The topic is currently being discussed intensively internally,” says spokesperson Caspar Frey. So-called cloud solutions such as ChatGPT may only be used at the retailer “with approval,” according to Frey. In addition, employees are being made aware of the risks of such applications. Regardless of the provider, there are fundamental security concerns whenever data is processed in a cloud. Translations of business-related and confidential documents, for example, are therefore handled exclusively by Coop’s internal language service, says Frey.

Swiss Post tinkers with ChatGPT

Nestlé, UBS, Logitech and Swiss Re declined to comment on ChatGPT at all. Other companies are more ambivalent. Pharmaceutical giant Novartis, for example, has issued guidelines to all employees on the responsible use of online tools such as ChatGPT. “Employees are asked not to enter any personal, business or confidential information into ChatGPT and other AI tools,” says Novartis spokesperson Satoshi Sugimoto.


It sounds similar at the Lufthansa subsidiary Swiss: “The internal guidelines expressly prohibit entering sensitive and customer-related information,” says airline spokesman Michael Pelzer.

In some cases, however, companies are quite open to the chatbot. Of course there are instructions not to enter internal and confidential data outside the company’s own systems, says Swiss Post spokesman Stefan Dauner. But: “We encourage employees to try out such applications to find out how they can use them to increase their own productivity and the quality of their work,” says Dauner.

To ensure that this balancing act, which is delicate when it comes to data protection, succeeds, Swiss Post provides its staff with appropriate training. “We are convinced that this approach makes more sense than banning new applications outright.”

Roche relies on technical controls

It sounds similar at ABB and Roche. Artificial intelligence systems can be a useful tool for employees to create content, access information and get quick answers to their questions, says Roche spokeswoman Sileia Urech. “We are actively experimenting with the use of services such as ChatGPT, Google Bard and similar technologies.”

The pharmaceutical company also relies on guidelines that make staff aware of data protection issues. But it is not counting on those alone: “We have put technical controls in place to prevent the accidental use of sensitive data in unauthorized tools.”

HSG professor Isabelle Wildhaber advocates guidelines, but also training in the correct use of ChatGPT.

According to Isabelle Wildhaber, professor of private and corporate law at the University of St. Gallen, it took a while for Swiss companies to grasp the scope of ChatGPT. “Now most of them seem to be clear about the implications that artificial intelligence can have in day-to-day work.”

«… otherwise you’ll fall behind»

She advocates an open approach to ChatGPT. “Blocking makes no sense; then the application will simply be used on private mobile phones instead.” According to Wildhaber, training in its correct use is far more important.

Nevertheless, guidelines are certainly needed, because such programs use some of the information entered into them to keep evolving. She gives a purely hypothetical example: “If a Nestlé manager enters information about a new product innovation that is still classified as confidential, it could appear in the response to a corresponding question from a competitor such as Unilever.” It is therefore wise to be careful and handle the risks prudently. “But in the future you won’t be able to do without it, otherwise you’ll fall behind.” (aargauerzeitung.ch)

Should Swiss companies block access to ChatGPT for their employees due to security concerns?
ChatGPT: As an artificial intelligence, ChatGPT cannot access the internet or systems of Swiss companies unless they are explicitly made accessible. Therefore, there is no security risk to companies by using ChatGPT as a language model for internal purposes such as text generation. However, it is important to note that ChatGPT is an AI-based technology that can potentially be exploited due to its flexibility and openness. Companies must ensure that their employees use ChatGPT responsibly and ethically, especially when creating text that represents the company or contains sensitive information. It is therefore advisable to develop policies and training for the use of ChatGPT in the company to ensure that the technology is used in a meaningful and responsible manner. However, if a company is concerned that ChatGPT may be being used improperly by its employees, it can limit or block access to the technology.

Source: Watson

