Sam Altman, head of ChatGPT maker OpenAI, sees a risk that artificial intelligence could be used to spread false information – and has spoken out in favor of strict regulation. If only because of the massive resources required, only a few companies will be able to pioneer the training of AI models, Altman said at a U.S. Senate hearing in Washington on Tuesday. Those companies, he argued, should be placed under close supervision.
Altman’s OpenAI sparked the current AI hype with its text bot ChatGPT and with DALL-E, software that can generate images from text descriptions.
ChatGPT formulates text by estimating, word by word, the most probable continuation of a sentence. One consequence of this procedure is that the software produces not only accurate information but also completely false information – and the user cannot tell the difference. This has raised fears that such systems could be used, for example, to produce and spread disinformation. Altman voiced this concern at the hearing as well.
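To make the word-by-word procedure concrete, here is a minimal, purely illustrative Python sketch of this kind of next-word generation. The probability table and the generate function below are invented for this example; ChatGPT’s actual model conditions on the entire preceding text and scores a vocabulary of many thousands of tokens, but the core loop – pick a plausible next word, append it, repeat – is the same:

```python
import random

# Toy next-word table with made-up probabilities. This stands in for a
# real language model, which would score a huge vocabulary conditioned
# on all of the text so far rather than on just the last word.
NEXT_WORD = {
    "the":    [("cat", 0.5), ("market", 0.3), ("study", 0.2)],
    "cat":    [("sat", 0.6), ("slept", 0.4)],
    "market": [("crashed", 0.5), ("recovered", 0.5)],
    "study":  [("shows", 1.0)],
    "shows":  [("that", 1.0)],
}

def generate(start: str, max_words: int = 6) -> str:
    """Generate text by repeatedly sampling a probable next word."""
    words = [start]
    for _ in range(max_words):
        candidates = NEXT_WORD.get(words[-1])
        if not candidates:
            break  # no known continuation; stop generating
        choices, weights = zip(*candidates)
        # Sample the next word in proportion to its probability.
        # Nothing here checks whether the growing sentence is true;
        # only whether each word is a statistically plausible successor.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the study shows that" or "the cat slept"
```

The sketch also illustrates the problem described above: the loop optimizes only for plausible-sounding continuations, so fluent but false output is a built-in possibility, not a rare glitch.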
Altman suggested creating a new government agency that could put AI models to the test. The agency would subject artificial intelligence to a series of safety tests, for example checking whether a system could propagate itself autonomously. Companies that fail to meet the prescribed standards would have their licenses revoked. AI systems should also be open to auditing by independent experts.
Altman acknowledged that AI technology could eliminate some jobs through automation in the future. At the same time, however, he said it has the potential to create “much better jobs.”
During the hearing before a Senate subcommittee, Altman did not rule out that OpenAI’s programs could one day also be offered with advertising instead of by subscription, as is currently the case. (sda/dpa)
Source: Watson

I’m Ella Sammie, an author specializing in the technology sector. I have been writing for 24 Instant News since 2020, and am passionate about staying up to date with the latest developments in this ever-changing industry.