In the future, Meta, Facebook's parent company, will require users to disclose when they publish AI-generated videos or audio files that appear deceptively real. Those who fail to do so could face consequences, Nick Clegg, Meta's president of global affairs, warned in a blog post on Tuesday.
According to the announcement, the company plans to label content created with the help of artificial intelligence across its services. In addition to Facebook, Meta also operates Instagram and WhatsApp.
The disclosure requirement is necessary because invisible watermarks are not yet as widely built into AI software for generating video and audio as they are into image generators, Clegg explained. At the same time, he acknowledged that such watermarks can be removed, and said Meta is working to make that harder. The goal is to embed the watermarks directly into the file-generation process so that they cannot be stripped out. The company is also developing technology to automatically detect AI-generated content.
A few weeks ago, automated calls featuring a deceptively realistic imitation of President Joe Biden's voice caused alarm in the US. The calls urged recipients not to participate in the Democratic Party primary in the state of New Hampshire. The incident fueled concerns that deceptively real AI fakes could be spread in the coming months to influence the outcome of the November presidential election.
Meta is part of an industry coalition working to develop standard technologies for labeling AI-generated files. On that basis, Meta plans to label such images in the Facebook, Instagram and Threads apps in all supported languages. The company also offers its own AI software for generating images from text prompts; the resulting files carry both visible marks and invisible watermarks. (sda/dpa)
Source: Watson

I’m Ella Sammie, an author specializing in the technology sector. I have been writing for 24 Instant News since 2020, and I am passionate about staying up to date with the latest developments in this ever-changing industry.