X has very clear terms of use that apply to all users of the platform. The rules state, for example: “X prohibits synthetic, manipulated or out-of-context media that may mislead or confuse people and cause harm.” Nevertheless, in recent days pornographic content featuring pop singer Taylor Swift that should never have been published has spread across the platform.
Deepfakes of the 34-year-old circulated on X for several hours, showing her in sexually suggestive and explicit poses. The images were generated by artificial intelligence and posted from an account that has since been blocked. According to The Verge, the images circulated for some 19 hours and were viewed more than 27 million times and liked around 260,000 times before the account was blocked. It is not yet known exactly why the images were not taken down sooner.
X itself has not yet responded to the incident. However, a spokesperson for Meta told CNN: “The content violates the guidelines and will be removed. In addition, action is being taken against the accounts that posted it.”
Yet the images continue to spread. Some of the photos can also be found on other social media platforms such as Facebook and Reddit. “This is a prime example of how AI can cause harm online without sufficient guardrails to protect public spaces,” Ben Decker, head of digital research firm Memetica, told CNN. According to Decker, social media platforms do not have sophisticated plans in place to monitor all content, especially when artificial intelligence is involved.
According to various American media outlets, Taylor Swift is also aware of the deepfakes and has already taken legal action.
(sav)
Source: Watson
