On Wednesday, yet another deepfake scandal took over the internet. Inappropriate, explicit images of pop star Taylor Swift, created using artificial intelligence, began circulating on X. The source of these images is still unclear, but what is clear is that they violate Swift's privacy.
One of the explicit AI deepfake images, shared by @Real_Nafu, depicted the singer in a disturbing scenario in which she was being assaulted by Chiefs fans at a game. The image shocked netizens and sparked widespread outrage online. While the singer herself has not commented on the matter, her fans have come out in support of the artist and condemned such practices.
Her fans, known as Swifties, are actively defending her against the offensive AI-generated images that have surfaced online. They began flooding the phrase 'Taylor Swift AI', which was trending overnight, with unrelated posts in an effort to bury the images, while expressing their support for Swift, who has now become a victim of this disturbing trend.
One fan commented, "Man this is so wrong and inappropriate." Another said, "whoever making those taylor swift ai pictures going to hell." A third wrote, "Whoever is making this garbage needs to be arrested. What I saw is just absolutely repulsive, and this kind of s*it should be illegal." A fourth spoke out, saying, "we NEED to protect women from stuff like this."
While the use of AI has surged in recent years, not all of it has been put to good purposes, and its potential to harm and exploit others has raised serious ethical concerns. Many celebrities have already faced the brunt of its darker side. Scarlett Johansson took legal action against an AI app that used her identity in an online advertisement. Similarly, many other celebrities have been victims of AI-generated explicit images over the past few years, including Bella Thorne, Miley Cyrus, Jennifer Lawrence, Addison Rae, Demi Lovato, Kristen Stewart, Dakota Johnson, and Rihanna, to name a few.
Unfortunately, the legal system has not yet caught up with this emerging threat of AI-generated explicit content. According to reports by MSNBC, "there is no such [federal] crime that covers AI-generated nudes." This in itself is extremely concerning, given the extensive harm such content can inflict on a person's mental health, emotional well-being, and reputation in both personal and professional spheres.