Taylor Swift Is Living Every Woman’s AI Porn Nightmare

Deepfake nudes of the pop star are appearing all over social media. We all saw this coming.

By Janus Rose | NEW YORK, US | 26 January 2024, 5:50am
https://www.vice.com/en/article/qjvajd/taylor-swift-is-living-every-womans-ai-porn-nightmare


AI-generated nudes of Taylor Swift are appearing all over the internet, and there’s no sign they will stop anytime soon.

The images are spreading across social media platforms after seemingly being 
posted to a celebrity nudes site, which placed its watermark in the corner of 
each image. And as always, the tech companies behind those platforms are 
struggling to crack down on the abuse.

On X—which used to be called Twitter before it was bought by billionaire 
edgelord Elon Musk—the account that initially posted the AI nudes has been
suspended. But the images are still widely available via a quick search of the 
platform—as are various “nudify” apps that allow users to virtually undress 
women with generative AI. On Thursday afternoon, the Swift images were still 
being shared widely by various accounts with blue checks—a meaningless label 
that used to indicate verified accounts, but is now given to anyone who pays to 
subscribe to Musk’s platform.

This is not especially surprising given that Musk has virtually eliminated the 
platform’s moderation staff since buying the company. Other platforms have followed suit, cutting the staff responsible for combating hate speech and misinformation. X did not respond to Motherboard's request for comment
beyond its automatic boilerplate email reply.

Reddit, which banned nonconsensual nude deepfakes after they initially proliferated there in 2018, has also been taking down posts and sanctioning users who share the images, a spokesperson told Motherboard. The site uses a
combination of automated tools and human review to accomplish this; it appears 
to have largely worked, although Motherboard found examples of the images being 
shared on small subreddits. The images were also reportedly circulating on 
Instagram; Motherboard couldn't find examples of the viral AI nudes in a 
search, but did uncover other pornographic, non-nude deepfakes featuring Swift. 
Meta said in a statement that it bans sexually explicit images on its platforms and enforces the ban with automated tools.

“This content violates our policies and we’re removing it from our platforms 
and taking action against accounts that posted it,” the Meta spokesperson said. 
“We’re continuing to monitor and if we identify any additional violating 
content we’ll remove it and take appropriate action.”

This was all easy to see coming. Taylor Swift is probably the single most 
visible woman of this century, so it makes sense the chart-topping pop star 
would be the most obvious target for AI-generated nastiness. Last month, a 
viral TikTok scam imitated Swift’s voice and likeness to advertise a fake offer 
for free cookware sets. The Swift brand is simply too big and too ubiquitous not to be defiled by mouth-breathing internet goblins who get off on publicly fantasizing about having sex with female celebrities.

This doesn’t bode well for other, less-famous women and femme-presenting 
people, either. In many ways, this is a nightmare scenario for anyone whose body is routinely sexualized and exploited—and especially for the teenagers most likely to be harmed by AI nudes. Most recently, teenage girls in New Jersey reported that bullies had begun spreading AI-generated nudes of them at their school. And there have been various other incidents where abusers have
used “nudify” apps to generate explicit pictures of classmates and online 
influencers.

The sad but entirely predictable proliferation of nonconsensual AI porn is
just one obvious consequence of the rise of AI-generated content, enabled by 
large corporations which profit from AI tools with virtually zero 
accountability. Indeed, deepfakes originated explicitly to create AI porn of 
real women, a malignant strain of the technology's DNA that has yet to be 
excised.

Companies like OpenAI, Meta, and Microsoft have attempted to mitigate the 
production of harmful content with filters that block users from generating 
abusive and illegal stuff. But this has proven to be a cat-and-mouse game, 
largely because the systems themselves are trained on billions of 
nonconsensually obtained images scraped from the internet. For example, one of the largest datasets used to train AI systems, LAION, was recently found to contain over 3,000 explicit images of child sexual abuse. This was met with a
complete lack of surprise from AI researchers, who have been warning about this 
for years. In addition, AI models are far from scarce, and it's easy for 
someone with a bit of know-how to tweak an existing one to produce all kinds of 
abusive and illegal images.

More recently, lawmakers across the US have proposed bills that would 
explicitly make it illegal to generate nonconsensual AI nudes. But experts note 
the bills seem mostly geared toward protecting the intellectual property rights 
of rich celebrities like Swift, and it remains to be seen whether these laws 
would do anything to protect the rest of us.

Jordan Pearson contributed reporting to this article.
