
    We’re in the era of the AI-generated fake celebrity nude

Deepfake porn of stars is becoming more realistic and easier to produce, but the law is nowhere near catching up.


After the release of ChatGPT and Lensa in late 2022, and the public availability of the code behind AI models since, there’s been a rise in AI-generated images and videos across the internet. No longer just the plaything of Silicon Valley tech bros and computer nerds, compelling AI content – from generated selfies and fake popstar interviews or song covers to images of the Pope seemingly wearing a Moncler puffer and Selena Gomez at a Met Gala she never attended – is more ubiquitous than ever.

But not all of this AI-generated content is innocent, harmless fun. “This video is fake,” reads a rare tweet from Elite and Strange Way of Life star Manu Rios, responding to an AI video of him completely naked and pleasuring himself that went viral on Twitter. “[The] internet is a fucking scary and weird place,” he added.

Deepfake nudes of celebrities are nothing new. After the celebrity sex tape leaks of the 00s, when stolen videos of Pamela Anderson, Paris Hilton and Kim Kardashian made their way across dodgy internet links, the culture spread into entire blogs dedicated to shoddily edited celebrity porn, where stars’ faces, cut from movie stills or paparazzi shots, were digitally pasted onto the bodies of pornstars. It didn’t matter that they weren’t real; the fantasy was there. With the rise of social media – specifically less censored spaces such as Twitter and, previously, Tumblr – these images moved from the corners of the internet to a more global stage.

During ‘The Fappening’ of 2014 – a scandal in which many female celebrities’ iCloud accounts were hacked and their intimate photos released – deepfakes of some celebrities were shared into the mix, purporting to be among the real stolen images. In 2018, a Reddit board was found to have spent months using AI to edit Taylor Swift, Maisie Williams, Scarlett Johansson and others into porn before the platform introduced a rule banning what it called “involuntary pornography”. Just recently, an AI-generated Rolling Stone cover of the often-deepfaked Henry Cavill, with an oiled body and a very revealing singlet, came up on the feed of British TV presenter Lorraine Kelly, who reposted it with the caption “crikey”.

    “There is a whole ecosystem around tech and this harmful content which means no one is being held responsible.”

This problem doesn’t just affect celebrities, either. In 2018, an investigative journalist reported being both doxxed and edited into deepfake porn in an attempt to silence her during the Kathua rape case in Jammu and Kashmir, India. In the past few years, since-banned platforms such as DeepNude and a bot on the Telegram app have used AI to “undress” women in photos uploaded to them. After Megan Fox joked that all her AI-generated selfies on Lensa came out naked and overtly sexual, others reported that the app made “cursed images of a cartoon-like cleavage and tiny waist” even when the uploaded selfies showed nothing below the shoulders.

“The difference between old deepfakes and this generative AI technology is that it previously required a lot of technical skill,” says Melissa Heikkilä, senior AI reporter at MIT Technology Review. “Now, you have apps where you can put one photo of a person in and create any sort of video or put their face onto any sort of body. With social media like TikTok, it’s then a lot easier for these things to be shared and spread in a viral way.” A 2019 study found that of the 14,000 known deepfake videos on the internet at the time, 96% were porn. With the technology behind this content having only become more accessible and easier to use in the four years since, the problem is likely far larger now than it was then.

While legislation has been put forward in the past, little movement has been made. The AI image of the Pope, as well as fake images of Donald Trump being pinned to the ground and arrested, seems to have worried the rich and powerful, though – perhaps the realisation that the AI technology they created for their own advantage, often at the expense of paying creatives, could now be used by anyone on the internet to create incendiary images of the 1% too. While only three US states have any laws in place regarding doctored imagery online, in May 2023 New York Democratic congressman Joe Morelle introduced the Preventing Deepfakes of Intimate Images Act to the House floor, a bill that would make sharing non-consensually made AI-generated porn illegal and provide new legal avenues for those affected. In the UK, it was reported in June 2023 that the Prime Minister was considering legislation requiring AI-generated images and videos to be labelled as such, while the EU is calling on search engines and social media platforms to introduce labels before laws come into force.

“It’s difficult to regulate, because how do you track down who is making this stuff and who is spreading it illegally?” Melissa asks. Even if one country manages to ban AI-generated porn or enforces a deepfake-labelling policy, unless there is global legislation in place these images can still be made elsewhere and viewed worldwide. “Labelling is the regulation people are thinking about the most right now, but the bills are all still pending, and nothing is currently stopping this from happening.”

The suggested legislation also only targets the users of these platforms, as opposed to the companies allowing generative AI content to be made without limits. “Companies offering and hosting this technology have a lot of responsibility, and they’re not doing enough to filter these out,” Melissa says. “You can find deepfake porn so easily on the internet, and payment companies like Mastercard and Visa still allow payments to go through to these companies. There is a whole ecosystem around tech and this harmful content which means no one is being held responsible.” Some image-generating programs – such as DALL-E, Midjourney and Stable Diffusion – have implemented safeguards, removing porn from their training data, blocking certain requests and specific words, and scanning created images before they’re shown to the user, although there are still sites with no boundaries at all. Additionally, the code behind Stable Diffusion was made publicly available, meaning it can be downloaded and modified to create new programs without restrictions.

The depressing silver lining, though, is that while this issue has been facing women for years, the more the technology moves into the public sphere and affects celebrities and the powerful and prominent, the more likely it is that something will be done about it. “I think one day someone really powerful will, for example, sue a tech company for this, and that could really change things,” Melissa says. Money really does talk.

The responsibility and blame here should rest with the tech giants carelessly leading the charge in AI development under the guise of progress, without a thought for the damage it can cause to people’s lives. But we also have to remember that generative AI does not create images out of nothing; it feeds off the deep dark corners of creepy sites, the very public squares of social media, and a culture of entitlement to other people’s bodies that has proliferated online for decades. The last thing we should be doing – whether out of blind horniness or in an attempt to call it out – is continue to share these images non-consensually.
