Taylor Swift and the Rise of Artificial Intelligence
In early February, nude photos of Taylor Swift were posted to X (formerly known as Twitter). The usual argument followed: women shouldn’t take nude pictures of themselves, and, if they do, they shouldn’t share them in a way that could be hacked or leaked. The catch was that these weren’t actually pictures of Taylor Swift. Someone used Artificial Intelligence (AI) to create these photos and then shared them as if they were the real thing.
Artificial intelligence is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” But AI doesn’t simply exist; it is created by humans, and it must analyze an enormous amount of material in order to “learn.” This source material is known as “training data” and is often obtained through a process called “data scraping.” Data scraping is exactly what it sounds like: the process of collecting data or information from websites. AI uses that data to give us results when we ask it a question or give it a prompt.
There is plenty of debate about how AI should be used. As with any new technology, there are pros and cons. AI can be a helpful tool, from spotting fake news to saving the bees. People with intellectual property to protect, however, often see AI as theft. And those of us who work with people who have been exploited online see a threat to the safety and privacy of our children and ourselves.
Take, for example, the nude images of Taylor Swift. Plenty of photos of both Taylor Swift and nude women are available online, so an AI platform could easily find and combine those images to create something new when prompted. Taylor Swift never consensually took or shared nude photos of herself, yet nude images of “Taylor Swift” now exist on the internet without her consent. And she is not the only victim here. The person whose body was used in these images – also without their consent – is a victim, too.
This is known as online sexual abuse and includes any type of sexual harassment, exploitation, or abuse that takes place online. It can include sending unwanted nude photos or videos, performing sexual acts on a webcam without the consent of the viewer, sharing private images or videos without consent, or grooming children for abuse on or offline. Even though online sexual abuse doesn’t take place in person, that does not make it any less harmful. Additionally, each time those images and videos are shared online, the people depicted are revictimized.
Just as there are endless photos of Taylor Swift online, there are endless photos of children. Unfortunately, the same is true of Child Sexual Abuse Material (CSAM), what most of us know as child pornography. What AI did to create nude photos of Taylor Swift, it can also do to create images of children being sexually abused.
Because artificially created images are assembled from material that already exists, AI platforms that generate CSAM had to have been trained on photos of real children being abused. Data scraping should differentiate between ethical and unethical sources of material, but it often doesn’t. Governments around the world are already paying close attention to this issue and attempting to hold AI creators accountable. In the meantime, though, children are being exploited every single day in a new and terrible way.
A photo of any child’s face can be added to the body of another to create a very realistic picture of a child being subjected to sexual abuse. And once those images are on the internet, they are unlikely to come down. The FBI has already warned of an increase in AI-generated sextortion schemes. A quick Google search will show that arrests for possession of AI-generated CSAM were made in multiple countries in 2023, with convictions everywhere from Australia to Charlotte, NC.
We cannot stop AI or internet predators, but we can give them less material to work with. Learning and teaching internet safety is key. First Light strongly recommends exercising caution when posting photos of your children on social media. If your children are online, teaching them good habits can also help keep them safe. We’ve created a list of safety tips for you to share and use with your kids.
While we may not be able to stop online predators, we can do our best to keep our children’s images out of the hands of people who would use AI to manipulate them in harmful ways. If you would like to talk to someone about keeping your children safe online, please reach out to us. And if you or someone you know has experienced this type of abuse and you need to talk, our helpline is available 24/7 at 864.231.7273.