What The Taylor Swift AI Picture Controversy Tells Us About Tech

31-01-2024
#tech

Well this f***king sucks to write about.

As a fan of both the NFL and Taylor Swift who also works for a company that helps produce synthetic media (I like to stay unpredictable), I was unfortunately front row for the latest, and probably darkest, controversy around AI. 

As a fan of being a decent human being, I decided to write about it to share where I think tech should improve, and also to share some resources that may help in the fight against AI deepfake pornography.

If you missed this whole debacle and are shocked by the random phrase “deepfake pornography” at the end of that sentence there, let me catch you up.

Last week, The Platform Formerly Known As Twitter was flooded with AI-generated sexual images of Taylor Swift, obviously without Taylor's consent.

Given that 98% of all deepfakes (images that have been digitally manipulated to replace one person's likeness convincingly with that of another) are used for pornographic purposes, this is unfortunately not a huge surprise. Weaponizing sexual degradation is an oft-selected method of attack for the kind of people who belong in 2024's version of Mos Eisley's cantina.

The hope (and I use this word loosely here, the situation is obviously horrific) is that someone with the cultural sway of Taylor Swift will be able to enact real change in the fight against non-consensual deepfakes.

But it's not enough to just hope and pray that Taylor Swift has enough clout to get the government to do its job. There are also a bunch of lessons tech should take away from this incident, and I wanted to start with one I haven't seen much talk about.

Content moderation is important and X is terrible at it

Now, this whole sordid affair isn’t solely X’s fault, but the spread of the images was largely happening on X, so it bears mentioning. 

Elon is famously anti content moderation, unless the content is making fun of or disagreeing with his personal view of ethics and morality, which, considering he once tried to trade a horse for sexual favours and called a man trying to rescue children a "pedo guy", feels more like a moving target than a steel-clad Bushido code of honour.

He also famously laid off a bunch of content moderators, believing them to be part of the "woke mind virus" and also probably the reason he has more children who don't speak to him than he has companies.

X was so slow at removing these images (one post racked up 45 million views before it was taken down) that fans took matters into their own hands and flooded X with positive Taylor Swift media to drown out the AI pictures.

I will talk more about the community effort a little later in the blog, but the salient point is that it should never have gotten to that point. People shouldn't be relying on community efforts in lieu of actual moderation, and adding more content to drown out bad content is a worse strategy for everyone than simply removing the bad content.

This also isn't a free speech or artistic licence issue. As we enter an era where anyone can create a pornographic image of anyone else with the click of a button and a lack of a soul, human-led content moderation is going to become more important, not less.

By lacking proper safeguards, Elon’s X allowed the damage to be done, and they should (and hopefully will) be held accountable.

So if X provided the platform, what about the source?

Synthetic media creators need safeguards.

The people responsible for generating these images weren't some cabal of hi-tech cyber warriors powered by undiagnosed personality disorders and cybernetic implants.

In fact, it's posited that they actually used pretty well-known AI image generators, which let them create hundreds of images in minutes.

As the march of AI continues ever upwards (or downwards depending on your views on what the coming years have in store), we are close to being able to create realistic naked images of anyone on the planet. 

Let that sink in.

A lot of the swamp scum on X who argue that these images somehow fall under artistic licence, and are no different to a drawing, are missing two things.

One is that even if it were a drawing, that's still really weird, dude; your mother should have raised you better than that.

Two, the speed and accuracy with which these images can be created is astonishing, and is a huge factor in why they are dangerous. We're now at the point where 34 million AI images are generated a day. It's akin to saying we don't need speed limits because a horse and cart can only go 20 miles an hour, whilst simultaneously building a supersonic car.

The speed, spread and accuracy of these kinds of images matter, and there is a good argument to be made that the human race just isn’t ready for the kind of power that freely distributed synthetic media creators wield.

Community support works.

I have made a fair few LOTR and Star Wars references in this blog, and whilst I recognise the seriousness of the topic, metaphors are the way I communicate because I have a hard time processing emotions and feelings outside of them. Probably need to look into that.

That being said, Swifties remind me of the Army of the Dead from Lord of the Rings (if that offends you, rest assured I built a Taylor Swift game for you to enjoy as penance).

A never-ending wave of relentless loyalty and admiration, they jumped to Taylor Swift's defence in a pretty breathtaking display of unity, and were about as successful as you could be whilst X's own DNA is fighting against your effort.

I would argue this level of support is a last line of defence kind of thing, rather than something that should be taken as a lesson. 

For example, if you are just a regular person, you wouldn’t have an army of millions of people able to flood social media for you, which brings me on to my next point.

Non-consensual deepfakes are a huge problem and have been for a while.

Taylor Swift is probably the most recognisable and famous person on the planet.

And this still happened to her.

Unfortunately, this has already been happening to girls and women around the world. 

There have been numerous cases where deepfakes have been employed to generate non-consensual pornography as a form of revenge or humiliation. We are only now hearing about it on such a widespread scale because Taylor Swift is such a prominent figure.

I originally believed that the biggest risk that AI posed was the embedding of pre-existing prejudices into our machines, but it appears that I was giving our species a tad too much credit there. 

The burgeoning danger is people using AI to degrade and attempt to humiliate their fellow people.

So what can we do?

People suck. Here are some resources that might help.

Support and bolster organisations like StopNCII.org

As I mentioned, it’s not just big celebrities who are targeted in this way with AI generated imagery. 

This class of attack falls under the category of Non-Consensual Intimate Image Abuse (NCIIA), and StopNCII.org are doing what they can to fight both the rise of deepfakes and other types of revenge porn.

Whilst providing a tonne of free resources, they also have a tool that creates "a digital fingerprint – called a hash – of the image(s)/video(s) on your device. A hash will be sent from your device, but not the image/video itself." Partner platforms can then use that hash to detect and remove matching images, without the image itself ever leaving your device.
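For the technically curious, here's roughly what that fingerprinting idea looks like. This is a minimal sketch using the open-source Pillow and imagehash Python libraries; StopNCII's real tool uses its own hashing, so treat this as an illustration of the concept, not their implementation.

```python
# Minimal sketch of client-side perceptual hashing, in the spirit of the
# StopNCII.org tool described above. Requires: pip install Pillow imagehash
# This is an illustration of the idea, not StopNCII's actual algorithm.

from PIL import Image
import imagehash

def fingerprint(path: str) -> str:
    """Compute a perceptual hash of the image at `path`.

    Unlike a cryptographic hash, a perceptual hash changes very little
    when an image is resized or re-compressed, so near-duplicates still
    match. Only this short string needs to leave the device, never the
    image itself.
    """
    with Image.open(path) as img:
        return str(imagehash.phash(img))

if __name__ == "__main__":
    # "my_photo.jpg" is a hypothetical local file for illustration.
    print(fingerprint("my_photo.jpg"))
```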

It's a tough fight, but organisations like this are super important, so definitely check them out, and if you are feeling particularly moved, you can donate to SWGfL, the parent charity that manages StopNCII.org.

Vote with your feet: leave X and any platform that shows no commitment to fighting NCIIA

Facebook, Instagram, TikTok, Reddit and Pornhub are partnered with StopNCII.org to receive hashed data and crawl their sites for NCIIA.

X isn’t.
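To sketch the other half of that pipeline: a partner platform holding those hashes can screen new uploads without ever seeing the original image. Everything below is hypothetical for illustration (the hash value and distance threshold are invented), not how any named platform actually implements it.

```python
# Hypothetical sketch of the partner-platform side: checking an upload
# against a set of reported hashes. The hash value and threshold below
# are invented for illustration.

from PIL import Image
import imagehash

# Hashes received from StopNCII-style reports (example value, not real).
REPORTED = {imagehash.hex_to_hash("f0e1d2c3b4a59687")}

def is_reported(upload_path: str, max_distance: int = 8) -> bool:
    """Flag an upload whose perceptual hash is close to a reported hash."""
    with Image.open(upload_path) as img:
        h = imagehash.phash(img)
    # Perceptual hashes are compared by Hamming distance rather than
    # strict equality, so re-encoded or lightly edited copies still match.
    return any(h - reported <= max_distance for reported in REPORTED)
```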

Elon Musk has made it abundantly clear he has no interest in content moderation, and the only way to combat that form of stubbornness is to vote with your feet and leave X. Half the user base already has, and those people (speaking from experience) are absolutely fine.

Taylor Swift’s fans showed what is possible when you work together to combat bad actors. Now we need to apply that to removing power from those like Musk who are trying to centralise it and bend it to their will. 

Any social media platform’s lifeblood is its user base, and there is great power in that. Use it wisely.

Be careful where you post photos

I’m lucky.

I work for a synthetic media company that takes this stuff seriously. 

Currently, with Colossyan you can only create synthetic media featuring avatars who willingly uploaded their likeness, or your own likeness.

Other synthetic media companies don't have this backstop: with a lot of them, you can upload any photo you want and create media based off of that photo.
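To make the idea of a backstop concrete, here's a hypothetical sketch of a consent gate sitting in front of a generation endpoint. Every name and field here is invented for illustration; this is not Colossyan's API or anyone else's.

```python
# Hypothetical consent gate in front of a synthetic media generator.
# All names and fields are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Likeness:
    likeness_id: str
    owner_user_id: str     # the person actually depicted
    consent_on_file: bool  # they signed a release to be used as an avatar

# A registry of likenesses the platform is allowed to render.
REGISTRY = {
    "avatar_001": Likeness("avatar_001", "user_42", consent_on_file=True),
}

def can_generate(likeness_id: str, requesting_user: str) -> bool:
    """Allow generation only for consented avatars or the requester's own likeness."""
    likeness = REGISTRY.get(likeness_id)
    if likeness is None:
        return False  # unknown likenesses (e.g. arbitrary uploads) are refused
    return likeness.consent_on_file or likeness.owner_user_id == requesting_user
```

The design point is simply that the default is refusal: anything not explicitly consented to never reaches the generator.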

We are probably heading towards a post-truth world, and users need to be extra vigilant about their digital likeness online. This may seem trivial now, but in the next five years it could be one of the most important parts of staying safe online.
