Though terrible, Swift’s deepfakes did perhaps more than anything else to raise awareness of the risks, and they seem to have galvanized tech companies and lawmakers into action.
“The screw has been turned,” says Henry Ajder, a generative AI expert who has studied deepfakes for nearly a decade. We are at an inflection point, he says, where pressure from lawmakers and awareness among consumers are so great that tech companies can’t ignore the problem anymore.
First, the good news. Last week Google said it is taking steps to keep explicit deepfakes from appearing in search results. The tech giant is making it easier for victims to request the removal of nonconsensual fake explicit imagery. When such a request is granted, it will also filter out explicit results on similar searches and remove duplicate images, which should keep the content from popping back up in the future. Google is also downranking search results that lead to explicit fake content: when someone searches for deepfakes and includes a person’s name, it will aim to surface high-quality, non-explicit content instead, such as relevant news articles.
This is a positive move, says Ajder. Google’s changes sharply reduce the visibility of nonconsensual deepfake pornography. “That means that people are going to have to work a lot harder to find it if they want to access it,” he says.
In January, I wrote about three ways we can fight nonconsensual explicit deepfakes. These included regulation; watermarks, which would help us detect whether something is AI-generated; and protective shields, which make it harder for attackers to use our images.
Eight months on, watermarks and protective shields remain experimental and unreliable, but the good news is that regulation has caught up a bit. For example, the UK has banned both the creation and distribution of nonconsensual explicit deepfakes. That decision led Mr DeepFakes, a popular site that distributes this kind of content, to block access for UK users, says Ajder.
The EU’s AI Act is now officially in force and could usher in some important changes around transparency. The law requires deepfake creators to clearly disclose that the material was created by AI. And in late July, the US Senate passed the Defiance Act, which gives victims a way to seek civil remedies for sexually explicit deepfakes. (This legislation still needs to clear many hurdles in the House to become law.)
But a lot more needs to be done. Google can clearly identify which websites are getting traffic, and it tries to remove deepfake sites from the top of search results, but it could go further. “Why aren’t they treating this like child pornography websites and just removing them entirely from searches where possible?” Ajder says. He also calls it a strange omission that Google’s announcement mentioned only deepfake images, not videos.