We recently attended the Society for Computers & Law event ‘Generative AI & Deepfakes: Understanding the Illusion’.
Among plenty of jokes about none of the presenters being deepfakes themselves, the event helped illuminate the Good, the Bad and the Ugly of this legal wild west (although the EU and UK are shining their sheriff's badges through the AI Act and the Online Safety Act).
The Good
Deepfakes are often associated with the negative aspects of Generative AI, as explored further below. The event did, however, remind us that the technology is not always used for nefarious purposes.
Using deep learning to generate fake video or audio (hence the portmanteau 'deepfake') has many good or creative applications. The best-known recent example is the multilingual skills the technology bestowed on David Beckham to aid the fight against malaria.
We may also want to create deepfakes of ourselves, so we can roam around in games with a photorealistic animated avatar, and perhaps even interact with Nigel Farage and Keir Starmer as they grief each other in Minecraft.
The Bad
The political parody of the Minecraft deepfake brings us close to the line where deepfakes become bad. Fake video and audio are powerful tools for disinformation, and deepfake technology makes them easier to produce. Deepfake audio of a candidate purporting to rig an election is considered just the tip of the iceberg for the technology's use in manipulating elections.
Politics isn't the only area of concern. We are all more vulnerable to financial scams and cyber breaches as a result of deepfakes. There was a call at the event for individuals in businesses to receive training in spotting such scams, much as most organisations already do with phishing scams today.
Scarier still is the use of the technology to produce pornographic material. The 2019 DeepTrace report's finding that 96% of deepfake content online was pornographic and non-consensual was often quoted at the event, and addressing this issue is an obvious priority.
The Ugly
The ugly truth is that there does not seem to be one simple answer to regulating deepfake technology and that we'll need to learn to live with its good and bad aspects.
The EU's AI Act and the UK's Online Safety Act add to a complex patchwork of IP, data protection, libel and criminal laws that can be used to address deepfake technology. The event highlighted that enforcing those laws will be difficult given the speed at which content is disseminated and the often hidden identities of those publishing it.
While regulators and public bodies have a role in enforcing those laws, living with deepfakes is likely to involve social media and video platforms, fact-checking bodies and individuals being more forensic about the content they host, analyse and believe.