Monday 21st July 1969 was a hot day in Washington D.C. The temperature peaked at 35°C that afternoon and thunderstorms were on the way. In the cool of the Oval Office, President Nixon was reading a prepared script into a live TV camera for the Six O’Clock News. It was a message that no one wanted to hear:
“Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace. These brave men, Neil Armstrong and Edwin Aldrin, know that there is no hope for their recovery.”
Except that he didn’t make the speech: the Apollo 11 mission was a success, the astronauts returned safely, and the ‘Contingency Speech’, as it was known, was never used. But if you want to see Nixon delivering the speech that never was, have a look at a recent experiment by researchers from the Center for Advanced Virtuality at the Massachusetts Institute of Technology (MIT): https://moondisaster.org
This is a good example of a new phenomenon set to hit your social media accounts soon – deepfakes. This one was an experiment by legitimate researchers at a prestigious university, but many others are less benign.
So, what are ‘Deepfakes’? And why should people who care about democracy be worried?
In simple terms, deepfake technology enables anyone with a computer and an internet connection to create realistic-looking photos and videos of people saying and doing things that they never actually said or did. The term blends ‘deep learning’ and ‘fake’; deepfakes first emerged on the internet in late 2017, powered by a then-new deep learning method known as Generative Adversarial Networks (GANs). A GAN pits two neural networks against each other: a generator that produces candidate images and a discriminator that tries to tell them apart from real ones. Trained on photographs, the generator learns to produce new photographs that look authentic to human observers.
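The adversarial idea can be sketched in a few lines of NumPy. This is an illustrative toy, not a deepfake model: instead of a deep network generating images, a two-parameter ‘generator’ learns to imitate a one-dimensional Gaussian, while a logistic-regression ‘discriminator’ tries to tell its output from real samples. All the numbers (learning rate, batch size, target distribution) are assumptions chosen for the demo.

```python
import numpy as np

# Toy GAN sketch (illustrative assumptions throughout): the "data" is a
# 1-D Gaussian rather than photographs, and both networks are tiny.
rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 4.0, 1.25   # distribution the generator must imitate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b: turns random noise into candidate samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c): scores how "real" a sample looks.
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    z = rng.normal(size=batch)
    fake = a * z + b
    real = rng.normal(REAL_MEAN, REAL_STD, batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    g_real = sigmoid(w * real + c) - 1.0   # d(-log D(real)) / d logit
    g_fake = sigmoid(w * fake + c)         # d(-log(1 - D(fake))) / d logit
    w -= lr * np.mean(g_real * real + g_fake * fake)
    c -= lr * np.mean(g_real + g_fake)

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    gx = (sigmoid(w * fake + c) - 1.0) * w  # gradient flowing back into fake
    a -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

gen_mean = float(np.mean(a * rng.normal(size=10000) + b))
print(f"generator output mean ~{gen_mean:.2f} (target {REAL_MEAN})")
```

Each update nudges the generator in whichever direction makes the discriminator more likely to score its output as real. Scaled up from two parameters to deep convolutional networks, that same feedback loop is what produces photorealistic fake faces.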
Have a look at the faces below – they all seem plausible enough, don’t they? None of them is real.
Unsurprisingly, as with many tech initiatives, deepfakes were initially seized upon by the always inventive and resourceful online porn industry. But they have come a long way since transposing A-list actors’ heads onto porn stars’ bodies.
One recent example from the US attracted media attention. In April this year, during an ad break in ESPN’s popular documentary series ‘The Last Dance’, State Farm Insurance debuted a TV commercial that appeared to show footage from 1998 of ESPN analyst Kenny Mayne making startlingly accurate predictions about 2020. It would have been a major ‘water cooler moment’ had the country not been in lockdown, but it was a deepfake designed to grab viewers’ attention – and it worked. The commercial startled, amused and fascinated ESPN’s viewers. What viewers should have felt, though, was worried.
The New York Times took the view that the ESPN commercial ‘Hints at Advertising’s Deepfake Future’ and that, with the pandemic shutting down production, companies would increasingly ask ad agencies to produce digitally altered footage.
But it goes further than untrustworthy TV advertising. Deepfakes have recently given us President Obama apparently using an expletive to describe President Trump, and Mark Zuckerberg ‘admitting’ that Facebook’s true goal is to manipulate and exploit its users.
But, many argue, deepfakes are really only an internet novelty. Not so, unfortunately. They are rapidly becoming a weapon of choice for destructive social and political forces, and their presence on the internet is growing fast. At the start of 2019, an estimated 7,964 deepfakes were online (most of them pornographic); by December that year, the number had almost doubled.
So, from the darkest corners of the internet, deepfake technology is starting to make its presence felt in the mainstream. The harm that could be done to political debate and democratic processes, if entire populations can be shown fake videos that they believe to be real, is obvious.
As with most ‘fake news’, timing is key. Releasing a damaging deepfake video 48 hours before an election could be perfectly timed to sow seeds of doubt while leaving no time for the truth to emerge. The deepfake could become a major social and political weapon of our age, weaponizing information in a way that takes full advantage of a social media ecosystem that prizes ‘traffic’ above everything else.
In a recent report, Washington’s Brookings Institution summed up the range of political and social dangers that deepfakes pose as: ‘distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.’
Contributing technical editor: Chhavi Chauhan, PhD, Ethical Artificial Intelligence Policy Expert.