Deepfakes are the evolution of fake news and are just as dangerous

Deepfakes could be fake news on steroids.

As we’re well aware, establishing what is real and what is fake online has become increasingly difficult over the last few years.

As social media has grown, so too has the ability to deliberately spread false information to a potential global audience.

As Winston Churchill is falsely (and ironically) credited with saying, “A lie can travel halfway around the world before the truth has put its boots on.”

Indeed, philosophers will remind us that often there is no objective truth, merely different interpretations of events.

Epistemological definitions aside, it looks like the pinnacle of ‘fake news’ is behind us.

That is to say, the articles that contain not an ounce of truth and which are designed to manipulate the network effect of social media and psychology of social sharing are on the wane.

Facebook, Twitter et al are aware of the ramifications of fake news and of the prominent role their platforms played – and continue to play – in amplifying it.

The introduction of new features by Facebook to stop fake news from spreading on its platform seems to be working, according to a recent independent report [pdf] by Stanford University.

The same report says Twitter’s attempts at combatting fake news have not been as effective.

Nevertheless, it seems that ‘The Great Fake News Era’ of 2015-18 is coming to a slow and drawn-out end as people become savvier about what they share and as the social media giants continue to bow to pressure from governments around the world.

Deepfake technology: Fake news 2.0

In the era of manipulated images and declining-but-still-very-present fake news, there is a new kind of technology on the horizon – one that could be more problematic than the other two combined.

Deepfake technology uses machine learning to produce or alter video so that it appears to show something that never happened.

The term deepfake is a blend of ‘deep learning’ and ‘fake’, and this kind of manipulation has been used in Hollywood studios for years.

Perhaps one of the earliest uses (and hence the leading image in this article) was in the film Forrest Gump when the hero of the story met John F. Kennedy and alerted him to the fact that he had to “go pee.”

Superman, Star Wars and many other films since have used this technology to enhance the cinematic experience.

Like most technology, the barriers to entry are coming down as consumer technology becomes more powerful and begins integrating with machine learning algorithms.

Video is usually more difficult to manipulate than photographs because, unlike a photograph, there are multiple frames to edit.

Of course, video has never been completely manipulation-proof. Anyone who knows how to use editing software can chop a clip together to make it seem as though someone said something they didn’t. This still goes on online regularly.

But to actually put words in the mouth of another person who is face-forward on camera was exclusively reserved for the CGI studios of Hollywood. Not anymore.

Deepfakes explained in a nutshell

Remember a couple of years ago when everyone got infatuated with Snapchat filters, especially the dog filter that gave you floppy ears, a snout and a particularly long tongue?

Those filters are a form of deepfake technology, which comes in two varieties:

  • Superimposing someone’s face onto a body so it looks like they did something they never did
  • Taking a speech and manipulating it onto an unsuspecting person’s face so it looks like they said it

Deepfake technology has become fully automated. To make a deepfake video, all that’s required is a selection of images of the face being replaced and of the face being superimposed.

This has lowered the barrier to anyone using the technology, and if the person being deepfaked is famous, a quick image search online provides all the headshots needed.
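To give a feel for the superimposition step described above, here’s a deliberately simplified sketch in Python. Real deepfake tools use trained neural networks plus face detection and alignment; this toy version just resizes one “face” image and blends it into a region of a video frame. The function name, the bounding-box format and the toy arrays are all invented for illustration.

```python
import numpy as np

def superimpose_face(target, source_face, box, alpha=1.0):
    """Paste source_face into a bounding box of the target frame.

    target: H x W x 3 array (one video frame)
    source_face: h x w x 3 array (a headshot crop)
    box: (top, left, height, width) region to cover
    alpha: blend weight; real tools feather edges per pixel
    """
    top, left, h, w = box
    # Nearest-neighbour resize of the source face to the box size
    rows = np.arange(h) * source_face.shape[0] // h
    cols = np.arange(w) * source_face.shape[1] // w
    resized = source_face[rows][:, cols].astype(float)
    out = target.astype(float).copy()
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * resized + (1 - alpha) * region
    return out.astype(target.dtype)

# Toy 8x8 black frame and a flat grey "face"
frame = np.zeros((8, 8, 3), dtype=np.uint8)
face = np.full((4, 4, 3), 200, dtype=np.uint8)
result = superimpose_face(frame, face, box=(2, 2, 4, 4))
```

An automated pipeline would simply repeat this for every frame of the video, which is why only a stack of source headshots is needed.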

Deepfake technology is being pioneered by the porn industry

Much as online and streaming video were pioneered by the porn industry early on, it’s probably no surprise that deepfake tech found its first widespread use there too.

Superimposing a celebrity’s face onto another person’s body is a common use case. This is an infringement of someone’s identity and is, of course, a type of fake news story in and of itself.

A number of porn sites claim to have banned deepfake videos from their platforms, as have Twitter and Reddit; the latter has removed two popular subreddits, r/Deepfakes and r/Celebfakes.

Deepfakes are arguably the biggest technological danger when it comes to politics and diplomacy

Given how easy it is to create a deepfake video, it’s likely they will be used with negative political intent. This could be one of the biggest technological dangers we face in politics.

Politicians are incredibly easy subjects for deepfakes because they are often standing face-forward on camera.

Deepfakes could be used to create fake sex scandals for political figures, business leaders, celebrities and well-known people in the public eye.

Or, in fact, they can be used as an excuse when someone gets caught doing something they shouldn’t.

“It was a set up to smear me using deepfake video!” they might say.

Here’s an example of a deepfake video created by BuzzFeed featuring actor and comedian Jordan Peele voicing Barack Obama.

“Photoshop for audio”

You might say that while Peele is a good impersonator of Obama, anyone who has heard the ex-president speak over the last ten years will know it’s not his voice.

While most level-headed people could differentiate between an impersonator’s voice and Obama’s, a new technology that complements deepfake video – and is perhaps equally dangerous – was announced a couple of years ago.

Like the Obama example, most deepfakes are created using an impersonation, which means you need someone like Jordan Peele on hand to provide the necessary skill.

This could change, however, following Adobe’s announcement of ‘Voco’, a new software program dubbed “Photoshop for audio” for its ability to replicate someone’s voice pitch-perfectly.

Using a learning algorithm to analyse speech patterns, Voco requires only 40 minutes of someone speaking to be able to recreate their voice.

Once that data is collected, a user can simply type the words they want and match the generated audio to a deepfake video.
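Adobe hasn’t published how Voco works internally, but the type-words-get-speech idea can be illustrated with a toy concatenative approach: look up a stored clip for each typed word and stitch the clips together. Everything here – the voice bank, the clips (plain number arrays standing in for waveforms) and the function name – is an invented placeholder, not Adobe’s method.

```python
import numpy as np

# Toy "voice bank": word -> waveform clip, standing in for snippets a
# real system might derive from ~40 minutes of recorded speech.
voice_bank = {
    "hello": np.array([0.1, 0.2, 0.1]),
    "world": np.array([0.3, 0.1]),
}

def synthesise(text, bank, pause_samples=2):
    """Naive concatenative synthesis: join each typed word's clip
    with short silences between words."""
    silence = np.zeros(pause_samples)
    clips = []
    for word in text.lower().split():
        if word not in bank:
            raise KeyError(f"no recording for {word!r}")
        clips.append(bank[word])
        clips.append(silence)
    return np.concatenate(clips[:-1])  # drop the trailing silence

audio = synthesise("hello world", voice_bank)
```

A system like Voco is far more sophisticated – it models how a speaker’s voice sounds and can generate words that were never recorded – but the input/output shape is the same: typed text in, that person’s voice out.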

Below is the Adobe Voco announcement including Obama impersonator Jordan Peele once again.

Spotting deepfake videos

Adobe Voco is still in the prototype phase, and when it comes to deepfake video there are often tell-tale glitches you can see, particularly when a subject moves their head or tilts it to an angle.

For now, and as far as we know, most people can identify a deepfake video when it is shown to them. The issue we face moving forward is that the software is getting better, making it more difficult to tell fact from fiction.

Technology in all its forms improves over time and it’s no different here. Soon it will be very difficult to tell a fake video from a real one, which will create all kinds of problems, especially when it comes to answering that all-important question: what is real?

In the UK, the use of deepfakes for so-called revenge porn could be banned by the government, but trying to ban something that spreads on the internet is often counterproductive.

Many people, rich, famous or otherwise, have discovered the power of the Streisand Effect when trying to prevent something from spreading online.

The future of deepfakes

Some have speculated that deepfake video could create more upheaval than the fake news sites of 2015-18.

They have the potential to cause chaos among nation states and governments, as well as business leaders, celebrities and everyday people like us.

Once the technology is good enough that you can’t tell the difference between a deepfake video and a real one, deepfakes can be weaponised to destroy reputations and create friction among communities large and small.

Thankfully, people are becoming more aware of how fake information spreads online. Among the more rational thinkers, there’s greater scepticism when it comes to believing what they read and see on social media and elsewhere.

A recent report found that young people are better at spotting fake news online than the older generation.

This is the generation that will likely have to deal with distinguishing a deepfake video from a real one, and it looks like they’re ready for it.

Let’s hope we all are.

Written by Stephen Davies

I’m an experienced strategist working at the intersection of public relations, digital marketing and social media based in London. You can work with me here or drop me a line here.

