Is What You See Really What You Get?

Explore visual misinformation

The contents of this guide have been adapted from the Digital Enquirer Kit written by Tactical Tech and produced by GIZ.

Apps like Instagram, TikTok, YouTube, and WhatsApp give you the chance to find, post, and share photos and videos that matter to you. But when everyone can post anything (and label it however they like), there’s a lot of information to sort through and make sense of. When the information is visual, how can you be sure that what you see is reliable?

Photos and videos can be altered using sophisticated software and highly technical skills, but visual misinformation can also be very simple to create. For example, putting an inaccurate caption on a photo, or cropping a person or landmark out of it to change the context, are common and easy ways to create disinformation.

And as you’ve seen on the news, misinformation can have serious consequences: causing confusion during a pandemic, putting people’s lives at risk, and influencing the results of an election.

In this Data Detox, you’ll explore some types of visual misinformation (from the simple to the more complex) as well as learn how to verify what you see.

Visual information requires special care and attention. While it may not be obvious at first glance whether an image or video is misleading, you can learn about different types of visual misinformation to help you stay critical...

Let’s go!


Cheap Fakes: Simple Yet Effective

The most common form of visual misinformation is the so-called cheap fake (aka a “shallow fake”). Cheap fakes are misleading content that doesn’t require sophisticated technology: they can be created by putting the wrong headline or caption on a photo or video, or by using outdated visuals to illustrate a current event.

Here are three typical types of cheap fakes:

Madonna Tweet

Miscaptioned Images or Videos: A real but misleading image or video with explanatory text that falsely describes its context, origin, or meaning.

For example, politicians and celebrities used this photo to raise awareness of the fire in the Amazon in 2019—but it was actually taken in 1989. The focus shifted away from the disaster and onto the mis-dated photo. Mislabeled and out-of-context photos like this one can be a form of misinformation. Using them can discredit valid causes or cast doubt on legitimate claims.

Gates

Manipulated Images or Videos: The primary elements are true, but some details have been added or deleted to change the meaning.

In this example, someone digitally altered the photo by adding the text “Center for Global Human Population Reduction” to the Bill and Melinda Gates Foundation sign.

Cropped Images or Videos: People crop images and videos to change their original meaning and true context.

In this example, Ugandan climate activist Vanessa Nakate was cut out of an Associated Press photo with other activists. Can you imagine how removing Nakate can change the story told by the picture?

Click to view the images in larger resolutions: Amazon fire, Gates sign, climate activists.

Identify Deepfakes

Now that you know what cheap fakes are, explore another common form of visual misinformation known as deepfakes.

Deepfakes can take the form of videos, audio clips, and images that have been digitally altered, typically to replace someone’s face or movements or to alter their words.

GIF of Elon Musk baby

What’s wrong with this deepfake? Match the problem to the description:

  • The father’s nose is... (unusually full / pixelated / blurred at the tip)
  • The baby’s teeth are... (unusually full / pixelated / blurred at the tip)
  • The baby’s mouth looks... (unusually full / pixelated / blurred at the tip)

That deepfake was Elon Musk’s face overlaid on a baby’s. Now, compare it to the original:

Original gif of baby

Find out more: Check out The Glass Room’s Deepfake Lab to learn more about how deepfakes like this one of Elon Musk are made.

You don’t need to have advanced technical skills to make a deepfake. Popular apps like Instagram, Facebook, and Snapchat have integrated photo filters that allow you to swap faces with a friend, change your eye color, smooth your skin, or grow rabbit ears—to name a few. There are even apps that let you appear as someone else or as an animal while on a video conference. Deepfakes may not be perfect, but it can be very difficult to spot the subtle differences, especially in a video or image that is of low quality or just a few seconds long.

Deepfake Spotter Guide

Now that you know what a deepfake is, let’s look at some clues to help you quickly identify deepfakes, thanks to The Glass Room’s Deepfake Spotter Guide.

Deepfake Spotter

Skin Color Mismatch: Look out for a difference in skin tone between the mask and target face. The face seems to be covered by a layer of different colors, showing edges or spots.

Visible Edges: The edges of the mask are visible, either as a sharp or blurred edge surrounding the face.

Face Occlusion: When objects pass in front of the face, the mask distorts or covers the object.

Blurred Face: The mask is blurred. There is a difference in sharpness or resolution between the mask and the rest of the image/video.

Flicker Effect: There’s a flicker between the original and deepfake faces. The algorithm can’t recognize the face and stops creating the mask for a moment.

Wrong Perspective: The deepfake has a different perspective from the rest of the video, or the source and target video differ in focal length.

Profile Borders: The side view of the face seems wrong. The deepfake mask is broken, less detailed, or incorrectly aligned.

Mismatched Expressions: The expressions on the deepfake face don’t match the target face. Facial features look unnatural, blurry, or imperfectly aligned.

Blurred Backgrounds: The background is blurry. While it could be the lens focus, it could also hint at a filter or an edit.

If you spot at least two of these characteristics in an image or video you’re unsure about, you may have spotted a deepfake and can use research techniques to verify the details.

While the examples you’ve seen here have been light-hearted, most of the deepfakes on the internet are harmful, specifically targeting women. You can click to reveal some such stories below (content warning). Alternatively, you can skip this part and carry on below.

Bullying with multimedia

Deepfake technology is being used by cyberbullies, like in this 2021 report from the BBC of a woman who created deepfake images of young cheerleaders to harass them. According to Safer Schools UK, “Deepfakes have been used in cases of cyberbullying to deliberately mock, taunt, or inflict public embarrassment on victims. The novel appearance of these images may distract from the real issue that they can be used to bully or harass children and young people.”

Finding your face in places you never consented to...

A concern of women and activists for years has been how technology is used to harass and harm women. In late 2021, MIT Technology Review looked into an app that swaps women’s faces into pornographic videos with a single click. Declining to name the app for fear of inadvertently promoting it, they noted: “From the beginning, deepfakes, or AI-generated synthetic media, have primarily been used to create pornographic representations of women, who often find this psychologically devastating. [...] the repercussions can stay with victims for life.”

Be sure to stay vigilant in the face of visual information and ask critical questions.

You’re Not Alone: Software like Sensity.AI can help you verify visual information and debunk deepfakes. The Cyber Helpline and Revenge Porn Helpline offer victims of cybercrime and online harm immediate advice and resources. Chayn is another great toolkit to check out if you’re a survivor seeking support.

Think Creatively!

As deepfake technology advances, deepfakes become harder to spot from visual clues alone... so think creatively!

  • Be alert to contextual clues and do research for verification.
  • Cross-check with other credible sources before you share.

Think Outside-the-Box: One way to verify creatively is to look closer at the context or the claims. For example, if a video appears to show someone speaking at a conference, look up the list of speakers on the official conference website to see if the person in the video is actually listed. Or if the speaker is saying something surprising or provocative, do some research to see whether they’ve said the same or similar things in the past, or whether it seems out of character for them.
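Some cross-checking can even be done programmatically. If you can obtain a copy of the claimed original file, a cryptographic hash shows in one line whether a circulating copy is byte-for-byte identical; any edit at all produces a different fingerprint. A minimal Python sketch (the helper name and the placeholder bytes are illustrative, not from any particular tool):

```python
import hashlib


def file_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest of the raw file bytes.

    Identical bytes always yield identical fingerprints; even a
    one-pixel edit (or re-compression) changes the digest entirely.
    """
    return hashlib.sha256(data).hexdigest()


# Hypothetical comparison of a circulating copy against a known original:
original = b"...bytes of the verified original photo..."
circulating = b"...bytes of the photo you found online..."
print(file_fingerprint(original) == file_fingerprint(circulating))
```

One caveat: platforms routinely re-compress uploads, so a mismatch only tells you the bytes differ, not that someone deliberately manipulated the image. Treat a mismatch as a prompt for further research, not as proof of tampering.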

Don’t think it’s impossible to tackle visual misinformation; dive deeper and stay vigilant. Now that you know how to spot visual misinformation, learn to Verify a Photo’s Origins, read 6 Tips to Steer Clear of Misinformation Online, and find out how to tackle health misinformation in Health vs. Hoax: Immunize Yourself Against Health Misinformation Online.

Last updated on: 1/4/2022