Technology is advancing by the day, and while many of the advancements are positive, some are cause for concern. In this case, we’re touching on deepfakes: realistic videos fabricated using AI algorithms. The technology draws on available online footage to blend audio and video, creating new content that never happened but looks believable. The number of these fabricated videos has grown steadily in recent years, and politics, especially election races, is a prime target.
DeepMedia, a company creating tools to detect synthetic media, estimates that around 500,000 voice and video deepfakes were shared on social media in 2023. What was once a costly process has now become possible with a few dollars and access to AI, and the result is a rising problem of convincing deepfakes.
Understanding How Deepfakes Manipulate Video and Audio
Our attention has been directed, quite rightly, to taking precautions on what we share. However, we need to use just as much caution when interpreting and evaluating what we see. Deepfakes work by using AI algorithms trained on existing online footage. With so many videos and audio clips now available online, AI has plenty of material to work with.
The game-changer is generative AI, which takes that training and produces entirely new footage. While some synthetic content has dangerous consequences, generative AI also has positive uses, including automated content production for marketing, routine task automation, and personalization. Even deepfakes have their benefits: they can help us envision a potential future so we can take action today.
Do Deepfakes and Misinformation Weaponize AI for Social Engineering?
Although Facebook banned deepfakes a few years ago, they remain a threat, particularly when it comes to misinformation, since one of the main motives for creating them is to manipulate public opinion. Outside of politics, one of the most common uses is fabricating celebrity endorsements. For example, a deepfake of Oprah Winfrey appeared in an advertisement for a manifestation guide.
Another aim is sowing confusion and misinformation by faking news footage. CNN, the BBC, and France24 have all been targets, including in an investment piece promoting a get-rich-quick scheme from a scam company. That last example also illustrates a further goal of deepfakes: fraud, in which scam companies misrepresent what they offer to take consumers’ money.
Protect Yourself From Deepfakes with a Critical Eye
It’s getting much tougher to spot deepfakes now that visual and audio mimicry are combined. While technology companies and legal entities need to take action, individuals have a responsibility to be more critical when consuming content. Thankfully, digital literacy guides are helping people understand what they’re facing and what to look for. Three excellent starting points are:
- When watching videos, look for inconsistencies, check the source, and apply extra skepticism and critical evaluation if something seems unbelievable.
- Look for supporting sources and check other outlets to see what’s being reported.
- If you have any doubts, don’t like, share, or engage with the content.
Awareness is the first step in preventing deepfakes from doing harm. A greater understanding of how this technology works is crucial: it means we can be more critical when we see synthetic media and get better at spotting attempts to manipulate us.
Spencer Hulse is the Editorial Director at Grit Daily. He is responsible for overseeing other editors and writers, day-to-day operations, and covering breaking news.