Deepfakes: the dark side of AI entertainment
Original BDO Cyber digest blog provided by BDO Singapore.

Artificial Intelligence (AI) has revolutionised entertainment, bringing new life to movies, games and other media through stunning visual effects and animations. However, there is also a darker side to this technology. Deepfakes - AI-generated fake videos, images and audio - blur the line between real and fake media, raising major ethical and security concerns.
What are deepfakes?
At their core, deepfakes are AI-generated fake media content. Using techniques such as deep learning and neural networks, computers analyse large datasets - like videos, images and audio - to replicate human appearances, expressions and even voices with impressive accuracy. Deepfake technology can swap faces in videos, alter speech or even create entirely new conversations that look and sound real, making it increasingly difficult to distinguish truth from fiction.
While this AI technology can have creative applications, such as digitally de-aging actors in movies or creating virtual characters in games, deepfakes are unfortunately often used in ways that harm others.
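For readers curious about the mechanics, the sketch below illustrates, in heavily simplified form, the shared-encoder, per-identity-decoder autoencoder idea that is commonly used to explain face swapping: one encoder learns a compact representation of any face, while a separate decoder is trained for each person, so a frame of person A can be decoded ‘as’ person B. This is an illustrative assumption about how such tools are typically built, not a description of any particular product; the layer sizes, module names and random input are placeholders.

```python
# Simplified sketch of the shared-encoder / per-identity-decoder idea often
# used to explain face swapping. Layer sizes are arbitrary; a real system
# would add many more layers, losses and pre/post-processing steps.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for ONE identity from the shared latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One shared encoder, one decoder per identity. After training each decoder on
# footage of its own person, feeding person A's face through decoder_b yields
# A's pose and expression rendered with B's appearance - the "swap".
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_of_a))  # untrained here, so just noise
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```

The same basic recipe, scaled up and trained on large amounts of footage, is what makes the results convincing enough to be mistaken for real video.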
Deepfakes: what they are and how they’re made
Deepfakes are synthetic media designed to be indistinguishable from real people.
Examples of deepfakes:
• Face-swap apps
• A video of a celebrity or politician doing something they didn’t do
How deepfakes are made:
• Deepfakes use footage, pictures and audio recordings of real people to create a ‘map’ for generating fakes
Applications and risks:
• Hollywood uses deepfake technology to de-age or resurrect actors
• Scammers use the same tools to spread misinformation or extort people.
Adapted from https://us.norton.com/blog/emerging-threats/what-are-deepfakes
The dark side of deepfakes
While deepfakes were initially developed for entertainment, they are now widely used to deceive people and cause harm. Here are some of the major concerns associated with them:

Spreading false information: Deepfakes are powerful tools for spreading fake news. Fabricated videos of politicians or celebrities can make them appear to say or do things they never did, manipulating public opinion and promoting false information.
Identity theft: By replicating someone’s face or voice, deepfakes can facilitate identity theft. Criminals can create fake videos or audio clips of someone and use them for blackmail or fraud.
Reputation damage: Deepfakes can severely damage reputations. For instance, a fake video of a celebrity or a public official engaging in inappropriate behaviour can destroy careers and personal lives.
National security risks: Deepfakes can pose threats to national security. They could be used to impersonate government officials or issue fake orders, potentially destabilising public trust in institutions.
What are deepfakes used for?
• Social engineering
• Automated disinformation attacks
• Identity theft
• Financial fraud
• Scams and hoaxes
• Election manipulation
Adapted from https://www.fortinet.com/resources/cyberglossary/deepfake
Real-world examples
Deepfakes have already caused significant harm in various real-world scenarios:

Fake celebrity videos: Numerous fake videos have surfaced showing celebrities endorsing products or making statements they never actually made. These videos often spread quickly on social media, deceiving fans and damaging the celebrity’s reputation.
Political manipulation: During elections or political crises, deepfake videos of politicians delivering speeches or interviews have been used to manipulate public opinion.
Voice scams: Scammers have leveraged deepfake audio to mimic the voices of company executives, tricking employees into transferring money to fraudulent accounts.
How to spot a deepfake
Visual artifacts: Deepfakes often contain subtle visual glitches, like blurred edges around the face, odd lighting or inconsistent shadows (a rough, illustrative check for this kind of artifact is sketched after this list):
• Glasses may disappear or reflect differently
• Features may be positioned incorrectly or appear to move
• The person’s hair and skin may look blurry
• The audio does not match the video
• The background may not make sense
• The lighting looks unnatural or strange
Unnatural movements: Deepfake videos may not perfectly mimic natural movements, especially around the eyes or mouth
Audio inconsistencies: The audio in a deepfake may sound slightly off, with unusual pauses or mismatched lip movements
Source checking: If the content appears suspicious, comparing it with original or verified footage of the person can reveal discrepancies.
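As a rough illustration of the visual-artifact checks above, the toy script below (not a real detector) uses OpenCV to compare the sharpness of the detected face region with that of the whole frame, on the assumption that a pasted-in face can look noticeably blurrier than its surroundings. The file name and threshold are made up for demonstration, and a mismatch on its own proves nothing.

```python
# Toy "blurry face" check: compare sharpness (variance of the Laplacian) of the
# detected face region against the whole frame. Illustrative only - genuine
# deepfake detectors are far more sophisticated.
import cv2

def face_vs_frame_sharpness(image_path: str):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Classic Haar cascade face detector bundled with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # nothing to compare

    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]

    # Variance of the Laplacian is a common, crude sharpness measure.
    face_sharpness = cv2.Laplacian(face, cv2.CV_64F).var()
    frame_sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return face_sharpness, frame_sharpness

if __name__ == "__main__":
    result = face_vs_frame_sharpness("suspect_frame.jpg")  # hypothetical file
    if result is not None:
        face_s, frame_s = result
        print(f"face sharpness: {face_s:.1f}, whole frame: {frame_s:.1f}")
        if face_s < 0.5 * frame_s:  # arbitrary threshold, for illustration only
            print("Face is noticeably blurrier than its surroundings - worth a closer look.")
```

Simple cues like this are easy for newer deepfakes to defeat, which is why source checking and verified footage remain the most reliable habits for ordinary viewers.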
Case studies
Case study 1: Nancy Pelosi deepfake
A manipulated video of US House Speaker Nancy Pelosi was slowed down to make her appear intoxicated and slurring her words.
The video went viral, causing widespread misinformation and negatively impacting Pelosi’s public image.
Case study 2: Ukrainian politician deepfake scandal
A deepfake video of Ukrainian President Volodymyr Zelensky surfaced, falsely showing him ordering troops to surrender during the Russian invasion.
Authorities quickly debunked the video, but the damage was done as it briefly influenced public perception.
Case study 3: AI-generated voice scams
Scammers used AI-generated voice technology to impersonate CEOs in phone calls, deceiving employees into transferring money or disclosing sensitive information.
Use case examples of fraudulent deepfakes
Identity theft: Fraudsters attempt to mimic the images and voices of customers or staff to commit account takeovers and make unauthorised purchases
Crypto scam: Criminals create deepfake videos of company CEOs and advertise fake links falsely claiming to offer free cryptocurrency
Political campaigns: Bad actors run fake campaigns, using deepfake videos and images to sway public opinion.
How can we combat deepfakes?
Deepfake technology is evolving fast, but there are some ways to manage its misuse:

AI detection tools: New AI-powered tools are being developed to detect deepfakes. These tools look for inconsistencies in video or audio files that might go unnoticed, like unnatural facial movements (a simplified sketch of such a classifier appears at the end of this section).
Regulation and legislation: Governments are creating laws to address deepfakes. Some jurisdictions now have rules that penalise malicious deepfakes used for identity theft, fraud or defamation.
Public awareness: Raising awareness about deepfakes is essential. Individuals and institutions need to understand the risks and learn how to identify suspicious content.
Collaboration: Collaboration between tech companies, governments and researchers is crucial. Social media platforms like Facebook and YouTube are implementing measures to detect and remove harmful deepfakes.
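To give a flavour of what the AI detection tools mentioned above involve, the sketch below shows the bare skeleton of a frame-level ‘real vs fake’ classifier in PyTorch, trained here on dummy data. It is a simplified illustration under assumed architecture and data, not any particular vendor’s tool; production detectors combine much larger models with audio and temporal cues and careful evaluation.

```python
# Minimal sketch of an AI-based deepfake detector framed as a binary classifier
# over face crops. The architecture, data and labels are stand-ins for
# illustration only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 64 -> 32
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32 -> 16
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),                           # single "fakeness" logit
)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

# Dummy batch standing in for labelled face crops: 1.0 = fake, 0.0 = real.
frames = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

for _ in range(5):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = criterion(detector(frames), labels)
    loss.backward()
    optimizer.step()

# At inference time, a sigmoid over the logit gives a "probability of fake".
with torch.no_grad():
    score = torch.sigmoid(detector(frames[:1]))
print(f"estimated probability the frame is fake: {score.item():.2f}")
```

Because generation and detection keep leapfrogging each other, such classifiers need continual retraining, which is one reason regulation, awareness and collaboration matter alongside the technology.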
How BDO can help
Deepfakes highlight both the incredible potential and the serious risks of AI: they open up exciting possibilities for creativity but also pose real threats to individuals and society.

BDO’s experts can help businesses develop effective ways to manage this technology’s risks through detection tools, regulations and education. Please reach out to your local BDO firm to discuss how to work towards a safer and more trustworthy digital future.