Deepfakes and Synthetic Media: When Seeing Is No Longer Believing
In 2022, a video circulated showing Ukrainian President Volodymyr Zelenskyy telling his soldiers to surrender to Russia.[1] The video looked real—same voice, same mannerisms, same background. Except Zelenskyy never said it. It was a deepfake, and it was quickly debunked. But the next one might not be.
Welcome to the age where seeing is no longer believing. Descartes imagined an evil demon that could deceive you about everything. We've built something close: technology that can fabricate perfectly convincing evidence of events that never happened.
The Evolution of Synthetic Media
Deepfakes aren't new in concept—photo manipulation has existed since photography began. Stalin airbrushed political enemies out of photographs. Magazine covers have been retouched for decades. But something fundamental changed in the 2010s: the technology became accessible, automated, and nearly undetectable.
In 2017, a Reddit user posting under the handle "deepfakes" shared face-swapped celebrity videos made with deep learning.[4] The term "deepfake," a portmanteau of "deep learning" and "fake," spread from there. What started as a niche technical demonstration quickly became a crisis.
By 2024, anyone with a decent computer and freely available software could create convincing deepfakes. No special expertise required. The barrier to creating synthetic media collapsed from "requires a Hollywood special effects team" to "download this app."
How Deepfakes Work
The technology behind deepfakes is elegant and disturbing. One influential approach, the Generative Adversarial Network (GAN), pits two neural networks against each other: one generates fake content, the other tries to detect it. They compete in an arms race, each improving until the fakes become indistinguishable from reality.
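The adversarial objective can be made concrete with a toy calculation. This sketch (not a working GAN, just the standard loss arithmetic; the discriminator scores are made-up numbers) shows why training stops improving once the discriminator is reduced to guessing:

```python
import math

def bce_d_loss(d_real, d_fake):
    """Discriminator loss (binary cross-entropy): reward high
    scores on real samples and low scores on generated ones."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def bce_g_loss(d_fake):
    """Generator loss (non-saturating form): reward fooling the
    discriminator into scoring a fake as real."""
    return -math.log(d_fake)

# Early in training: the discriminator easily separates real (0.9)
# from fake (0.1), so its loss is low and the generator's is high.
print(bce_d_loss(0.9, 0.1), bce_g_loss(0.1))

# At equilibrium both scores collapse to 0.5: the discriminator is
# guessing, meaning fakes are statistically indistinguishable.
print(bce_d_loss(0.5, 0.5), bce_g_loss(0.5))
```

The generator's loss falls as the discriminator's rises; the "arms race" is literally two loss functions pulling in opposite directions.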
Feed the system enough video of someone's face, and it learns to map their expressions, lighting, and movements. It can then transplant that face onto another person's body, matching every subtle movement. The result looks real because, in a sense, it is—it's learned from thousands of real examples.
Voice cloning works similarly. Feed an AI a few minutes of someone's speech (newer systems need only seconds), and it can generate new sentences in their voice, matching tone, cadence, and accent. The technology has become good enough to fool voice authentication systems designed to detect fraud.
Image generation has reached photorealistic quality. AI systems like Midjourney, DALL-E, and Stable Diffusion can create images of events that never happened, people who don't exist, and places that were never built. The images are often indistinguishable from photographs.
The Epistemic Crisis
Here's the problem: our entire system of knowledge relies on evidence. Courts use video recordings. Journalists verify events through photographs. Scientists document experiments with images. We believe things happened because we can see evidence they happened.
Deepfakes break this system. When any video could be fake, how do you prove anything? When any photograph could be AI-generated, what counts as evidence?
This isn't hypothetical. In 2019, a CEO was tricked into transferring $243,000 after receiving a phone call from someone using AI voice cloning to impersonate his boss.[2] The voice was perfect. The request seemed legitimate. The money was gone.
In 2021, researchers demonstrated that they could generate fake satellite imagery realistic enough to be difficult to distinguish from genuine data.[3] Imagine the implications: fake evidence of troop movements, fabricated environmental disasters, synthetic proof of events that never occurred.
We're entering an era where the default assumption must be skepticism. That video of a politician saying something outrageous? Could be fake. That photograph of a disaster? Might be AI-generated. That voice on the phone? Could be cloned.
The Liar's Dividend
There's a darker implication called the "liar's dividend": when everything can be faked, real evidence can be dismissed as fake.
A politician caught on video saying something damaging can claim it's a deepfake. A company accused of wrongdoing can dismiss photographic evidence as AI-generated. An abuser can claim recordings of their behavior are synthetic.
The existence of deepfake technology provides plausible deniability for real evidence. You don't need to prove something is fake—you just need to raise doubt. In a world where perfect fakes exist, perfect doubt exists too.
This is Descartes' evil demon in practice. The demon doesn't need to deceive you about everything—just enough to make you uncertain about anything. When you can't trust video, audio, or images, what can you trust?
The Authentication Arms Race
The response has been an arms race between creation and detection. Researchers develop tools to detect deepfakes by analyzing subtle artifacts: unnatural blinking patterns, inconsistent lighting, pixel-level anomalies. But as detection improves, so does generation.
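One of those artifacts, abnormal blinking, lends itself to a simple sketch. Early face-swap models were trained mostly on open-eyed photos, so their subjects blinked far less than real people. The heuristic below is hypothetical and simplified (the threshold, frame rate, and eye-aspect-ratio values are illustrative, not from any real detector), but it shows the shape of artifact-based detection:

```python
def count_blinks(ear_series, threshold=0.2):
    """Count closed-eye events in a per-frame eye-aspect-ratio
    (EAR) series; an EAR below the threshold means a closed eye."""
    blinks, eyes_closed = 0, False
    for ear in ear_series:
        if ear < threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= threshold:
            eyes_closed = False
    return blinks

def blink_rate_suspicious(ear_series, fps=30, min_blinks_per_min=5):
    """Flag a clip whose blink rate falls below a plausible human
    minimum (people typically blink roughly 15-20 times a minute)."""
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) / minutes < min_blinks_per_min

# Two synthetic 10-second clips: one with a single blink (~6/min),
# one with none at all (0/min).
real_like = [0.3] * 150 + [0.1] * 5 + [0.3] * 145
fake_like = [0.3] * 300
print(blink_rate_suspicious(real_like), blink_rate_suspicious(fake_like))
```

The catch, as the text notes, is that once a tell like this is published, the next generation of models is trained to eliminate it.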
Some platforms have implemented content authentication systems. Adobe's Content Credentials embeds signed metadata recording an image's provenance. Some news organizations have experimented with blockchain-based verification, and some cameras can cryptographically sign photos at capture.
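The core idea behind capture-time signing can be sketched in a few lines. Real provenance systems such as C2PA use public-key certificates and embedded manifests; this toy version uses a symmetric HMAC and a made-up device key purely to illustrate the mechanism, which is that any post-capture edit invalidates the signature:

```python
import hashlib
import hmac

# Hypothetical key burned into the camera at manufacture time.
DEVICE_KEY = b"secret-key-burned-into-the-camera"

def sign_capture(image_bytes):
    """The camera tags the exact captured bytes at the moment of capture."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes, tag):
    """Later, anyone holding the key can check the bytes are untouched."""
    return hmac.compare_digest(sign_capture(image_bytes), tag)

photo = b"\x89PNG...original pixel data"
tag = sign_capture(photo)

print(verify_capture(photo, tag))                # unmodified: True
print(verify_capture(photo + b"edited", tag))    # any change at all: False
```

Note what this does and doesn't prove: a valid tag shows the bytes haven't changed since signing, but says nothing about content that was never signed in the first place, which is exactly the retroactivity gap discussed below.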
But these solutions only work if widely adopted, and they can't retroactively verify existing content. The billions of images and videos already online remain unverifiable. And any authentication system can potentially be circumvented or spoofed.
The fundamental problem remains: in a digital environment, perfect copies are indistinguishable from originals. There's no physical artifact to examine, no original negative to verify against. Everything is bits, and bits can be manipulated.
Real-World Consequences
The implications extend beyond individual deception:
Political manipulation: Fake videos of candidates saying inflammatory things could influence elections. By the time they're debunked, the damage is done. Voters remember the fake more than the correction.
Financial fraud: Voice cloning enables sophisticated scams. Criminals can impersonate executives, family members, or authority figures with perfect vocal accuracy.
Harassment and abuse: Deepfake pornography has been used for revenge, harassment, and blackmail. Victims face images of themselves in situations that never occurred, yet the images are convincing enough to destroy reputations.
Erosion of trust: Perhaps most insidiously, deepfakes erode trust in all media. When anything could be fake, people retreat to tribal epistemology—believing what their group believes, dismissing contradictory evidence as fabricated.
The Practical Evil Demon
Descartes' evil demon was omnipotent—it could deceive you about everything. Deepfakes are more limited but more real. They can't deceive you about your own thoughts, but they can deceive you about external events.
The demon doesn't need to be omnipotent. It just needs to be good enough, often enough, to make you uncertain. And that's exactly what deepfakes do.
You watch a video and think: "Is this real?" You see a photograph and wonder: "Could this be AI-generated?" You hear a voice on the phone and question: "Is this actually them?"
The evil demon isn't a supernatural being. It's an algorithm, trained on millions of examples, capable of generating convincing falsehoods on demand. And unlike Descartes' demon, it's real, accessible, and improving every day.
Living in the Deepfake Era
So how do we navigate this? Some strategies have emerged:
Source verification: Trust established institutions with verification processes. A video from a reputable news organization that has verified its source is more reliable than a random social media post.
Contextual analysis: Does the content make sense given what else you know? Is it consistent with the person's known positions and behavior? Deepfakes often fail contextual checks even when they pass visual ones.
Healthy skepticism: Extraordinary claims require extraordinary evidence. A shocking video should be treated with suspicion until verified through multiple independent sources.
Technical literacy: Understanding how deepfakes work helps identify them. Unnatural movements, inconsistent lighting, and audio-visual mismatches are common tells.
But these are imperfect solutions. They require effort, expertise, and time—resources most people don't have for every piece of content they encounter. And as the technology improves, even experts struggle to distinguish real from fake.
The Philosophical Implication
Deepfakes illustrate a fundamental philosophical problem: the gap between appearance and reality. Descartes worried that appearances might not match reality. Deepfakes make that worry concrete.
In philosophy, this is called the problem of perception. We don't experience reality directly—we experience our perceptions of reality. Usually, we assume our perceptions are reliable. But what if they're not?
Deepfakes show that our perceptions can be systematically manipulated. The video looks real, sounds real, and feels real—but it isn't. Your senses are giving you false information, and you have no way to detect it from the experience itself.
This is exactly the scenario Descartes described. An external force (not a demon, but an algorithm) creates false perceptions that are indistinguishable from real ones. You're not in a vat, but you might as well be—your information about the world is being manipulated, and you can't tell.
The Question of Truth
The deepfake era forces us to confront uncomfortable questions:
If we can't trust video evidence, what can we trust?
If photographs can be perfectly faked, how do we verify anything?
If voices can be cloned, how do we authenticate identity?
If our senses can be fooled, how do we know what's real?
These aren't abstract philosophical puzzles anymore. They're practical problems we face every day. The evil demon isn't a thought experiment—it's a technology we've created and deployed at scale.
Tomorrow, we'll explore another way technology makes us brains in vats: virtual reality, where we voluntarily choose synthetic experiences over real ones. But deepfakes are different—they're imposed deception, not chosen immersion. They're the evil demon we didn't ask for, operating in the background, making us uncertain about everything we see and hear.
Descartes sought certainty and found it only in his own existence. In the age of deepfakes, even that certainty is challenged. If your memories can be manipulated, your perceptions deceived, and your evidence fabricated, what's left to be certain about?
The answer, increasingly, is: very little. And that's the world we're learning to live in.
References
[1] "No, President Zelenskyy did not tell Ukrainian soldiers to surrender," Poynter Institute, March 17, 2022. https://www.poynter.org/tfcn/2022/no-president-zelenskyy-did-not-tell-ukrainian-soldiers-to-surrender/
[2] Catherine Stupp, "Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case," The Wall Street Journal, August 30, 2019. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
[3] Bo Zhao et al., "Deep Fake Geography? When Geospatial Data Encounter Artificial Intelligence," Cartography and Geographic Information Science, Vol. 48, No. 4, 2021. https://www.tandfonline.com/doi/full/10.1080/15230406.2021.1910075
[4] Alex Hern, "AI used to face-swap Hollywood stars into pornography films," The Guardian, January 25, 2018. https://www.theguardian.com/technology/2018/jan/25/ai-face-swap-pornography-emma-watson-scarlett-johansson-taylor-swift-daisy-ridley-sophie-turner-maisie-williams