What a Deepfake Is
A "deepfake" is a piece of media (photo, video, audio) that's been generated or altered by AI to show something that didn't actually happen. The name comes from "deep learning" (a kind of AI) plus "fake". Five years ago, making a convincing fake video took a Hollywood budget. Today, teenagers can do it on a phone for free.
Deepfakes aren't all bad. The same technology that powers deepfakes is behind harmless fun like video filters, dubbing foreign films, and generating art. What's changed is that the cost has fallen to zero, and the skill required has fallen to near zero too.
You can't avoid deepfakes. You just need to adjust the trust you give to images and videos, the way you already did for emails.
How They're Made
You don't need to understand the technical details. You do need to know what's possible, so you're not fooled when you see it.
AI-generated images come from tools like ChatGPT's image generator, Midjourney, or the image features inside Gemini. You type a description ("a photo of an older man sitting on a verandah holding a guitar") and the AI produces an image that looks like a photograph. The result can be indistinguishable from a real photo at a glance.
AI-generated video comes from tools like Sora, Runway, and Google Veo. You type a short description and get a short video clip. The results are good enough to fool many people on a quick scroll, though they often fall apart on close inspection.
AI voice cloning can copy someone's voice from as little as 3 to 10 seconds of audio. Facebook videos, voicemails, TikToks: anything with your voice can be used as a sample. The cloned voice can then say whatever the scammer types.
Face swaps take a real video and replace one person's face with another. This is the classic "deepfake" people picture, though it's now only one of several techniques.
It only takes a few seconds of your voice
If you've recorded a voicemail greeting, posted a TikTok, or left an Instagram video with you talking in it, there's already enough to clone your voice. This isn't cause for panic; it's cause for knowing about the "Hi Mum" scam we covered in Session 4, now with a cloned voice on the phone instead of just a text.
Where You'll See Them
Deepfakes are already part of everyday life online. Here's where they show up, and why.
Scam ads on Facebook and Instagram. Fake videos of celebrities or news anchors "endorsing" an investment scheme, a crypto platform, or a miracle product. The faces are real people. The voices are cloned. The endorsement is fake.
Political misinformation. Fake videos of politicians saying things they never said, timed around elections or major events. These spread fastest on WhatsApp, Facebook, and X.
"Hi Mum" voice scams. The text-message version of this scam has now evolved. The voice on the phone really does sound like your child, because it's been cloned from their social media.
Romance scams. The photos have been fake for years. Now the video calls are too: short clips generated in real time so the scammer can "prove" they're who they claim to be.
Harassment and image-based abuse. Someone's face being put on content they never consented to, often sexual. This is illegal in Australia and reportable to eSafety.
Harmless fun. Filters on Snapchat and TikTok. Movie dubbing. Music videos with surreal effects. Memes. The same technology.
The tech is neutral. How it's used is what matters. Your job is to know when it's being used against you.
Spotting AI-Generated Images
AI-generated images have improved fast, but they still have tells. The best images pass casual inspection. The mediocre ones (still most of them) fail these checks.
Check the hands
Count the fingers. Look for weird bends, fused knuckles, or hands that don't quite match the wrist. Hands are still one of the hardest things for AI to get right.
Check the text in the image
Shop signs, book titles, product labels, name tags. AI images often render text as plausible-looking gibberish. If there's writing, zoom in and read it carefully.
Check the background
AI focuses on the main subject and loses quality on what's behind it. Look for warped buildings, duplicated cars, people with blurry faces, windows that don't quite align, trees that morph into each other.
Check the skin and teeth
AI skin is often too smooth and airbrushed-looking, even on older people. Teeth can look uncannily perfect, or there can be the wrong number of them if you can see them clearly.
Check jewellery, glasses, and accessories
Earrings that don't match. Glasses with mismatched frames. Necklaces that disappear behind clothing and come out wrong on the other side. Watches with nonsense numbers.
Check the source
Where did you see it? A verified news outlet, a known person you follow, or a random account with no history? Source trumps appearance. A real photo from a real journalist is worth more than a convincing image from nowhere.
Reverse image search
Right-click on an image in a browser (in Chrome) and choose "Search image with Google", or go to images.google.com and upload the file. You'll see where else the image has appeared, which often reveals it as AI-generated, or as a real photo taken years ago in a different context.
Spotting AI Video
Video deepfakes are harder to make than images, so they're not quite as common yet, and they fail in more obvious ways. But they're improving fast.
Watch the mouth
Lip-sync is one of the hardest things for AI to get right. Watch the mouth in slow motion, or pause the video. Does the shape of the mouth match the sound? Often you'll see the lips closed on a sound that should be open, or the other way round.
Watch the eyes
Blinking rates can be off. Eyes may not track objects naturally. Sometimes eyes look painted on rather than moving with the head.
Watch the edges of the face
In face-swap deepfakes, the jawline, ears, and hairline are where the swap struggles. Look for a slightly blurred edge, a weird colour change, or hair that "flickers" between frames.
Listen to the audio
AI voices often have a too-even rhythm. Real people pause, stumble, breathe, say "um". A voice with no imperfections, or with emphasis in the wrong place ("today I went to the SHOP"), is suspicious.
Check the context
If a politician, celebrity, or public figure has "said" something surprising, check a major news outlet before believing it. Real news gets reported by multiple outlets within hours. Viral clips that never quite make the news are often fake.
Spotting AI Voice, Especially on a Phone Call
Voice cloning is the trickiest one to spot in real time. When the "voice of a loved one" calls you in distress, your brain doesn't want to audit the audio.
Ask a question only the real person would know
"What's the name of the first dog we had?" "Where did we go camping that time you got stung?" A cloned voice can't improvise answers to personal memories; only the real person can.
Listen for emotional flatness
Cloned voices often get the words right and the emotion slightly wrong. A crying voice that sounds like a calm voice running a sad filter. A voice in a panic that has no ragged breath.
Call them back, on a number you have
If your daughter is calling from an unknown number claiming to be in trouble, hang up, then call her on the number you already have. If she doesn't answer, call someone who'd know where she is before you send any money.
Agree a family code word in advance
Pick a word that only your family knows. If anyone calls claiming to be a family member in an emergency, they have to say the code word. This is the single most effective defence against voice cloning, and it takes five minutes at the dinner table.
This one sounds silly until the day it matters. A family code word is the cheapest, simplest, strongest protection against every voice-cloning scam that exists. Pick it tonight.
Protecting Your Own Face, Voice, and Name
You can't make yourself un-cloneable. But you can reduce the amount of raw material available to people who'd misuse it.
Sensible defaults, not paranoia
Set your Facebook and Instagram profiles so only friends can see your posts. Public profiles are the richest source of voice and face samples.
Be mindful of the videos you post that include your voice. You don't have to stop posting, just know what's out there.
Photos of children, especially in school uniforms or named locations, are worth thinking about twice.
The family code word (from the section above) is the single highest-leverage thing on this list.
If you're in a public role (government, teaching, community leadership), consider using a separate work-facing persona and a private family-facing one.
If someone makes a deepfake of you. This is harassment or image-based abuse, and Australia has strong laws against it.
Report to the platform (Facebook, Instagram, TikTok) using their "report" function.
Report to the eSafety Commissioner (esafety.gov.au). They have the power to order platforms to take content down.
If it's sexual or intimate, report to police as well. It's a criminal offence in Australia.
Misinformation and What to Trust Online
Step back for a minute. Deepfakes are just one tool in a much older problem: people sharing things online that aren't true. Most of what spreads on social media isn't deepfakes; it's real images taken out of context, real videos from years ago reposted as today's news, or confident posts by people who got something wrong and never corrected it.
The question isn't "is this image real?". The question is "is this image telling the truth?"
A three-step sanity check for anything that makes you angry, scared, or triumphant within five seconds of seeing it:
1. Slow down. The emotion is the hook, same as with scams. Notice the feeling. Don't share yet.
2. Check another source. Is a real news outlet (ABC, Guardian, SBS, a major regional paper) reporting this? If the story is real and significant, yes. If only social media is carrying it, be cautious.
3. Check the date. How old is this image or video? Old photos get recirculated constantly with new captions. Look for publication dates, and do a reverse image search to check.
You don't have to do this for every meme. You do have to do it for anything you're about to share, because sharing is how misinformation spreads.
Try It Yourself
Put your detection skills to the test.
1. Generate an image. Open ChatGPT, Gemini, or Copilot and ask it: "Make me a realistic photo of an older Aussie woman at a community market in the Northern Territory." Look at the result. What's wrong with it? What would give it away if someone claimed it was a real photo?
2. Reverse image search. Save that image. Then upload it to images.google.com and see what comes back. Try the same with a real photo from your own phone. Compare the results.
3. Discuss a family code word. Before you leave the room, decide on the word you'd use with your family. Write it down. Tell them tonight.
The lesson. Deepfakes look more real than you might expect, but they still have tells. Your eyes and your common sense, plus a reverse image search, will catch most of what you see in the wild. And one family code word will protect you from the scam that matters most.