Today, more of us than ever are turning to the internet and technologies like remote medicine for healthcare services and advice.
But given the explosion of AI-generated fake news and disinformation online, is this potentially putting our health at risk?
This is a worrying question we have to tackle head-on in the age of deepfakes – highly realistic AI-generated content showing things that never happened or people saying things they never said.
Most people first came across deepfakes as often viral, entertaining novelties like DeepFake Tom Cruise. But their darker side takes in political manipulation, election interference, conspiracy theories and non-consensual pornography.
However, some of the most nefarious uses are emerging in healthcare, where accurate, factual information and trust are critically important to good outcomes.
From faked celebrity endorsements to harmful AI-generated medical advice, are deepfakes fueling a dangerous wave of misinformation and disinformation that could seriously damage our health?
How Can Deepfake Information Be Bad For Your Health?
Fake celebrity endorsements have long been a problem online; Oprah Winfrey, for example, has spent years voicing her frustration over fake ads that use her name to sell diet pills and supplements. But in an age when virtually anyone can create a convincing video of almost anything in seconds, the problem is significantly amplified.
Tom Hanks is one of the high-profile figures who has felt the need to issue a public warning that deepfake videos of him promoting “miracle cures and wonder drugs” are not genuine and could pose a threat to health.
If you wouldn’t necessarily trust Tom Hanks to give you medical advice anyway, what about a famous TV doctor? There have been several cases where deepfake videos of well-known professionals have been used to entice viewers into buying bootleg or illegal medicines for diabetes and high blood pressure.
Charities can be targets too – in Australia, Diabetes Victoria warned that footage of experts endorsing supplements as an alternative to medical treatment for diabetes was deepfaked.
The people featured in deepfake health disinformation don’t even have to be real as long as they seem trustworthy. A wave of fake and potentially dangerous medical advice in TikTok videos prompted a New York Post investigation, which found that fake clips featuring entirely made-up people can be created with a few clicks on an app.
There are also fears that deepfakes could be used to spread misinformation about public health issues like pandemics and vaccinations. Disinformation spread by fake celebrities or healthcare professionals could impact compliance with public health advisories like mask-wearing or hand-washing, reducing their effectiveness.
Deepfake threats to healthcare are particularly concerning because of the importance of trust. After all, a society that doesn’t trust healthcare professionals probably won’t stay healthy for long. If people can no longer distinguish genuine medical advice from dangerous disinformation, the consequences for everybody could be severe.
Staying Healthy In The Age Of Deepfake Disinformation
The important thing to remember is that deepfake technology is as easy to spot, and as unconvincing, today as it will ever be. Tomorrow, like all AI, it will be far more sophisticated.
This is a scary prospect if you’re already wondering how you’re supposed to survive in a world where Oprah Winfrey and Tom Hanks are trying to kill you with bad advice.
Lawmakers in some areas are starting to get to grips with the scale of the challenge. The EU AI Act, China’s Generative AI Measures, and various US state-level laws all contain measures aimed at preventing harm.
But as with the explosion in so many other digital crimes—phishing scams, ransomware and the like—there’s a limit to what the law can do. Deepfake fraud is an international crime with a very low barrier to entry, and it’s unlikely to be stamped out any time soon.
For most of us, the answer lies in ourselves, which means taking on the responsibility of developing our own critical thinking and deepfake-era survival skills.
This starts with understanding the threats. It’s possible that the employee of a Hong Kong business who transferred $25 million to scammers on the instructions of a deepfake of their CEO’s voice might not have done so if they’d known it was an increasingly common scam.
You also need to hone your skills at verifying information, checking sources for reliability, and looking out for technical signs that what you’re watching isn’t right. These often include unusual facial movements or audio and video that seem mismatched.
Navigating the mountain of online content, help and advice related to health, exercise and diet was a minefield at the best of times. Thanks to deepfakes, it’s getting a whole lot trickier. But by understanding the threat and taking a few steps to separate fact from fiction, we can all help protect ourselves and each other from harm.