Jeff Clune is an associate professor of computer science at the University of British Columbia and a Canada CIFAR AI Chair at the Vector Institute.
Money-grabbing scams have been happening for longer than we can remember: con artistry, Ponzi schemes, online phishing, even snail mail gambits. But the advent of more sophisticated artificial intelligence will soon make these tricks far more common, complex and convincing. Knowing who or what to trust online is about to get much more challenging—and we had better be prepared.
Photographs and videos used to be one way of verifying whether things are true. Now, AI programs have opened the door to forms of scamming that are much harder to detect. Criminals can simply use software like Midjourney or DALL-E 2 to generate realistic photos of almost anything. AI can also mimic a person's voice, and soon AI-generated videos will be able to clone their appearance and facial expressions too. There are already deepfakes of Elon Musk telling people to invest their money in crypto, Tucker Carlson promoting new trading platforms and Ron DeSantis appearing in an episode of The Office as Michael Scott. We are losing our core methods of verification in ways most of us could not even have imagined just three years ago.
We have to prepare to face elaborate scams that take advantage of our trust. Picture this: in the near future, you might receive a video call from a family member asking for money because they are in a tight spot. It probably would not occur to you that you could be talking to a deepfake, but this type of trickery, though still rare, is already spreading. Scammers can easily find existing videos or podcasts of somebody online and train an AI on that person's voice and facial expressions to create an uncanny resemblance. Because it is already possible to imitate my voice on a phone call, I have created a secret password for my family so we can verify each other's identities if we ever become suspicious over the phone.
The quantity of AI scams will also increase. Back in the '80s, people would occasionally receive scam letters by snail mail, but those attempts were few and far between because con artists did not want to pay for paper and postage. Email removed the cost of sending mail, and the number of attempted scams skyrocketed. Just as email did a few decades ago, AI is making scamming far easier and cheaper. This could enable disruptive levels of manipulation in the near future. Rather than spending hours trying to defraud one person, or paying 1,000 scam employees to work 40-hour weeks, a single scammer could train an AI model to deceive 100 million people simultaneously.
AI-generated, human-sounding robo-calls will contact millions of people at once. Instead of listening to a recorded message, the recipient will carry on a back-and-forth conversation without realizing the voice on the other end of the line is not human. AI scammers will impersonate police officers or civil servants and ask for donations to certain charities, for example. Such scams will also be politically motivated: think of an AI calling every Canadian to feed them false information about a political figure, or dictators flooding information channels with fake videos, comments, likes and emojis that subtly, yet relentlessly, repeat the party line. Criminals will wield AI to disrupt financial markets by pumping up worthless penny stocks or driving down the value of other stocks with fake news.
For now, scammers still need to be tech-savvy to create powerful AI-driven schemes. But in technology, what is hard now will be easy soon. As we develop better and more user-friendly AI software, a 12-year-old might be able to orchestrate a worldwide phishing campaign from their bedroom. Once it becomes cheap, quick and easy to program something that is convincingly human, we are in trouble. It will become incredibly difficult to know which information is trustworthy. Unless we invent technological and sociological solutions to these problems, I am afraid that we will soon not be able to trust most of what we see online, and that makes me deeply concerned about the future of our information landscape.
As we prepare for AI scams to proliferate, the best advice I can offer is for people to seek out and hold onto the sources they trust most—whether that is the New York Times or a particular reporter. But even then they must make sure they are in fact getting information from that source. We can also protect ourselves from scams by better understanding what technology can and cannot do. From now on, we should always second-guess what we see in text, websites and photos. Deepfakes are still rare, and most videos are trustworthy, but that will likely change. If a video seems surprising, do an independent search to see if people are reporting that it is fake. And if you have been scammed, tell others how it happened, so that they do not also fall prey.
Scams work best when they pull on heartstrings, and humans tend to relate and respond to other humans, especially those they know and trust. The better AI can impersonate a flesh-and-blood person, the more turmoil we can expect. As the cost of deception slides towards zero, we will need to build our individual and societal immune systems to handle an army of new threats. As with disease, an ounce of prevention is worth a pound of cure, and the best vaccination against scams is knowledge and a sizable dose of skepticism.
We reached out to Canada's top AI thinkers in fields like ethics, health and computer science and asked them to predict where AI will take us in the coming years, for better or worse. The results may sound like science fiction—but they're coming at you sooner than you think. To stay ahead of it all, read the other essays that make up our AI cover story, which was published in the November 2023 issue of Maclean's.