News
USA | Mar 7, 2023

Scammers turning to AI


Utilising the technology to aid their illicit activities

Scammers are using artificial intelligence (AI) to sound more like family members in distress to aid in their illicit activities.

What is most interesting is that people are falling for it and losing thousands of dollars. The Washington Post reported on the case of Ruth Card, who received a call from a man who sounded just like her grandson, Brandon, saying he was in jail with no wallet or cellphone and needed cash for bail.

Card, 73, and her husband, Greg Grace, 75, dashed to their bank in Regina, Saskatchewan, and withdrew CAD$3,000 (US$2,207), the daily maximum. They hurried to a second branch for more money, but a bank manager pulled them into his office and told them that another customer had received a similar call, which had turned out to be fake.

Card recalled the banker saying the man on the phone probably wasn’t their grandson, which is when they realised they had been duped.

Impersonation scams

As impersonation scams in North America rise, Card’s ordeal is indicative of a troubling trend. Technology is making it easier and cheaper for bad actors to mimic voices, convincing people, often the elderly, that their loved ones are in distress.

In 2022, imposter scams were the second most common racket in America, with over 36,000 reports of people being swindled by those pretending to be friends and family, according to data from the Federal Trade Commission. Over 5,100 of those incidents happened over the phone, accounting for over US$11 million in losses, FTC officials reported.

Advancements in AI have added a terrifying new layer, allowing bad actors to replicate a voice with just an audio sample of a few sentences. Powered by AI, a slew of cheap online tools can translate an audio file into a replica of a voice, allowing a swindler to make it “speak” whatever they type.

“It’s sort of the perfect storm … [with] all the ingredients you need to create chaos.”

Hany Farid, a professor of digital forensics at the University of California at Berkeley

Experts say federal regulators, law enforcement and the courts are ill-equipped to rein in the burgeoning scam. Artificially generated voice technology is making the ruse more convincing, as victims report reacting with visceral horror when hearing loved ones in danger.

AI voice-generating software analyses what makes a person’s voice unique, including age, gender and accent, and searches a vast database of voices to find similar ones and predict patterns. It can then re-create the pitch, timbre and individual sounds of a person’s voice to produce an overall effect that is strikingly similar.
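To give a sense of how accessible this kind of cloning has become, below is a minimal sketch in Python using the open-source Coqui TTS library and its XTTS voice-cloning model. The model name, file paths and spoken text are illustrative assumptions only; this is not the specific software used in any of the cases described here.

# Minimal voice-cloning sketch using the open-source Coqui TTS library (pip install TTS).
# The model name, file paths and text below are assumptions for illustration only.
from TTS.api import TTS

# Load a pre-trained multilingual voice-cloning model (downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# speaker_wav is a short recording of the target voice; from it the model
# re-creates the pitch, timbre and accent, and "speaks" whatever text is typed.
tts.tts_to_file(
    text="Hi, it's me. I'm in trouble and I need your help.",
    speaker_wav="sample_of_target_voice.wav",   # hypothetical input file
    language="en",
    file_path="cloned_voice_output.wav",        # hypothetical output file
)

A few seconds of audio, of the kind readily scraped from social media, is typically enough input for tools like this, which is what makes the scam so cheap to run.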

Little legal precedent

The Washington Post reports that most victims have few leads to identify the perpetrator and it is difficult for the police to trace calls and funds from scammers operating across the world. What is most unfortunate is that there is little legal precedent for courts to hold the companies that make the tools accountable for their use.

Hany Farid, a professor of digital forensics at the University of California at Berkeley. (Photo: bids.berkeley.edu)

“It’s terrifying,” said Hany Farid, a professor of digital forensics at the University of California at Berkeley. According to him, “it’s sort of the perfect storm … [with] all the ingredients you need to create chaos”. Although imposter scams come in many forms, they essentially work the same way: a scammer impersonates someone trustworthy – a child, lover or friend – and convinces the victim to send them money because they are in distress.

Companies such as ElevenLabs, an AI voice-synthesising start-up founded in 2022, transform a short vocal sample into a synthetically generated voice through a text-to-speech tool. ElevenLabs’ software can be used for free or for between US$5 and US$330 per month, with higher tiers allowing users to generate more audio.

ElevenLabs burst into the news following criticism of its tool, which has been used to replicate voices of celebrities saying things they never did, such as Emma Watson falsely reciting passages from Adolf Hitler’s Mein Kampf.

ElevenLabs did not return a request for comment but, in a Twitter thread, the company said it is incorporating safeguards to stem misuse, including banning free users from creating custom voices and launching a tool to detect AI-generated audio.
