Amanda Hardesty’s family recently fell prey to a sophisticated fake kidnapping scam. Her 19-year-old son received a distressing call from an individual claiming to have abducted his younger sister.
“What really caused him distress was when she spoke on the phone. It was unmistakably her voice, pleading with her brother for her life,” Hardesty recounted.
In reality, Hardesty’s daughter had not been kidnapped and was attending class at her high school during the incident. Scammers, employing advanced artificial intelligence (AI), had replicated her voice to deceive her family into believing she was in grave peril.
“Technology has bestowed many benefits upon society, but it has also introduced significant dangers,” she remarked.
Convinced that his sister’s life was in immediate danger, Hardesty’s son transferred $1,000 to an offshore account.
The incident was reported to the St. Louis County Police Department. Kirkwood police were also contacted by several individuals who had received deepfake calls from purported kidnappers. The situation has escalated to such an extent that the FBI’s St. Louis office recently issued a warning to local law enforcement agencies.
The Eureka Police Department has been investigating multiple similar cases. Captain Michael Werges offered straightforward advice to avoid falling victim to these scams.
“A simple precaution is to establish a family safe word. Everyone in the family should know it. If someone claiming to be a family member cannot provide the safe word, it’s a clear sign of deception,” Werges advised.
Crystal Welch’s uncle was another target of this scam. He received a call from a scammer claiming to have abducted Welch, and the story the caller told was strikingly similar to other cases: he said his vehicle had been involved in an accident, but because he was carrying a large quantity of drugs, he did not want the police involved, so he had supposedly abducted Welch when she insisted on contacting authorities.
Welch’s uncle heard what he believed to be her voice on the call.
“He answered, and it was my voice. He was utterly convinced. I was crying and begging for help,” Welch described.
Her uncle was instructed to drive to a Lowe’s Home Improvement store in Kirkwood and hand over money to men in a white van. As the voice impersonating Welch grew increasingly frantic, the caller threatened her life. When the scammers demanded the money, her uncle refused and hung up.
The traumatic experience has deeply affected Welch’s family, prompting her to speak out in hopes of preventing other families from falling victim to similar scams.
“How can someone be so inhumane as to violate others’ lives in such a manner and instill such fear in families? It’s absolutely abhorrent,” Welch expressed.
Scammers are believed to create these counterfeit voices using AI voice-cloning software and recordings pulled from videos posted to social media.
In addition to using a safe word, anyone receiving such a call is advised to listen for unnatural pauses in the supposed voice of their loved one, pay close attention to the voice’s rhythm and flow, and be wary of irregular pronunciations.
This article was originally published on firstalert4.
FAQs
What is a deepfake?
A deepfake is synthetic media, typically video or audio, created with artificial intelligence to convincingly mimic a real person’s likeness or voice.
How can I protect my family from deepfake scams?
Establish a family safe word, stay informed about the latest scams, and teach family members to recognize the signs of deepfake voices.
Are legal actions being taken against deepfake scammers?
Yes. Laws addressing the misuse of deepfake technology are evolving, and reporting incidents to law enforcement is crucial to stopping these scammers.
What should I do if I receive a suspicious call?
Stay calm, verify the caller’s identity through alternative means, and do not transfer money. Report the call to the authorities immediately.
How is technology evolving to combat deepfakes?
Advanced AI tools are being developed to detect deepfakes by analyzing audio and video for anomalies. Social media platforms are also enhancing privacy measures to reduce the availability of personal content for deepfake creation.