Deepfakes and malware: AI menu grows longer for threat actors, causing headaches for defenders

Artificial intelligence has dramatically improved a number of technologies in a short period of time. Unfortunately, these technologies are also being weaponized by an assortment of malicious actors.

One example of this growing problem is the burgeoning field of voice phishing, or vishing: fraudulent phone or video scams in which attackers impersonate trusted individuals. Real-time AI voice-changer software is freely available in open-source repositories on GitHub, and security researchers have been taken aback at how effective these tools have become.

Tom Cross of GetReal Security spoke about credential resets during Cyphercon.

The presumption has been that voice biometrics will help distinguish what is real from what is fake. However, that may not necessarily be the case, according to Tom Cross, head of threat research at GetReal Security Inc., who has personally tested the latest AI voice-changing tools.

“It not only sounds like me, it sounds like me to the most sophisticated voice biometrics that we’ve tried,” said Cross. “People just don’t understand what is possible right now.”

Targeting credential resets

Cross presented the results of his research at Cyphercon, a gathering of cybersecurity researchers and chief information security officers held this week in Milwaukee. The event provided an inside-the-community look at what some of the country’s top security experts are seeing in the evolving threat landscape.

It is not a pretty picture. Cross detailed his work assessing recent attacks against the credential lifecycle, noting that AI has fueled a noticeable rise in fraudulent voice and face tools.

Credential resets are currently a prime attack vector for malicious actors, a trend exacerbated by the shift to remote work brought on by the 2020 COVID-19 pandemic. Where employees were once required to appear in person at the office to prove their identity for recredentialing, far more of them now never set foot in an office at all.

“The number of people working fully remote has more than doubled,” Cross noted. “We’re converting the human into a private key.”

One of the more significant hacks involving a credential reset occurred in 2023, when MGM Resorts International and Caesars Entertainment Inc. were hit with a ransomware attack. The exploit, by a cybercriminal group known as Scattered Spider, targeted a third-party information technology vendor through a social engineering attack that persuaded a service desk engineer to reset authentication factors for high-privilege users.

Cross warned that AI-based tools are making it easier for cybercriminals to generate convincing social engineering attacks similar to the exploit that took down slot machines and reservation systems in Las Vegas three years ago. He recommended a number of controls for organizations to implement as soon as possible: requiring sign-off from two help desk employees for any credential reset, conducting a video call with the requester that prohibits virtual backgrounds or blurring, and having a manager from the requester’s org chart either join the call or vouch for the request, as sketched in the code below.

“They picked an employee who worked in IT, this person had super admin rights,” Cross said. “It doesn’t take that much information with what is publicly available to target this process.”
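The exact workflow will vary by organization, but the dual-control idea Cross described is simple enough to express in code. The following Python sketch is illustrative only: the class, field names and approval flow are hypothetical, and a real deployment would wire these checks into a ticketing system and identity provider rather than an in-memory object. It simply refuses to proceed until all three of the controls Cross suggested have been satisfied.

```python
from dataclasses import dataclass, field

@dataclass
class ResetRequest:
    """Hypothetical model of a help desk credential-reset request."""
    requester: str
    live_video_verified: bool = False   # live video call, no virtual background or blur
    manager_vouched: bool = False       # manager joined the call or vouched for the request
    approvals: set[str] = field(default_factory=set)  # distinct help desk sign-offs

    def approve(self, help_desk_agent: str) -> None:
        self.approvals.add(help_desk_agent)

    def may_reset(self) -> bool:
        """All three controls must pass before authentication factors are reset."""
        return (
            len(self.approvals) >= 2    # sign-off from two help desk employees
            and self.live_video_verified
            and self.manager_vouched
        )

# Example: a reset is only permitted once every control has been met.
req = ResetRequest(requester="jdoe")
req.approve("agent_a")
req.approve("agent_b")
req.live_video_verified = True
req.manager_vouched = True
assert req.may_reset()
```

The point of encoding the policy this way is that no single help desk employee, however persuasively socially engineered, can complete a reset alone.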

Leveraging deepfake video tools

The growing arsenal of AI deepfake tools extends to video as well. A wave of “digital arrest” scams is currently sweeping India, according to James McQuiggan, chief information technology officer at Quilligence and education director at the Florida Cyber Alliance.

James McQuiggan of Quilligence described the rise of AI video deepfake tools to attendees at Cyphercon.

He described how malicious actors use spoofed phone numbers to serve phony warrants, often through a WhatsApp video channel. AI-generated videos, in some cases featuring digitally crafted people impersonating real judges, appear on a user’s screen in believable courtroom settings, threatening arrest for a variety of crimes unless payment is made.

“They keep them online so that they have to pay or they will be arrested,” McQuiggan said. “We’re seeing these fake arrest scams happening in India and other Asian countries. I won’t be surprised if we start seeing these [in the U.S.] in the next six months or even sooner.”

Rapid advancements in AI video tools will undoubtedly make this easier for threat actors. McQuiggan demonstrated a realistic 30-second deepfake video of a speaker who had presented earlier at the conference that day; producing it took him only four minutes. Perhaps even more startling was a tool he showcased onstage, Decart Video AI, which used his phone’s camera to capture his full-body movement and render it live as someone else’s face and body.

“Creating these deepfakes doesn’t take a lot of effort, especially when you’ve got the services out there,” McQuiggan told the Cyphercon audience. “Some cost only a couple of dollars.”

Self-coding malware appears

Though deepfakes are a visible manifestation of how malicious actors are using AI to up their game, threat researchers are also sounding alarms about a new class of software that quietly operates in the digital shadows.

Polymorphic AI malware is beginning to appear in autonomous and adaptive attacks, according to a recent threat report from Google Mandiant. The malware calls AI model application programming interfaces to generate malicious code on demand during execution, altering its signature and behavior to evade traditional signature-based detection systems.

Google Mandiant researchers have found versions of the malware in Russian government-backed attacks against Ukraine. In a presentation on polymorphic AI, security researcher Stephen Sam said that though the malware was unusual in its capability to self-modify “just in time,” it still had to follow basic rules of programming that ultimately made discovery possible.

“Persistent read-write in memory that isn’t backed by a file is a smoking gun,” Sam noted. “Even self-writing malware has to touch a disk eventually. We are in an arms race, but it is not a hopeless one. The shape-shifters are fast and smart, but they still have to play by immutable rules.”
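Sam’s “smoking gun” heuristic can be illustrated in a few lines of code. The Python sketch below is Linux-specific and assumes read access to /proc; it flags memory mappings that are both writable and executable yet have no backing file, the pattern Sam described. It is a teaching example rather than a detector: real endpoint agents correlate many more signals than this.

```python
import os
import re

# /proc/<pid>/maps line format: address range, permissions, offset, device,
# inode, then an optional pathname for file-backed mappings.
MAPS_LINE = re.compile(
    r"^(?P<start>[0-9a-f]+)-(?P<end>[0-9a-f]+)\s+(?P<perms>\S{4})"
    r"\s+\S+\s+\S+\s+\S+\s*(?P<path>.*)$"
)

def suspicious_regions(pid: int) -> list[str]:
    """Flag writable+executable mappings not backed by a file."""
    findings = []
    try:
        with open(f"/proc/{pid}/maps") as fh:
            for line in fh:
                m = MAPS_LINE.match(line)
                if not m:
                    continue
                perms, path = m["perms"], m["path"].strip()
                # Writable and executable at once, with no file behind it
                # ([heap]/[stack] pseudo-paths count as anonymous here).
                if "w" in perms and "x" in perms and (not path or path.startswith("[")):
                    findings.append(
                        f"pid {pid}: {m['start']}-{m['end']} {perms} {path or '<anonymous>'}"
                    )
    except (FileNotFoundError, PermissionError):
        pass  # process exited or insufficient privileges
    return findings

if __name__ == "__main__":
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            for finding in suspicious_regions(int(entry)):
                print(finding)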

The challenge facing cybersecurity professionals is that though the immutable rules of computing constrain threat actors for now, attackers will keep probing those limits, and they have access to a powerful array of AI tools that make defenders’ lives even harder. This could ultimately lead to a shift in how security is viewed, with practitioners thinking less like defenders and more like hackers themselves.

“Hacker CISO” Mishaal Khan delivered the keynote about open-source intelligence at Cyphercon in Milwaukee.

An example of this mindset can be seen in the career of Mishaal Khan, a self-described “Hacker CISO” who has cultivated a reputation for ethical hacking that helps others by using information readily available online. Khan’s approach relies on OSINT, or open-source intelligence, and has played a role in the resolution of hacks such as the breach of the ParkMobile cashless parking app, while his smaller investigations have unmasked stalkers, sextortionists and people behind online bomb threats.

In his keynote appearance at Cyphercon this week, Khan described a philosophy that may prove to be the way forward for many seeking to combat AI-armed adversaries besieging networks and private citizens today.

“A lot of these breadcrumbs around the internet create a profile about you,” Khan said. “I feel I’m enabling and arming people to use their skills to do good. We bear a lot of responsibility to do things right. It’s a big task. Things are changing fast and we need to move accordingly.”

Image: SiliconANGLE/ChatGPT
