More

    What Cyber Experts Fear Most in 2026: AI-Powered Scams, Deepfakes, and a New Era of Cybercrime

    Recently, I asked several cybersecurity experts, including my colleagues here at PCMag, for their predictions on online privacy and safety in 2026. After all, 2025 was tumultuous, with scammers infiltrating social media platforms, hackers utilizing AI to automate various crimes-as-a-service businesses, and ransomware attacks hitting companies throughout the year. While not exactly surprising, most of the responses were pretty negative, to say the least.

    Why the doom and gloom? In short, generative AI has had a significant impact on the tech industry, and we are now starting to see its effects on security. It’s hard to know the true long-term impacts that unregulated, unrestricted generative AI tools will have on society, but for now, it’s definitely making crime a lot easier.

    Keep reading for our expert forecasts, followed by suggestions to stay safe in the new year.


    Scams Go High-Tech: AI’s Personal Touch

    “I’m concerned about AI’s influence on cybersecurity,” Lucas Hansen, founder of CivAI, a non-profit focused on raising public awareness of AI’s capabilities and risks, told me. 

    I spoke with Hansen earlier this year when gathering background research about deepfakes. He said that AI can quickly gather massive amounts of personal data about potential scam targets, including any photos or videos of them posted on the open web. The criminals collect the data, then use an LLM to create deepfake videos and voices of a target’s colleagues or loved ones. 

    The scammers then use deepfakes to make contact via phone calls or video chat. They ask for money, collect information for blackmail purposes, or compel the target to click a phishing link. Scammers will also use AI to craft highly personalized phishing emails or text messages to ensnare their victims.


    I’m concerned about AI’s influence on cybersecurity.
    – Lucas Hansen, founder of CivAI

    Spear phishing used to be the domain of state actors, and these crimes were typically reserved for high-profile targets, such as executives at financial institutions or government officials. After all, scammers could rack up a lot of billable hours while compiling a target’s dossier. Hansen told me that AI can do the same job much quicker, for free.

    “Now, everyone is an MVP target”, said Hansen. “Even if that’s all AI changes about cyber security, that’s an absolute disaster.”


    Big Brother Ads: Smarter and Creepier Than Ever

    I knew my online goose was cooked when ads for eye wrinkle creams and local gyms started appearing after I uploaded a photo of myself to a social media platform. Justyn Newman, senior security analyst at PCMag, says highly targeted ads like the ones I saw are just the beginning. 

    “As privacy continues to erode, and the cost of AI generation for businesses decreases, we’ll see some of the bigger players in the ad space attempt to roll out algorithm-based ads that generate based on a user’s habits.”

    These targeted ads are the result of years of customer data collection, and sometimes, data loss by companies. For example, after my personal inbox was overrun with scam and spam messages, I realized my email address was on the dark web because Tumblr lost it (and other data) in a 2013 data breach. 

    We witnessed the effects of invasive ads over the summer, as researchers at Black Hat showed that hackers have infiltrated ad networks and served advertisements for fake cryptocurrency, dating apps, gambling sites, and porn. It’s especially concerning because the fake, sometimes AI-generated ads made grand promises to attract older individuals or those who are lonely.

    “Criminals are preying on a vulnerable population,“ said Dr. Renee Burton, a researcher at Infoblox. “They just draw out their money again and again and again.” 


    Cybercrime Gets a Corporate Makeover

    One reason why hackers are able to pull off major crimes like those mentioned above is that, like hip-hop and punk rock, they have gone corporate. Many hackers now take lucrative jobs as freelancers or even full-time employees for large criminal organizations, and we expect more will follow the money next year.

    “Cybercriminals aren’t just guys in hoodies staring at laptops anymore, they’re employees working under the table in cubicles and offices for bosses who don’t have to lift a finger to make money,” said Alan Henry, managing editor for PCMag’s security team. “Cybersecurity has to adapt to that changing reality.”

    Henry noted that viruses and trojans are no longer the preferred methods of attack. Instead, “Ransomware is where it’s at now, and it earns big payouts for the criminals who use it to extort the organizations they target.”

    Thanks to years of lucrative ransomware schemes, modern-day cybercriminals are extremely wealthy, which means they can now hire the best hackers. A significant dip in the global tech job market means it’s not hard for these groups to find employees. There are numerous recent STEM graduates and junior employees seeking employment, and some are paying bills by accepting gigs on dark web job forums.

    Don’t worry, cybercriminals will continue to take legit jobs, too. After all, at RSAC, FBI security experts said they’re still seeing an uptick in North Korean hackers using AI deepfakes to get high-paying remote work jobs based in the US.


    AI: Your Digital Shield—or Secret Weapon for Hackers?

    It’s not all bad news, though. Many companies are using AI to detect scammers or shore up their network’s security defenses. I recently spoke with Aanchal Gupta, chief security officer at Adobe. She told me that in addition to deepfakes and phishing threats, companies should be wary of prompt injection attacks. 


    AI is here…As a security person, my goal is to secure it for our families and friends. It’s not one person’s job. We all have to join forces.
    – Aanchal Gupta, chief security officer at Adobe

    If you’re unfamiliar with this emerging threat, Gupta provided an example of prompt injection affecting the hiring process. She said that some companies use AI to triage resumes and encounter problems when people paste prompts in white text at the bottom of a page. These prompts may override an LLM’s original directive and shuffle the jobseeker’s resume to the top of the pile. It’s a pretty innocuous use of AI prompt injection, but that same technique can be applied to other documents, too. For example, a scammer could paste LLM prompts into emails, so if you use AI to summarize your messages, it could execute a totally different, much more nefarious command, like sending a summary of your messages back to the scammer.

    “AI is here,” said Gupta. “As a security person, my goal is to secure it for our families and friends. It’s not one person’s job. We all have to join forces.”

    In the meantime, companies like Adobe are using AI tools to counter adversarial AI attacks. “Don’t worry,” said Gupta. “We have come a long way from where we were when we started.”

    But as my colleague, PCMag lead security analyst Neil Rubenking, warns, there’s danger in becoming too dependent on AI for defense. “Security companies are focusing strongly on detecting scams and educating consumers, and most enlist AI to help. AI improves the scams, AI bolsters defenses…who wins? Right. AI wins.” 

    PCMag senior report Michael Kan agrees, adding, “Experts often say cybersecurity is a cat-and-mouse game. To me, it looks more like a house overrun with mice, holes in every corner, and a lumbering cat trying to spray AI as pesticide, except the mice have figured out how to spray it back. Game over, man, game over.”


    Lock the Doors Before 2026 Hits

    One thing is certain: Criminals will get better at using AI, so if you want to stay online, you’ll need to learn how to protect yourself. 

    To get started, I recommend bookmarking my cybersecurity checklist and ensuring that you and your family have taken steps to lock down your online accounts. Prevent people from creating deepfakes of you or family members by making your public photos and videos private on social media, and refrain from posting other people or your children without consent. 

    To scrub your digital footprint, try out a personal data removal service. To prevent online scammers from targeting you at home or the office, download identity theft protection software. Finally, for the latest security news and advice, subscribe to PCMag’s security newsletter, SecurityWatch.

    About Our Expert

     

    Latest articles

    Related articles