Picture imperfect: The risk of malicious JPGs

How cybercriminals exploit everyday images to deliver hidden threats

Takeaways

  • Cybercriminals are now leveraging everyday image formats like JPGs to deliver hidden malware, making it harder for users to detect threats.
  • Malware delivery methods are evolving, with attackers using AI and exploiting SaaS integrations and authentication tokens, according to recent threat reports.
  • Malicious images, such as JPGs, disguise harmful data within seemingly safe files, bypassing traditional security focused on emails and links.
  • A recent attack chain delivered its payload through a modified, but otherwise legitimate, MSI image, highlighting how cybercriminals continue to adapt images for compromise.

Malware is a moving target. Attackers are continually changing the way they infiltrate systems, compromise security, and exfiltrate data. They’re also using AI to automate high-velocity attacks, leveraging over-privileged SaaS integrations to expand their impact, and harvesting tokens to neutralize multifactor authentication (MFA).

According to the SANS Internet Storm Center (ISC), meanwhile, there’s a new attack type coming into focus: malicious JPGs (or JPEGs). Here’s what companies need to know about how these problematic pictures work and what steps they can take to stay safe.

A brief history of image issues

Hiding malware inside images isn’t a new technique. In 2015, the Stegoloader malware used digital steganography to hide malicious code within a PNG image. In many respects, the vector was more interesting than impactful, since it required users to download a PNG image from a legitimate website. Given the relatively low number of users downloading PNGs and the efforts of site owners to secure published content, Stegoloader didn’t exactly take the malware world by storm.

It did, however, offer solid proof of concept. Not only was picture-based malware possible; the vector was also unexpected. Users were busy looking out for more typical malware delivery pipelines, such as infected email attachments or phishing links. Pictures didn’t make the list, making them a clever pathway for compromise.

Malicious JPGs — when what you see isn’t what you get

For users, images often seem like a safe bet. This is because most companies now offer regular security training to staff — training that prioritizes emails, files and (more recently) AI-enabled social engineering. Deepfakes are also a concern, but the risk focuses on the image itself, not what the picture might conceal. 

JPGs, however, are made of the same stuff as any other content: data. When arranged correctly and accessed using the right application, this data forms an image. The data itself, however, is subject to modification in the same way as any other source. For attackers, this offers an opportunity to ensure that what users see isn’t what they get.
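
To make this concrete, here’s a minimal Python sketch (defensive, not offensive) that flags one common smuggling trick: data appended after a JPEG’s End-Of-Image marker, which most viewers silently ignore. The file name is hypothetical, and some legitimate files carry trailing metadata, so treat a hit as a signal for further review rather than proof of compromise.

```python
# Heuristic check for data hidden after a JPEG's End-Of-Image (EOI) marker.
# Viewers stop rendering at EOI (0xFF 0xD9), so appended bytes stay
# invisible to users while remaining available to a downloader script.
from pathlib import Path

EOI = b"\xff\xd9"  # JPEG End-Of-Image marker

def trailing_bytes(path: str) -> bytes:
    """Return any data found after the last EOI marker (heuristic)."""
    data = Path(path).read_bytes()
    end = data.rfind(EOI)
    if end == -1:
        raise ValueError(f"{path}: no EOI marker, not a complete JPEG")
    return data[end + len(EOI):]

extra = trailing_bytes("suspect.jpg")  # hypothetical file name
if extra:
    print(f"Warning: {len(extra)} bytes appended after EOI")
```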

The malicious MSI image

Over the past two months, security professionals have been tracking the movement of a malicious MSI image. First found at the beginning of February 2026, the image was part of a malicious email attachment. At first glance, the attachment looked like a script for creating a Chrome Injector, but infosec expert Xavier Mertens noticed a .bat file that appeared to come from a GitHub fork.

Here’s how the attack played out:

  1. Once the regular script completed, the batch file jumped to the :EndScript label, which was followed by a :show_msgbox call.
  2. This led to a block of Base64-encoded data, obfuscated with junk characters and executed through PowerShell.
  3. The PowerShell script then fetched an image payload, which was a legitimate picture from MSI that had been modified.
  4. This modification pointed to another payload: a .NET program that implemented persistence through a scheduled task and used Telegram as a command-and-control (C2) channel.

Ultimately, Mertens determined that the image malware was carrying the XWorm trojan. Several weeks later, another expert found the same technique, this time with a JScript rather than a batch downloader.
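
For defenders, the obfuscation step in that chain is worth understanding. Below is a minimal analyst-side sketch, in Python, of recovering a Base64 payload that has been padded with junk characters; the junk alphabet and sample string are invented for illustration, since real samples vary and analysts typically inspect the script itself to learn what to strip.

```python
# Analyst-side sketch: recover a Base64 payload obscured by junk characters.
import base64
import re

def deobfuscate(blob: str) -> bytes:
    """Keep only valid Base64 characters, then decode."""
    cleaned = re.sub(r"[^A-Za-z0-9+/=]", "", blob)
    # Re-pad to a multiple of 4 in case junk removal broke alignment
    cleaned += "=" * (-len(cleaned) % 4)
    return base64.b64decode(cleaned)

# Made-up example with '#' and '~' as junk characters:
obfuscated = "SGV#sbG8#sIH~dvcm#xkIQ=="
print(deobfuscate(obfuscated))  # b'Hello, world!'
```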

The downscaling disconnect

Another growing image risk is tied (unsurprisingly) to AI: downscaling.

When images are used as part of AI queries, they’re often downscaled to reduce size. As noted by TechRadar, however, malicious actors can exploit interpolation methods such as nearest neighbor, bilinear or bicubic resampling, crafting images whose hidden content only appears when the image is downscaled.

AI tools read these artifacts as user inputs and attempt to carry out the hidden instructions. If companies are using public AI solutions or insecure internal tools, these compromised images could open the door for attackers to exfiltrate data or deploy advanced persistent threats (APTs).
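
One way to see what a model will actually “read” is to reproduce the downscaling step yourself. The sketch below, using the Pillow library, saves one preview per resampling method for human inspection; the target size and file names are assumptions for illustration, since real AI pipelines vary in their preprocessing.

```python
# Reproduce common downscaling steps so a human can review the results
# for hidden content before an image reaches an AI pipeline.
from PIL import Image

FILTERS = {
    "nearest": Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic": Image.Resampling.BICUBIC,
}

def preview_downscales(path: str, size=(448, 448)) -> None:
    """Save one downscaled copy per resampling method for inspection."""
    img = Image.open(path)
    for name, method in FILTERS.items():
        img.resize(size, resample=method).save(f"preview_{name}.png")

preview_downscales("upload.jpg")  # hypothetical user upload
```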

How to prevent picture problems

Limiting the risk of malicious JPGs starts with familiar advice: Never open unknown attachments or images. While next-gen firewalls excel at catching potential threats, they’re not infallible. By opting to delete and report suspicious emails rather than taking the risk of opening suspicious image attachments, staff can sharply reduce the risk.

It’s worth noting that this is a team effort. Companies must create a report-first infosec culture that encourages staff to share their concerns. If employees are told to simply ignore possible threats or are reprimanded for wasting time, this removes a key layer of security and increases the risk of unexpected attacks. 

When it comes to AI JPG downscaling, meanwhile, three actions can help frustrate malicious efforts. First, companies should restrict AI input dimensions, which reduces the need for downscaling. Next, users should always preview downscaled results for visual artifacts. This is another case for the report-first culture mentioned above — if staff aren’t sure, they should have a clear pathway for image reporting and evaluation. 

Finally, infosec teams should ensure that any internal AI models require explicit confirmation for sensitive tool calls. For example, if a user asks an AI model to modify a compromised image, hidden instructions in that image might direct the model to call financial or HR tools and access sensitive data. With explicit confirmation rules in place, the model will not proceed without direct approval from users or IT professionals.
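
Here is what such a confirmation gate might look like in practice: a minimal Python sketch that interposes a human approval step before sensitive tool calls. The tool names and the console prompt are illustrative; a production gate would hook into the organization’s agent framework and approval workflow.

```python
# Minimal explicit-confirmation gate for AI tool calls.
# SENSITIVE_TOOLS and the approval mechanism are illustrative assumptions.
SENSITIVE_TOOLS = {"hr_lookup", "finance_export", "file_delete"}

def gated_call(tool_name: str, tool_fn, *args, **kwargs):
    """Run a tool only after a human approves sensitive invocations."""
    if tool_name in SENSITIVE_TOOLS:
        answer = input(f"Model wants to call '{tool_name}'. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            raise PermissionError(f"Tool call '{tool_name}' denied by operator")
    return tool_fn(*args, **kwargs)
```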

Image protection: Don’t “pic” and choose

Malicious JPGs aren’t commonplace, but they are concerning. Staff don’t typically consider them a threat in attachments, and they’re not usually flagged as worrisome when used in AI prompts.

To ensure picture protection and reduce system risk, it’s not about picking and choosing. Instead, companies must create comprehensive strategies that include regularly refreshed security training along with consistent policies for issue reporting, image downscaling and AI tool calling. 

 
