    LLM-Powered Attacks Advance Android Malware Evasion, Achieving 97% Detection Bypass

    The increasing sophistication of Android malware presents a constant challenge to detection systems, prompting researchers to probe the vulnerabilities of machine learning-based defenses. Tianwei Lan and Farid Naït-Abdesselam of Université Paris Cité in France, together with their colleagues, now demonstrate a powerful new method for circumventing these systems by leveraging the capabilities of large language models. Their work introduces LAMLAD, a framework that generates subtle yet effective alterations to malware characteristics, evading detection while preserving malicious functionality. LAMLAD achieves remarkably high success rates against real-world malware detectors, and the team also proposes a defense strategy that substantially improves resilience against this new class of threat, a crucial step towards more robust mobile security.

    LLMs Evade Android Malware Detection Successfully

    Scientists have developed LAMLAD, a novel framework that leverages the power of large language models (LLMs) to bypass machine learning-based Android malware detectors. This work addresses the growing threat of sophisticated Android malware and the vulnerabilities of current detection systems to adversarial attacks, where malware is subtly altered to evade identification.

    The core of LAMLAD is a dual-agent architecture, comprising an LLM manipulator and an LLM analyzer, working in concert to craft evasive malware samples. The LLM manipulator generates realistic, functionality-preserving changes to malware features, while the LLM analyzer assesses these modifications and guides the process towards successful evasion, ensuring the altered malware remains operational. To enhance efficiency and contextual understanding, the team integrated retrieval-augmented generation (RAG) into the LLM pipeline, allowing the system to draw upon relevant information during the attack process.
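
    The paper's prompts and orchestration code are not reproduced here, so the following is a minimal sketch of the dual-agent idea only: a manipulator proposes feature-level edits, the target detector is queried, and an analyzer turns the verdict into feedback for the next round. The `chat` callable, the JSON edit format, and the `detector` interface are hypothetical stand-ins, not LAMLAD's actual implementation.

```python
import json

def apply_edits(features: dict, proposal_json: str) -> dict:
    """Merge the manipulator's proposed additions (a JSON object of lists,
    e.g. {"permissions": [...], "api_calls": [...]}) into the feature set."""
    edits = json.loads(proposal_json)
    merged = {key: list(values) for key, values in features.items()}
    for key, added in edits.items():
        merged[key] = sorted(set(merged.get(key, [])) | set(added))
    return merged

def evade(features: dict, detector, chat, max_attempts: int = 10):
    """Dual-agent evasion loop (sketch).

    detector: callable mapping a feature dict to True (malware) / False (benign).
    chat:     callable (system_prompt, user_prompt) -> str, backed by any LLM API.
    """
    candidate = dict(features)
    feedback = ""
    for _ in range(max_attempts):
        # Manipulator agent: propose small, functionality-preserving additions
        # that make the feature profile look benign.
        proposal = chat(
            "You edit Android app feature sets without breaking app behavior.",
            f"Features: {json.dumps(candidate)}\n"
            f"Analyzer feedback: {feedback}\n"
            "Return benign-looking feature additions as a JSON object of lists.",
        )
        candidate = apply_edits(candidate, proposal)

        if not detector(candidate):      # detector no longer flags the sample
            return candidate             # evasion succeeded, behavior preserved

        # Analyzer agent: diagnose why the sample is still flagged and steer
        # the next round of manipulations.
        feedback = chat(
            "You explain why an Android feature set is still detected as malware.",
            f"The detector still flags: {json.dumps(candidate)}. "
            "Give the likely reason and what to change next.",
        )
    return None                          # attempt budget exhausted
```

    In the paper's framing, the analyzer also checks that modifications stay realistic and functionality-preserving; in this sketch that role is reduced to prompt wording for brevity.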

    Experiments targeted detectors built on commonly used malware analysis techniques, enabling stealthy attacks against widely deployed systems. The results show that LAMLAD achieves a high attack success rate while needing only a few attempts to bypass detection, underlining its practical effectiveness. Recognizing the potential impact of this capability, the team also proposed and evaluated an adversarial training-based defense, which markedly reduces the attack success rate and substantially improves the robustness of malware classifiers.
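
    As a rough illustration of the defense side, the sketch below augments a detector's training set with adversarially perturbed malware feature vectors, which keep their malicious label, and then retrains the classifier. The random-forest model and hyperparameters are illustrative choices, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def adversarial_retrain(X_train, y_train, X_adv):
    """Adversarial training sketch: X_adv holds feature vectors of evasive
    samples (e.g. produced by an attack like LAMLAD). They are still malware,
    so they are appended with the malicious label before retraining."""
    X_aug = np.vstack([X_train, X_adv])
    y_aug = np.concatenate([y_train, np.ones(len(X_adv), dtype=int)])  # 1 = malware

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_aug, y_aug)   # classifier now sees attack-style perturbations
    return clf
```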

    LLM-Powered Evasion of Android Malware Detection

    This research presents LAMLAD, a new framework for evaluating the vulnerabilities of machine learning-based Android malware detection systems, and demonstrates its ability to bypass these systems using the capabilities of large language models. The team developed a dual-agent system, where one language model creates realistic alterations to malware features and another guides the process to successfully evade detection, all while preserving the malware’s core functionality.

    Integrating retrieval-augmented generation further improves the efficiency and contextual awareness of the system by letting it draw on relevant examples and prior knowledge during the attack. Evaluations against established malware detectors reveal that LAMLAD achieves a high success rate in adversarial attacks, confirming its practical effectiveness. Importantly, the researchers also investigated countermeasures, demonstrating that training models on examples generated by LAMLAD significantly improves robustness against this type of attack.
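
    The retrieval step can be pictured as follows: records of prior successful manipulations are indexed, and the ones most similar to the current sample are pulled into the prompt as in-context examples. TF-IDF similarity and the `EvasionRetriever` class are illustrative stand-ins; the paper's actual retrieval corpus and embedding method may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class EvasionRetriever:
    """Index textual records of past successful evasions and retrieve the
    closest ones for a new sample (a minimal RAG retrieval sketch)."""

    def __init__(self, past_cases: list[str]):
        self.past_cases = past_cases
        self.vectorizer = TfidfVectorizer()
        self.index = self.vectorizer.fit_transform(past_cases)

    def retrieve(self, current_sample: str, k: int = 3) -> list[str]:
        query = self.vectorizer.transform([current_sample])
        scores = cosine_similarity(query, self.index)[0]
        top = scores.argsort()[::-1][:k]             # highest-similarity cases first
        return [self.past_cases[i] for i in top]     # prepend these to the LLM prompt
```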

    👉 More information
    🗞 LLM-Driven Feature-Level Adversarial Attacks on Android Malware Detectors
    🧠 ArXiv: https://arxiv.org/abs/2512.21404

     
