
    Social Media Becomes a Top Cyber Risk Vector in the AI Era

    Social media platforms became one of the primary vectors for global cyber risks in 2025 as automation and AI scaled fraud and identity theft. This shift represents a transition from email-based attacks to the exploitation of interpersonal trust and automated credential harvesting.

    The increase in these incidents results from the transition of social networks into repositories of digital identity and the persistent gap between risk awareness and preventive action. 

    “The discussion is no longer whether platforms, organizations, and users know the risk, but whether they are willing to modify behaviors, assume real costs, and redefine responsibilities,” says Fernando Guarneros, COO, IQSEC, to Expansión. 

    Siggi Stefnisson, Cyber Safety CTO, Gen, explains that scams have become more dangerous as they integrate into every aspect of digital life. Stefnisson says that these threats prey on human emotions, such as the need to shop on a budget or the hope for political change.

    During 2025, social media ceased to be exclusively a space for interaction and entertainment and consolidated as a high-risk environment for global organizations. This trend was exacerbated by the “mega leak” of June 2025, a compilation of about 16 billion exposed credentials obtained primarily through infostealer malware. The event did not originate from a single breach but from the aggregation of access data stolen over several years from services such as Facebook, Telegram, and GitHub.

    Research conducted by Gen indicates that the company blocked 2.55 billion threats between October and December 2024, a rate of 321 per second. Social engineering attacks accounted for 86% of all blocked threats during this period, and the risk of encountering a threat rose to 27.7% in the final quarter of 2024. These figures suggest that platforms such as WhatsApp, Instagram, Facebook, and TikTok now concentrate a significant proportion of AI-powered phishing, smishing, and highly personalized scams. 
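    The per-second figure is consistent with the quarterly total; a quick arithmetic check (an illustrative sketch, not taken from the report itself):

```python
# Sanity-check the reported blocking rate: 2.55 billion threats
# blocked in Q4 2024 (October through December, 92 days).
SECONDS_PER_DAY = 86_400
days_in_q4 = 31 + 30 + 31  # October + November + December 2024

blocked_threats = 2.55e9
rate_per_second = blocked_threats / (days_in_q4 * SECONDS_PER_DAY)

print(round(rate_per_second))  # prints 321, matching the reported rate
```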

    Modern Attack Patterns

    Modern operations often use phishing farms supported by automation and Robotic Process Automation (RPA), software that executes repetitive tasks at scale without human intervention. These tools allow actors to operate from multiple mobile devices and use compromised legitimate accounts to send personalized smishing campaigns, which often incorporate voice and video deepfakes. These artificial audios simulate family members, executives, or public figures to induce fraudulent investments or pressure victims during emergencies. The use of these technologies has increased the success rate of vishing (voice phishing) by bypassing traditional identity verification methods that rely on auditory recognition.

    The distribution of threats across the social ecosystem is not uniform. Analysis of threat identification by platform indicates that Facebook accounts for 56% of the total, followed by YouTube with 24%, and X with 10%. Additionally, platforms such as Reddit and Instagram each represent 3% of these threats, which suggests that cybercriminals focus resources on platforms with high user density to optimize the efficacy of automated phishing and social engineering campaigns.

    Among messaging applications, Telegram emerged as a high-risk environment. Despite WhatsApp’s larger user base, Telegram recorded six times more cyber threats, as criminals exploit the platform’s enhanced privacy features to hide their activities from authorities. 

    Financial Scams and Regulatory Evolution

    Financial fraud and cross-border payment scams are also evolving through the integration of AI, but also due to significant regulatory shifts. According to a report from the European Banking Authority and the European Central Bank, fraud across various payment methods resulted in a loss of €4.3 billion (US$5.07 billion) in the European Union during 2022. An additional loss of €2 billion (US$2.36 billion) was recorded in the first six months of 2023. Between October and December 2024, mobile phones served as the primary attack vector for these financial scams, which targeted both issuers and recipients in the global market.

    Business email compromise (BEC) remains the dominant threat to corporate entities, although the methods of execution have become more sophisticated. Bridget Pruzin, Head of Compliance and Risk Investigations and Analysis, Convera, says that free AI tools allow fraudsters to increase their attempts exponentially while appearing more convincing. These AI-assisted BEC scams use ultra-polished, personalized content that mimics the tone and writing style of legitimate executives or vendors. 

    The barrier to entry for cybercriminals has lowered due to the availability of online tools for voice cloning and deepfakes. Pruzin says that voice cloning requires about 10 minutes of audio to replicate and manipulate the voice of a target effectively. This technology facilitates high-impact incidents such as those executed by the CryptoCore group, which used deepfake videos to steal more than US$7 million from victims during the US presidential election. These psychological tactics complement the deployment of mobile banking trojans such as DroidBot, ToxicPanda, and BankBot. 

    Regulatory expectations are shifting toward expanded reimbursement requirements for victims of fraud, which may create new risks for financial institutions. Pruzin warns that fraudulent refund schemes could emerge if reimbursement programs are not carefully structured to prevent exploitation by “professional refunders.” 

    The risk to businesses is substantial: a survey from KPMG found that 45% of banks would terminate relationships with customers who are repeat victims of scams because they present too much risk. Consequently, companies must invest in fraud-resistant architecture that includes real-time monitoring and behavioral analytics. As of February 2025, only 28% of middle-market businesses had implemented automated fraud detection systems, which leaves a significant portion of the B2B sector vulnerable to business-shattering consequences.
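    To make “behavioral analytics” concrete, the sketch below combines two simple behavioral signals such systems commonly use: transaction velocity (too many payments in a rolling window) and amount spikes relative to an account’s recent history. All names and thresholds here are hypothetical illustrations, not any vendor’s actual system.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AccountActivity:
    """Rolling window of recent payment events for one account."""
    window_seconds: float = 3600.0
    events: deque = field(default_factory=deque)  # (timestamp, amount) pairs

def is_suspicious(activity: AccountActivity, amount: float, now: float,
                  max_events: int = 5, spike_factor: float = 10.0) -> bool:
    """Flag a payment when the account shows unusual velocity (too many
    recent payments) or an amount far above its recent average."""
    # Evict events that fell outside the rolling window.
    while activity.events and now - activity.events[0][0] > activity.window_seconds:
        activity.events.popleft()
    velocity_alert = len(activity.events) >= max_events
    amounts = [amt for _, amt in activity.events]
    spike_alert = bool(amounts) and amount > spike_factor * (sum(amounts) / len(amounts))
    activity.events.append((now, amount))
    return velocity_alert or spike_alert
```

    A production system would layer many more signals (device fingerprints, geolocation, payee history), but the rolling-window pattern above is the core of most real-time velocity checks.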

    Behavioral Analysis and Demographic Vulnerabilities

    An investigation by Bitdefender in 2025 revealed that one out of seven participants was a direct victim of a scam. The average loss per person was about US$545. The data indicates that digital habits, rather than education levels, determine vulnerability.

    Younger populations are particularly exposed. About 20% of younger users reported being victims, which is more than double the rate of older age groups. This vulnerability results from overexposure rather than lack of knowledge. The frequent sharing of audio, video, routines, and personal data facilitates the creation of credible attacks, including telephone fraud based on voice cloning.

    One of the most concerning aspects of 2025 is the persistence of insecure habits among users and organizations that are aware of the risks. There is a significant gap between the awareness of threats and the implementation of sustained preventive behaviors. Social media has surpassed email as the main channel for fraud because it operates on interpersonal trust. Guarneros says that attackers no longer need to exploit a technical flaw; they only need to manipulate a social relationship or trigger an emotion such as urgency, fear, or curiosity.

    The Gen report also highlights the growth of “scam-yourself” attacks, such as ClickFix and FakeCaptcha. During the fourth quarter of 2024, Gen blocked attacks of this kind targeting 4.2 million individuals, a 130% increase from the previous quarter. These campaigns use psychological manipulation to deceive people into copying and executing malicious code themselves, a method that often leads to financial fraud, malware infections, or account takeovers without triggering automated security alerts.

    In 2026, the risks associated with AI-powered devices and systems will mark the next frontier for cybercrime. The exploitation of digital identity as a currency of exchange will continue unless there are clear consequences for abusing social trust. Organizations are thus urged to transition toward zero-trust architectures and implement robust identity verification protocols that do not rely solely on social media authentication.

     
