The Rise of Fake AI Platforms: Report

May 29, 2025 | Cybersecurity
By Daksh Dhruva, 63SATS Cybertech News Desk

As generative AI tools rapidly enter mainstream use, cybercriminals are exploiting this surge in popularity to fuel a dangerous new wave of attacks. What was once the domain of curious innovators and tech enthusiasts has now become a powerful weapon in the hands of sophisticated threat actors.

One particularly alarming campaign, active since mid-2024, has seen attackers deploy fake AI platforms designed to lure victims into downloading malware disguised as cutting-edge technology. Promoted aggressively through targeted ads on platforms like Facebook and LinkedIn, these campaigns impersonate popular and trusted AI tools such as Luma AI, Kling AI, and Canva Dream Lab.

According to Mandiant Threat Defense, a cybercriminal group known as UNC6032, believed to be based in Vietnam, has been operating this deceptive network since November 2024. Their primary tactic? Preying on users’ interest in AI tools that promise the ability to generate videos from simple text prompts. By launching fraudulent websites that mimic the look and feel of genuine platforms, they entice victims into downloading what appears to be harmless software.

But the trap runs deeper.

These fake AI sites are paired with thousands of malicious social media ads, reaching millions of unsuspecting users across platforms like Facebook and LinkedIn. Mandiant’s researchers — Diana Ion, Rommel Joven, and Yash Gupta — believe the campaign likely extends even further, with similar attacks underway on other social media platforms. As cybercriminals constantly refine their techniques, their ability to evade detection and widen their reach grows.

Malicious Ads and Fake Websites

The mechanics of the attack are both simple and ingenious. UNC6032 operates over 30 malicious domains, all carefully designed to resemble legitimate AI services. Victims are promised free access to AI-generated videos or images, only to be redirected to deceptive sites whose downloads carry an invisible trick: a file might appear to be an .mp4 video, but hidden Unicode characters in its filename disguise a .exe executable, the real vehicle for infection.
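This filename trick can be caught mechanically. The sketch below is illustrative only (the exact Unicode characters UNC6032 used are not specified here): it flags filenames containing invisible or direction-control code points commonly abused to hide a file's true extension.

```python
import unicodedata

# Code points commonly abused to disguise extensions: bidirectional
# overrides (e.g. U+202E RIGHT-TO-LEFT OVERRIDE) and invisible "blank"
# characters that can pad the real extension out of view.
SUSPICIOUS = {
    "\u200b",                                          # zero width space
    "\u200e", "\u200f",                                # LRM / RLM marks
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeds/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",            # bidi isolates
    "\u2800",                                          # braille pattern blank
}

def is_disguised(filename: str) -> bool:
    """Flag filenames containing invisible or direction-control characters."""
    if any(ch in SUSPICIOUS for ch in filename):
        return True
    # Also catch any other format-category (Cf) control characters.
    return any(unicodedata.category(ch) == "Cf" for ch in filename)

print(is_disguised("demo_video.mp4"))                           # False
print(is_disguised("Luma_video.mp4" + "\u2800" * 20 + ".exe"))  # True
```

A mail gateway or download scanner applying a check like this would flag the padded ".exe" masquerading as an .mp4 before the user ever sees it.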

Multi-Stage Malware Attack

Once launched, the attack unfolds in multiple stages. First, a Rust-based dropper called STARKVEIL is executed. This dropper has to run twice to fully activate its payload. On the second execution, it triggers a Python loader known as COILHATCH, which uses sophisticated encryption layers — including RSA, AES, RC4, and XOR — to decrypt and inject malicious DLLs into legitimate system processes.
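To illustrate the layering concept only (this toy is not COILHATCH's actual routine, and it omits the RSA and AES stages), the sketch below stacks two simple ciphers, XOR and RC4, then peels them off in reverse order, which is the general pattern a multi-layer loader follows before injecting its decrypted payload.

```python
# Toy layered decoder: XOR layer wrapped inside an RC4 layer.
# Illustrative of the layering concept, NOT the real malware's code.

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def rc4_layer(data: bytes, key: bytes) -> bytes:
    """Textbook RC4 (KSA + PRGA); encryption and decryption are identical."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

payload = b"example payload"
blob = rc4_layer(xor_layer(payload, b"k1"), b"k2")   # apply layers inner-out
restored = xor_layer(rc4_layer(blob, b"k2"), b"k1")  # peel layers in reverse
print(restored == payload)  # True
```

Each extra layer makes static analysis harder, since scanners see only the outermost ciphertext until every stage has been unwrapped in order.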

The final stage delivers three potent malware payloads:

GRIMPULL: A downloader communicating over the Tor network, designed to fetch and inject additional payloads.

XWORM: A known remote access trojan (RAT) capable of system surveillance and keylogging, exfiltrating stolen data through Telegram channels.

FROSTRIFT: A specialized backdoor targeting crypto wallets, password managers, and browser extensions to steal sensitive user data.

These components are designed for persistence, using AutoRun registry keys and embedding within trusted executables to evade antivirus detection.
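Defenders can hunt for this persistence pattern. The sketch below is a hypothetical heuristic (the paths and function are illustrative, not Mandiant's detection logic): given name-to-command entries read from a Run key (on Windows, e.g. HKCU\Software\Microsoft\Windows\CurrentVersion\Run via the winreg module), it flags entries that launch binaries from user-writable directories.

```python
# Defensive sketch: flag AutoRun entries launching from user-writable
# paths, a common persistence red flag. Heuristic only; real triage
# would also verify signatures, hashes, and parent processes.

USER_WRITABLE = ("\\appdata\\", "\\temp\\", "\\downloads\\", "\\public\\")

def suspicious_autoruns(run_entries: dict[str, str]) -> list[str]:
    """Return names of Run-key entries whose command points into a
    user-writable directory."""
    flagged = []
    for name, command in run_entries.items():
        cmd = command.lower()
        if any(path in cmd for path in USER_WRITABLE):
            flagged.append(name)
    return flagged

entries = {
    "OneDrive": r"C:\Program Files\Microsoft OneDrive\OneDrive.exe",
    "Updater":  r"C:\Users\demo\AppData\Roaming\svc\update.exe",
}
print(suspicious_autoruns(entries))  # ['Updater']
```

Legitimate software usually persists from Program Files or system directories, so a Run-key command rooted in AppData or Temp deserves a closer look.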

AI: The New Social Engineering Bait

While the technical sophistication of the malware is impressive, what truly makes this campaign dangerous is its social engineering finesse. By leveraging the excitement and hype around generative AI, attackers manipulate human trust, curiosity, and the natural desire to explore new technologies.

Even seasoned professionals can be tricked by slick interfaces, polished ads, and seemingly authentic brand names.

What This Means for Businesses

This campaign highlights a growing shift in cyber threats — one where attackers exploit not just system vulnerabilities but also emerging cultural and technological trends. Fake AI platforms are only the beginning. As the generative AI ecosystem expands, so too will the opportunities for cybercriminals to abuse it.

At 63SATS Cybertech, we believe the best defense lies in advanced threat intelligence, behaviour-based detection tools, and continuous cyber awareness training. It’s not enough to guard systems; organizations must also prepare their people to recognize and resist evolving social engineering tactics.