RedLine Stealer malware lurks on unsafe sites offering AI tools

June 15, 2023
Tags: RedLine Stealer, Malware, AI, Artificial Intelligence, BATLOADER, Google Search, ChatGPT, Midjourney

The rise of malicious Google Search ads has taken a vicious turn, targeting users seeking generative AI services such as OpenAI's ChatGPT and Midjourney. These ads are part of a BATLOADER campaign designed to redirect unsuspecting users to attacker-controlled websites harbouring the RedLine Stealer malware.

Research into the campaign highlights why users of these popular AI services are exposed: neither offers a first-party standalone desktop app. Threat actors have seized on this gap, luring AI enthusiasts to malicious web pages that promote counterfeit applications.

The campaign is driven by BATLOADER, a malware loader that spreads through drive-by downloads, targeting unsuspecting users who run search engine queries for particular keywords.

Those users are shown deceptive advertisements that, when clicked, redirect them to fraudulent landing pages hosting malicious software. According to the findings, the BATLOADER installer contains an executable named ChatGPT.exe or midjourney.exe, along with a PowerShell script named Chat.ps1 or Chat-Ready.ps1, respectively.

These components work together to download and execute the RedLine Stealer malware from a remote server.

Once installation is complete, the binary uses Microsoft Edge WebView2 to load the legitimate ChatGPT or Midjourney URL in a discreet pop-up window, so targets see the service they expected and no suspicion is raised.

Recent threat analysis has shed further light on these tactics, with adversaries using ChatGPT- and Midjourney-themed lures in malicious ads to deliver the RedLine Stealer malware.

Researchers have observed a staggering 910% surge in monthly domain registrations associated with ChatGPT from November 2022 to early April 2023, highlighting the extent of this concerning trend.

This is not the first time the operators behind BATLOADER have capitalised on widespread interest in AI to distribute malware. As early as March 2023, researchers documented attacks that used ChatGPT lures to deploy other malware strains, including Vidar Stealer and Ursnif.

Security experts have also warned of a rise in fraudulent services imitating ChatGPT that harvest users' credit card details for payment fraud and steal victims' Facebook account credentials through counterfeit chatbot browser extensions.

As the popularity of AI tools continues to surge, threat actors are expected to keep riding this wave, launching phishing and scam campaigns that distribute malware and deceptive applications.

This report underscores the ongoing risk of phishing and scam campaigns that exploit the AI craze to distribute malware and deceive unsuspecting users. Heightened vigilance and stronger security measures across the AI ecosystem are required to combat these threats.
