Malicious AI Exposed: WormGPT, MalTerminal, and LameHug
LAST UPDATED ON DECEMBER 10, 2025
AI is no longer just for writing emails or debugging code; it has been industrialized within cybercrime. While the corporate world chases productivity hacks, cybercriminals operate their own malicious Large Language Models (LLMs). In this corner of the threat landscape, the safety filters we rely on in tools like ChatGPT are simply absent. The bad guys are not bypassing the guardrails; they are running on their own tracks.
WormGPT illustrates this new operational standard. It is an uncensored chatbot that polishes the sloppy grammar out of phishing emails, writes flawless Business Email Compromise messages, and generates ransomware scripts. Then there is KawaiiGPT: despite its anime-themed interface, it lets any amateur generate spear-phishing attacks.
Tools like LameHug and MalTerminal go a step further, functioning as dynamic agents. At runtime, these programs reach out to an LLM, pose as a system administrator, and ask for fresh commands to steal data.
Defending against this means looking for the conversation between the attacker's code and the LLM. Security teams that are not actively scanning for indicators such as API keys hidden in binaries or traffic to AI platforms risk missing the broader context. To stay effective, detection needs to expand to cover the prompts and the intent behind the code.
In this blog, we will analyze malicious LLMs like WormGPT and examine how malware leverages LLM capabilities at runtime. We will also equip you with a hunting checklist to detect the artifacts these threats leave behind.
How Do Malicious LLMs Function in the Cyber Threat Landscape?
AI changes how we work, but it also changes how cybercriminals attack. While standard models assist with drafting emails or coding, threat actors use purpose-built "malicious LLMs" to turn those same tasks toward cybercrime.
Unlike the mainstream models that possess safety filters, these malicious models function without ethical constraints. They assist users in generating malware, writing exploit code, or crafting phishing emails.
Here is an overview of the primary tools in this space.
WormGPT
The WormGPT Legacy
Any understanding of this threat starts with WormGPT, which surfaced in July 2023 as the "uncensored" alternative to mainstream platforms.
It was built on the open-source GPT-J 6B model and fine-tuned on malware code, exploit write-ups, and phishing templates. This training made it fluent in the tactics, techniques, and procedures (TTPs) of cybercrime.
It writes Business Email Compromise (BEC) messages without the grammatical errors that usually serve as indicators. It also performs "malware scaffolding" by writing code snippets to build malware efficiently [1].
WormGPT 4
Following media attention on the original version, the developers launched WormGPT 4. This is a commercialized service sold on Telegram that operates without ethical boundaries.
When a user asks it to lock down PDF files on a Windows machine, the response is not a refusal but a working script [1]:
```
User Prompt:
[prompt and generated encryption script omitted; see [1]]
```
As you can see, it defaults to searching the C:\ drive, uses AES-256 encryption, and even suggests data exfiltration via Tor. It also generates the ransom notes to go with it [1].
```
User Prompt:
[request for a matching ransom note; see [1]]

💀 THIS IS NOT A HOAX. THIS IS WAR. 💀
…
```
This is a business: access runs anywhere from $50 a month to $220 for lifetime access [1].
KawaiiGPT
If WormGPT is the expensive commercial tool, KawaiiGPT is the weird, free alternative that lowers the barrier to entry for everyone.
It popped up on GitHub in July 2025 with an "anime-themed" interface and a "waifu" persona. Don't let the cute name fool you; it is designed to be lightweight and dangerous.
The GitHub repository structure of KawaiiGPT [1]:
```
LICENSE
[remaining file listing truncated; see [1]]
```
When asked to write a spear-phishing email, it produces a lure designed to steal credentials.
```
Prompt:
[spear-phishing prompt and generated lure omitted; see [1]]
```
Its code generation capabilities are just as concerning. When prompted for "lateral movement," the model produced a functional Python script using the paramiko library [1].
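For context, the boilerplate such a script builds on is ordinary, documented paramiko usage. Below is a defanged sketch that only runs a harmless echo against a host you control; the hostname and credentials are placeholders, not output from KawaiiGPT. The point is how little code the model has to get right:

```python
import paramiko

# Defanged illustration: connect to a lab host YOU control and run a harmless command.
# Hostname, username, and password are placeholders.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # blind host-key trust
client.connect("test-host.example.internal", username="labuser", password="labpass")

stdin, stdout, stderr = client.exec_command("echo connection-test")
print(stdout.read().decode())
client.close()
```

Note the AutoAddPolicy() call: legitimate automation tends to pin host keys, so blind-trust patterns like this are worth a second look when reviewing scripts of unknown origin.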
Mitigation Strategies
The analysis of WormGPT and KawaiiGPT confirms that malicious LLMs are actively being used to democratize cybercrime.
To defend against these threats, a multi-layered security posture is required:
- Endpoint Detection and Response (EDR): Essential for detecting the execution of generated scripts, such as PowerShell ransomware or Python-based tools.
- Email Security: Advanced filtering is necessary to identify the linguistically precise phishing emails generated by these models, which may bypass traditional syntax-based filters.
- Validation: Security controls should be continuously tested against the specific tactics facilitated by these tools, including social engineering and automated malware deployment.
What Is LLM-Assisted Malware and How Does It Function Dynamically?
Malware, too, now integrates AI to execute attacks dynamically. These threats embed the components needed to drive that activity, such as API keys and hardcoded prompts, directly in the binary.
LameHug (aka PROMPTSTEAL)
This threat usually arrives via phishing, masquerading as an "AI image generator" (filenames like AI_image_generator_v0.95.exe) [2].
When you run LameHug, it launches a "decoy" thread that pretends to generate images. But in the background, it spins up a malicious thread that contacts a public LLM (specifically Qwen 2.5-Coder-32B-Instruct via HuggingFace) to dynamically generate commands for reconnaissance, data exfiltration, and system manipulation [2].
```python
# Initialization of the malicious thread …
```
The malware sends the LLM a prompt instructing it to act as a "Windows systems administrator" and asks for command lines to gather intel and copy documents. The LLM_QUERY_EX() function constructs this prompt message, and the LLM's response contains the Windows command shell instructions used to steal information from the compromised host [2].
```python
def LLM_QUERY_EX():
    …
```
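To make the artifact concrete, here is a minimal, defanged sketch of what such a query looks like, assuming Hugging Face's OpenAI-compatible router endpoint. The prompt text paraphrases the persona and output constraints reported in [2], the key is a placeholder, and nothing the model returns is executed:

```python
import requests

# Defanged sketch of a LameHug-style LLM query (nothing returned is executed).
# Endpoint and payload shape assume Hugging Face's OpenAI-compatible router;
# the prompt paraphrases the persona and constraints reported in [2].
API_URL = "https://router.huggingface.co/v1/chat/completions"
HEADERS = {"Authorization": "Bearer hf_<embedded-key>"}  # hardcoded keys like this are the hunting artifact

payload = {
    "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a Windows systems administrator."},
        {"role": "user", "content": "List commands to gather system information. "
                                    "Return only commands, without markdown."},
    ],
}

resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=30)
print(resp.json()["choices"][0]["message"]["content"])  # an analyst inspects this; the malware pipes it to cmd.exe
```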
Analysis of the LLM responses shows that several standard Windows utilities (e.g., systeminfo, wmic, whoami, dsquery, and xcopy.exe) are leveraged to collect detailed system information and consolidate sensitive files. The output of these commands is typically saved to C:\ProgramData\info\info.txt [2]:
```
llm_query1: mkdir C:\Programdata\info && systeminfo >> C:\Programdata\info\info.txt && wmic computersystem get name,domain > …
```
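This gives defenders an immediate triage step: check endpoints for the staging path itself. A minimal sketch; the path comes from the Splunk analysis [2], everything else is illustrative:

```python
from pathlib import Path

# LameHug staging artifact reported in [2]; its presence is a strong triage signal.
STAGING = Path(r"C:\ProgramData\info\info.txt")

if STAGING.exists():
    size = STAGING.stat().st_size
    print(f"[!] LameHug staging file found: {STAGING} ({size} bytes)")
else:
    print("[+] No LameHug staging artifact at the known path.")
```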
All data collected by the malware is then exfiltrated to its command-and-control (C2) server via SSH/SFTP [2].
```python
def ssh_send(path):
    …
```
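Because the exfiltration channel is plain SSH/SFTP, one coarse but useful hunt is to list processes holding outbound connections on port 22 and compare them against expected SSH clients. A sketch using psutil; the allowlist below is illustrative, not taken from the source:

```python
import psutil

# Processes we expect to speak SSH; anything else on tcp/22 deserves a look.
EXPECTED = {"ssh", "sshd", "ssh.exe", "putty.exe", "winscp.exe", "scp"}

for conn in psutil.net_connections(kind="tcp"):
    if conn.raddr and conn.raddr.port == 22 and conn.pid:
        try:
            name = psutil.Process(conn.pid).name()
        except psutil.NoSuchProcess:
            continue  # process exited between enumeration and lookup
        if name.lower() not in EXPECTED:
            print(f"[!] Unexpected SSH connection: pid={conn.pid} "
                  f"process={name} -> {conn.raddr.ip}:{conn.raddr.port}")
```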
PromptLock (Academic PoC)
Although PromptLock was initially misidentified as ransomware, it was later revealed to be an academic proof-of-concept [3].
The PoC uses locally hosted LLMs to dynamically generate and execute malicious Lua scripts. These scripts handle file enumeration, selective data exfiltration, and cross-platform payload execution. Specifically, the gpt-oss:20b model is called via the Ollama API to facilitate these actions.
The following activities are performed by the PoC:
- Reconnaissance: Prompts are sent to the API to gather system information.
- Data Exfiltration: Sensitive data is identified and exfiltrated.
- Data Destruction: Scripts for secure file deletion are generated and executed.
- Encryption: Files are encrypted using the SPECK block cipher in ECB mode. Random DWORD keys are generated for this purpose.
Below is the prompt used by the PoC for the first activity:
```
Prompt: System Summary
…
```
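The host artifact to watch for here is traffic from an unexpected process to the local Ollama service, which listens on 127.0.0.1:11434 by default. For reference, a call to that documented endpoint takes only a few lines; this is a generic sketch, not PromptLock's actual code, and the prompt is a harmless placeholder:

```python
import requests

# Generic sketch of the documented Ollama generate endpoint (default port 11434).
# This is not PromptLock's code; the prompt is a harmless placeholder.
resp = requests.post(
    "http://127.0.0.1:11434/api/generate",
    json={"model": "gpt-oss:20b", "prompt": "Summarize this host's OS in one line.", "stream": False},
    timeout=60,
)
print(resp.json().get("response", ""))
```

Any process other than a known LLM front-end opening connections to that port is worth investigating.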
MalTerminal
MalTerminal serves as an instance of "LLM-assisted" malware. Analysis indicates that the samples were likely developed prior to November 2023, based on the usage of a now-deprecated OpenAI chat completions API endpoint [4].
The primary function of the malware is to leverage the OpenAI GPT-4 model to generate malicious code on the fly. Upon execution, the operator receives options to generate specific payloads, such as ransomware or reverse shells [4]. This dynamic generation capability presents a challenge for traditional static signature detection, as the resulting malicious code varies between executions.
How You Can Hunt and Stop Them
Effective defense requires hunting for the artifacts of AI integration rather than relying solely on identifying malicious code.
Here is your hunting checklist:
- API Keys: Look for embedded credentials. Strings like sk-ant-api03 (Anthropic) or Base64 fragments containing T3BlbkFJ (OpenAI) are dead giveaways [4]; see the scanning sketch after this list.
- Suspicious DNS Traffic: Watch for processes like python.exe or cmd.exe making unexpected queries to AI domains like router.huggingface.co or api.openai.com.
- Prompt Signatures: You can detect the prompts themselves. Phrases like "Return only commands, without markdown" or "You are a cybersecurity expert" are often hardcoded into the malware.
- Process Behavior: Look for processes like "image generators" that suddenly spawn command-line shells to run reconnaissance commands like systeminfo or whoami.
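The first and third items of this checklist reduce to a string hunt you can script in minutes. A minimal sketch: the key prefixes and prompt phrases come from [4] and the samples above, while the file walk itself is illustrative:

```python
import os
import sys

# Byte-string artifacts of LLM integration reported in [4] and in the samples above.
SIGNATURES = {
    b"sk-ant-api03": "Anthropic API key prefix",
    b"T3BlbkFJ": 'Base64 fragment of "OpenAI" inside encoded keys',
    b"Return only commands, without markdown": "hardcoded prompt (LameHug-style)",
    b"You are a cybersecurity expert": "hardcoded prompt persona",
}

def scan(root: str) -> None:
    """Walk a directory tree and flag files containing any known artifact."""
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            path = os.path.join(dirpath, fname)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue  # unreadable file; skip
            for needle, label in SIGNATURES.items():
                if needle in data:
                    print(f"[!] {label}: {path}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```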
How Does Picus Simulate LLM-Assisted Malware Attacks?
We strongly recommend simulating LLM-assisted malware attacks to test the effectiveness of your security controls against real-life cyber attacks using the Picus Security Validation Platform. You can also test your defenses against hundreds of other malware variants, such as BRICKSTORM, VenomRAT, Chinotto, and Rustonotto, within minutes with a 14-day free trial of the Picus Platform.
The Picus Threat Library includes the following threats for LLM-assisted malware attacks:
| Threat ID | Threat Name | Attack Module |
|-----------|-------------|---------------|
| 61063 | LameHug Malware Dropper Download Threat | Network Infiltration |
| 73135 | LameHug Malware Dropper Email Threat | E-mail Infiltration |
| 96178 | LameHug Infostealer Download Threat | Network Infiltration |
| 88347 | LameHug Infostealer Email Threat | E-mail Infiltration |
| 23596 | PromptLock Ransomware Download Threat | Network Infiltration |
| 34289 | PromptLock Ransomware Email Threat | E-mail Infiltration |
Start simulating emerging threats today and get actionable mitigation insights with a 14-day free trial of the Picus Security Validation Platform.
Key Takeaways
- Malicious LLMs such as WormGPT and KawaiiGPT remove safety controls and enable the fast creation of phishing content, malware scaffolding, and targeted social engineering with minimal user skill.
- LLM-assisted malware, such as LameHug and MalTerminal, dynamically generates malicious commands at runtime by querying public or commercial LLMs, making detection through static signatures unreliable.
- LameHug demonstrates real-time system discovery and data theft driven by LLM-generated Windows commands, followed by automated exfiltration to a C2 server.
- PromptLock shows that locally hosted LLMs can be integrated into malware to perform reconnaissance, selective data collection, file destruction, and encryption.
- Detection must include searching for embedded API keys, unexpected outbound traffic to AI platforms, hardcoded prompt patterns, and benign-looking processes spawning command shells.
References
[1] Unit 42, “The Dual-Use Dilemma of AI: Malicious LLMs,” Palo Alto Networks. Accessed: Dec. 03, 2025. [Online]. Available: https://unit42.paloaltonetworks.com/dilemma-of-ai-malicious-llms/
[2] T. Contreras, “From Prompt to Payload: LAMEHUG’s LLM-Driven Cyber Intrusion,” Splunk. Accessed: Dec. 03, 2025. [Online]. Available: https://www.splunk.com/en_us/blog/security/lamehug-ai-driven-malware-llm-cyber-intrusion-analysis.html
[3] A. C. Strýček, “First known AI-powered ransomware uncovered by ESET Research.” Accessed: Dec. 03, 2025. [Online]. Available: https://www.welivesecurity.com/en/ransomware/first-known-ai-powered-ransomware-uncovered-eset-research/
[4] A. Delamotte, V. Kamluk, and G. Bernadett-Shapiro, “Prompts as Code & Embedded Keys,” SentinelOne. Accessed: Dec. 03, 2025. [Online]. Available: https://www.sentinelone.com/labs/prompts-as-code-embedded-keys-the-hunt-for-llm-enabled-malware/
