According to a detailed report from Google’s Threat Intelligence Group (GTIG), in 2025, threat actors began embedding AI tools directly into their malware, enabling malicious programs to adapt and conceal themselves while actively running on victim systems.
This approach invokes large language models (LLMs) mid-execution, giving attackers a new degree of flexibility and evasion.
The report catalogs use of this method by state-sponsored actors from Russia, China, North Korea, and Iran. These groups are experimenting with self-modifying code, pairing AI with malicious scripts, and turning to underground markets for tools built to support phishing, malware development, and data theft.
Why It Matters: Hackers are now incorporating AI in ways that allow malware to change its behavior on the fly. Generative AI in the hands of malicious actors introduces risks beyond those posed by traditional, static malware and presents growing challenges for defenders.
- Using AI to Conceal Code: Google researchers discovered a VBScript-based malware dropper, PROMPTFLUX, that uses the Gemini API to request rewritten versions of its own code. This happens while the malware is already active: Gemini is given instructions on how to make the code harder to detect, and the rewritten code is saved with updated masking techniques so it can persist on infected systems. PROMPTFLUX is still in development, but it shows how malware authors are beginning to build software that can evolve continuously in response to defensive tools.
- First Recorded Use of LLMs in Live Operations: In a real-world incident involving Russian threat group APT28, Google identified malware known as PROMPTSTEAL being used in attacks on Ukrainian targets. Unlike traditional malware that comes preloaded with commands, PROMPTSTEAL prompts the Qwen2.5-Coder-32B-Instruct model, hosted on Hugging Face, to generate system commands on demand. Those commands are then executed in the background without the user's knowledge, collecting documents, system data, and other files before sending them back to attacker-controlled servers.
- Role-Playing Prompts to Trick AI Guardrails: Adversaries have been observed crafting prompts that frame their requests as harmless cybersecurity research to bypass safety features in models like Gemini. For example, one actor posed as a student in a Capture-The-Flag exercise and, after being initially refused, reworded the request in more academic language and received answers that helped build exploit tools. These role-playing techniques have proven effective at circumventing built-in safeguards against AI misuse.
- Cybercriminal Marketplaces Now Offer AI Tools to Anyone Willing to Pay: GTIG identified a growing trend of AI-powered tools for hacking and data theft being sold on underground forums in multiple languages. These products offer capabilities such as malware generation, phishing email creation, and deepfake production, and sellers advertise them much as legitimate AI developers do, promoting ease of use and improved results.
- AI Used Across All Stages of Intrusions by State-Backed Groups: The report details how government-aligned groups from countries including North Korea, China, and Iran are using generative AI throughout every stage of their cyber operations, from creating deceptive content to building and managing command-and-control servers to exfiltrating stolen data. In several cases, actors used Gemini to research technical topics, debug custom malware code, and draft social engineering messages in foreign languages to extend a campaign's reach.
Go Deeper -> GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools – Google Cloud
New malware uses AI to adapt during attacks, report finds – The Record
Trusted insights for technology leaders
Our readers are CIOs, CTOs, and senior IT executives who rely on The National CIO Review for smart, curated takes on the trends shaping the enterprise, from GenAI to cybersecurity and beyond.
Subscribe to our four-times-a-week newsletter to keep up with the insights that matter.