{"id":3437,"date":"2025-11-06T10:02:48","date_gmt":"2025-11-06T10:02:48","guid":{"rendered":"http:\/\/ft365.org\/index.php\/2025\/11\/06\/ai-enabled-malware-now-actively-deployed-says-google\/"},"modified":"2025-11-06T10:02:48","modified_gmt":"2025-11-06T10:02:48","slug":"ai-enabled-malware-now-actively-deployed-says-google","status":"publish","type":"post","link":"http:\/\/ft365.org\/index.php\/2025\/11\/06\/ai-enabled-malware-now-actively-deployed-says-google\/","title":{"rendered":"AI-Enabled Malware Now Actively Deployed, Says Google"},"content":{"rendered":"<div>\n<p><img decoding=\"async\" src=\"http:\/\/ft365.org\/wp-content\/uploads\/2025\/06\/localimages\/ea721ff9-8ba4-4d88-b386-57e9e1606077.jpg?width=64&#038;height=64&#038;mode=crop&#038;scale=both&#038;format=webp\" alt=\"Photo of Phil Muncaster\" loading=\"lazy\"><\/p>\n<\/div>\n<div id=\"cphContent_pnlArticleBody\" data-layout-id=\"2\" data-edit-folder-name=\"text\" data-index=\"0\">\n<p>Google has discovered a new breed of AI-powered malware that uses large language models (LLMs) during execution to dynamically generate malicious scripts and evade detection.<\/p>\n<p>A Google Threat Intelligence Group (GTIG) report yesterday highlighted two families it said use \u201cjust-in-time AI\u201d in this way \u2013 PromptFlux and PromptSteal.<\/p>\n<p>\u201cThese tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware,\u201d the report explained.<\/p>\n<p>\u201cWhile still nascent, this represents a significant step toward more autonomous and adaptive malware.\u201d<\/p>\n<p><em>Read more on LLM abuse:\u00a0New \u201cLameHug\u201d Malware Deploys AI-Generated Commands<\/em><\/p>\n<p>PromptFlux is a dropper written in VBScript which \u201cregenerates\u201d by using the Google Gemini API. 
It prompts the LLM to rewrite its own source code on the fly, and then saves the obfuscated version to the Startup folder for persistence. The malware also tries to spread by copying itself to removable drives and mapped network shares, GTIG said.<\/p>\n<p>PromptSteal is a data miner written in Python that queries the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands to collect information and documents in specific folders\u00a0and send the data to a command-and-control (C2) server.<\/p>\n<p>GTIG said it had observed PromptSteal being used by Russian actor APT28 in Ukraine, while PromptFlux is still being developed.<\/p>\n<p>Among the other AI-enabled malware families the report highlighted are:<\/p>\n<ul>\n<li><strong>FruitShell<\/strong>: a reverse shell written in PowerShell which establishes\u00a0remote C2 connections and enables the execution of commands on a targeted system. It uses hard-coded prompts to evade detection by LLM-based security tools<\/li>\n<li><strong>PromptLock<\/strong>: ransomware written in Go which uses an LLM to dynamically generate malicious Lua scripts at runtime\u00a0for reconnaissance, data encryption and exfiltration<\/li>\n<li><strong>QuietVault<\/strong>: a JavaScript credential stealer which uses an AI prompt and locally installed AI CLI tools to search for and exfiltrate secrets<\/li>\n<\/ul>\n<h2>The AI Malware Market Matures<\/h2>\n<p>Google warned that the cybercrime market for AI tools is developing at a rapid pace. It cited \u201cmultiple offerings of multifunctional tools designed to support phishing, malware development, and vulnerability research,\u201d which could democratize cybercrime further.<\/p>\n<p>It also noted continued efforts to bypass guardrails in Gemini by using \u201csocial engineering-like pretexts\u201d in prompts. 
Additionally, GTIG warned that nation-state actors are misusing the chatbot to assist in all stages of their attacks \u2013 from reconnaissance and creation of phishing lures to C2\u00a0development and data exfiltration.<\/p>\n<p>Cory Michal, CSO at AppOmni, said the GTIG report echoes what his firm is seeing in the SaaS threat landscape.<\/p>\n<p>\u201cAI-enabled malware mutates its code, making traditional signature-based detection ineffective. Defenders need behavioral EDR that focuses on what malware\u00a0<em>does<\/em>, not what it\u00a0looks like,\u201d he added.<\/p>\n<p>\u201cDetection should key in on unusual process creation, scripting activity\u00a0or unexpected outbound traffic, especially to AI APIs like Gemini, Hugging Face or OpenAI. By correlating behavioral signals across endpoint, SaaS\u00a0and identity telemetry, organizations can spot when attackers are abusing AI and stop them before data is exfiltrated.\u201d<\/p>\n<p>Max Gannon, cyber intelligence team manager at Cofense, argued that the use of AI at every step of the kill chain should be a concern to network defenders.<\/p>\n<p>\u201cThis is a significant change from last year when AI was used minimally with a focus on phishing emails and kits,\u201d he added.<\/p>\n<p>\u201cI expect that in the near future enterprising threat actors will be selling all-inclusive AI-based kits that generate every part of the attack chain and require zero knowledge \u2013 making the only barrier to entry the subscription fee.\u201d<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Google has discovered a new breed of AI-powered malware that uses large language models (LLMs) during execution to dynamically generate malicious scripts and evade detection. A Google Threat Intelligence Group (GTIG) report yesterday highlighted two families it said use \u201cjust-in-time AI\u201d in this way \u2013 PromptFlux and PromptSteal. 
\u201cThese tools dynamically generate malicious scripts, obfuscate<\/p>\n","protected":false},"author":2,"featured_media":3438,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-3437","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"featured_image_urls":{"full":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078.jpg",300,300,false],"thumbnail":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078-150x150.jpg",150,150,true],"medium":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078.jpg",300,300,false],"medium_large":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078.jpg",300,300,false],"large":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078.jpg",300,300,false],"1536x1536":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078.jpg",300,300,false],"2048x2048":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078.jpg",300,300,false],"morenews-featured":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078.jpg",300,300,false],"morenews-large":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078.jpg",300,300,false],"morenews-medium":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078.jpg",300,300,false],"crawlomatic_preview_image":["http:\/\/ft365.org\/wp-content\/uploads\/2025\/11\/3437-0f87fe8f-3084-4bfe-9bd7-ac41f5dd8078-146x146.jpg",146,146,true]},"author_info":{"display_name":"henry","author_link":"http:\/\/ft365.org\/index.php\/author\/henry\/"},"category_info":"<a 
href=\"http:\/\/ft365.org\/index.php\/category\/uncategorized\/\" rel=\"category tag\">Uncategorized<\/a>","tag_info":"Uncategorized","comment_count":"0","_links":{"self":[{"href":"http:\/\/ft365.org\/index.php\/wp-json\/wp\/v2\/posts\/3437","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/ft365.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/ft365.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/ft365.org\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/ft365.org\/index.php\/wp-json\/wp\/v2\/comments?post=3437"}],"version-history":[{"count":0,"href":"http:\/\/ft365.org\/index.php\/wp-json\/wp\/v2\/posts\/3437\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/ft365.org\/index.php\/wp-json\/wp\/v2\/media\/3438"}],"wp:attachment":[{"href":"http:\/\/ft365.org\/index.php\/wp-json\/wp\/v2\/media?parent=3437"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/ft365.org\/index.php\/wp-json\/wp\/v2\/categories?post=3437"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/ft365.org\/index.php\/wp-json\/wp\/v2\/tags?post=3437"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}