When Malware Learned to 'Think': PROMPTFLUX & PROMPTSTEAL in 2025
December 2, 2025 · 7 min read

How adaptive AI-driven malware is changing cybersecurity forever

AI Malware · Cybersecurity · LLMs · Threat Intelligence
Dhaval Shah
Founder & CEO, The Dev Guys

In late 2025, cybersecurity reached a turning point. Google's Threat Intelligence Group (GTIG) and Darktrace uncovered a new class of malware that doesn't just run: it reasons. PROMPTFLUX and PROMPTSTEAL are designed to query large language models such as Gemini during execution, rewriting their own code and behavior as defenders react.

What Makes This Different

Until now, malware relied on pre-written logic. These new threats use AI in real time, making decisions, rewriting themselves, and continuing attacks even as detection systems adapt.

PROMPTFLUX

PROMPTFLUX is a VBScript-based dropper that repeatedly asks an LLM to rewrite its own code. Every rotation produces fresh obfuscation — changing filenames, code structure, and signatures — allowing it to slip past antivirus tools designed to detect known patterns.

  • Uses Gemini API calls at runtime
  • Rewrites itself hourly
  • Persists via Windows Startup folder
  • Spreads to removable and network drives
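To see why signature matching struggles against hourly self-rewriting, here is a minimal illustrative sketch (not malware, just two functionally identical script snippets, invented for this example): any byte-level change, however trivial, produces an entirely different file hash, so a signature database keyed on hashes treats every rewrite as an unknown file.

```python
import hashlib

# Two functionally identical script variants, as an LLM-driven rewrite
# might produce: same behavior, different variable names and layout.
variant_a = b'Set shell = CreateObject("WScript.Shell")\nshell.Run "notepad.exe"\n'
variant_b = b'Dim s\nSet s = CreateObject("WScript.Shell")\ns.Run "notepad.exe"\n'

hash_a = hashlib.sha256(variant_a).hexdigest()
hash_b = hashlib.sha256(variant_b).hexdigest()

# A hash-based signature database sees two unrelated files.
print(hash_a == hash_b)  # False: each rewrite invalidates the old signature
```

This is the core of the evasion: the defender's signature is always one rewrite behind.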

PROMPTSTEAL

PROMPTSTEAL, often delivered as a PyInstaller package, has been associated with advanced persistent threat actors. Instead of embedding fixed commands, it generates system reconnaissance and exfiltration actions on-demand through an LLM capable of writing code.

  • Executes dynamically generated commands
  • Context-aware reconnaissance
  • Targeted file collection and exfiltration
  • Hard to classify through static analysis
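The last point can be made concrete with a toy comparison (illustrative only, with invented payload strings): a conventional payload embeds its commands as literal strings a scanner can match, while a PROMPTSTEAL-style payload ships only a natural-language prompt, so the commands never exist until runtime.

```python
# Illustrative only: contrast what a static string scanner can see.
conventional_payload = b"cmd /c whoami && systeminfo && dir C:\\Users"

# A PROMPTSTEAL-style payload carries no commands at all, only a prompt;
# the actual commands are generated by an LLM while the malware runs.
prompt_based_payload = (
    b"Write a Windows command that lists user accounts and "
    b"copies documents to a temporary folder."
)

SUSPICIOUS_STRINGS = [b"whoami", b"systeminfo", b"dir C:\\"]

def static_scan(blob):
    """Naive signature scan: flag if any known-bad string is embedded."""
    return any(s in blob for s in SUSPICIOUS_STRINGS)

print(static_scan(conventional_payload))   # True: commands are visible
print(static_scan(prompt_based_payload))   # False: nothing to match on
```

Static analysis has nothing to classify because the malicious logic is a capability of the model, not a property of the binary.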

The Dawn of Adaptive AI Malware

This isn't AI helping attackers write malware; this is malware using AI while it runs. The threat model shifts from static code to software that evolves with the environment it is attacking.

💡 Traditional signatures collapse when the malware no longer stays the same.

How Security Must Respond

Because these threats use AI as an execution layer, defenders must go beyond signature detection. Focus shifts to real-time behavior, credential protection, and limiting AI model misuse.

  • Monitor unexpected API calls to LLM providers
  • Encrypt and rotate developer tokens frequently
  • Use anomaly-based, behavior-driven detection
  • Adopt AI-powered security operations
  • Educate teams on social-engineering upgrades

A Dual-Use Technology Dilemma

LLMs enable innovation, but their flexibility also lowers the skill barrier for cybercrime. Polymorphic malware, once the work of elite adversaries, can now be assembled with a stolen API billing token and a few prompts.

Final Thought

PROMPTFLUX and PROMPTSTEAL are early warning signs. We are entering an era where software attacks will be guided by models that can reason, rewrite, and react. To keep up, cybersecurity must become as adaptive as its adversaries.

About the author

Dhaval Shah

Founder & CEO, The Dev Guys

Founder, architect, and the first call for products that can’t afford to fail.

Dhaval has spent 25+ years helping founders and teams translate ambiguous ideas into precise systems. He leads The Dev Guys with a bias toward clarity, deep thinking, and high-craft execution.
