A cybersecurity researcher has demonstrated a serious vulnerability in Orchids, a popular AI-powered “vibe coding” platform that allows users to build apps and games using simple text prompts.
In a controlled test, the flaw enabled the researcher to secretly inject malicious code into a BBC reporter’s project and ultimately take control of the reporter’s laptop, without any action from the victim.
Orchids, which claims around one million users and says it is used by major technology companies, allows AI agents to autonomously generate and execute code directly on users’ machines. While the platform is praised for making software creation accessible to non-technical users, the incident highlights how granting AI systems deep system access can introduce new and significant security risks.
Why It Matters: This is a glimpse of what happens when AI tools are given real autonomy inside real machines. Even if your company isn’t using this specific platform, someone on your team may already be experimenting with similar tools. And when those tools can write and run code on laptops with little oversight, the risk shifts from “did someone click a bad link?” to “what exactly is this AI doing on our devices?” The productivity upside is obvious, but so is the uncomfortable question of “who’s watching the AI?”
- A Demonstrated Zero-Click Attack: The exploit required no phishing email, malicious download, or user interaction. By inserting a single line of malicious code into an AI-generated project, the researcher was able to change the laptop’s wallpaper and create a file reading “Joe is hacked.” In a real-world scenario, such access could have enabled malware installation, data theft, or surveillance through cameras and microphones.
- Deep System Permissions as a Core Risk: Vibe coding platforms function by compiling and running AI-generated code locally. This grants them extensive access to file systems and operating system functions. If compromised, attackers could potentially access private data, financial information, internet history, or sensitive corporate assets stored on the device.
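To illustrate the underlying risk described above (this is a hedged sketch of the general mechanism, not the researcher’s actual exploit, which has not been published): code executed locally inherits the full permissions of the user who runs it, so even a few innocuous-looking lines buried in thousands of AI-generated ones can enumerate personal files or write to disk without any prompt. The file name below is a hypothetical stand-in for the researcher’s “Joe is hacked” proof of concept.

```python
from pathlib import Path

# Locally executed code runs with the invoking user's permissions:
# it can list the home directory with no sandbox and no consent prompt.
home = Path.home()
visible_entries = [p.name for p in home.iterdir()]

# Dropping a file on disk, the benign equivalent of the researcher's
# "Joe is hacked" demonstration, takes a single call. A real attacker
# could just as easily read documents or exfiltrate credentials.
marker = home / "proof_of_concept.txt"  # hypothetical file name
marker.write_text("demo only")
```

The point is not that these operations are exotic; it is that nothing separates AI-generated code from hand-written code once it executes, which is why a single injected line could compromise the whole machine.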
- Rapid Growth, Limited Response: Orchids was founded in 2025 and reportedly has fewer than 10 employees, yet claims a large user base and enterprise adoption. The researcher said he spent weeks trying to alert the company; when Orchids finally responded, it said it may have “missed” his warnings due to high inbound volume, a delay that raises concerns about vulnerability disclosure processes in fast-scaling AI startups.
- A New Category of AI-Era Vulnerabilities: According to researcher Etizaz Mohsin, the shift toward AI handling coding tasks autonomously creates security issues that did not previously exist. When users cannot easily review or understand thousands of lines of AI-generated code, malicious modifications can be hidden in plain sight.
- Implications for Agentic AI Tools: The case raises wider concerns about AI agents that can perform tasks directly on users’ devices, such as managing messages or calendars. Cybersecurity experts advise running such tools on dedicated machines and using disposable accounts for experimentation to limit potential exposure.
Go Deeper -> AI coding platform’s flaws allow BBC reporter to be hacked – BBC
Trusted insights for technology leaders
Our readers are CIOs, CTOs, and senior IT executives who rely on The National CIO Review for smart, curated takes on the trends shaping the enterprise, from GenAI to cybersecurity and beyond.
Subscribe to our 4x-a-week newsletter to keep up with the insights that matter.


