A recent report indicates that Google has resolved a security flaw within its Antigravity AI coding platform, which could have enabled attackers to execute commands on developers’ machines through a prompt injection attack. Cybersecurity experts from Pillar Security identified the vulnerability as stemming from Antigravity’s find_by_name file search tool, which transmitted user input directly to a command-line utility without proper validation. This oversight allowed malicious inputs to transform a simple file search into an execution of arbitrary commands, facilitating remote code execution.
Pillar Security highlighted that combining this flaw with Antigravity’s capability to create files opened the door to a full attack sequence: deploying a harmful script and triggering it via a seemingly legitimate search command without further user intervention once the prompt injection occurred. The AI-powered development environment, launched by Google in November of last year, is designed to assist programmers in writing, testing, and managing code through autonomous software agents. Pillar Security informed Google about this issue on January 7th, with Google acknowledging it immediately and resolving it by February 28th.
As of the time of reporting, Google had not yet responded to a comment request from Decrypt. Prompt injection attacks exploit hidden instructions in content that manipulate an AI system into performing unintended actions. Given that AI tools frequently process external files or text during normal operations, they may treat these embedded instructions as valid commands, allowing attackers to initiate actions on a user’s machine without direct access or further interaction.
The potential risks associated with prompt injection attacks for large language models gained attention last summer when OpenAI warned of vulnerabilities in its ChatGPT agent. In a blog post, OpenAI stated: ‘When you sign ChatGPT agent into websites or enable connectors, it will be able to access sensitive data from those sources, such as emails, files, or account information.’
To demonstrate the Antigravity flaw, researchers developed a test script within a project workspace and activated it using the search tool. Upon execution, the script launched the computer’s calculator application, proving that the search function could be manipulated for command execution.
The report emphasized that this vulnerability circumvented Antigravity’s Secure Mode, the platform’s highest security setting. These findings underscore the growing security challenges facing AI-powered development tools as they autonomously perform tasks. ‘The industry must move beyond simple sanitization-based controls to execution isolation,’ Pillar Security advised. ‘Every native tool parameter reaching a shell command represents a possible injection point. Auditing for such vulnerabilities is essential and necessary before deploying agentic features.’