Home / AI World / Antigravity AI: Everything You Need to Know About Google’s New Tool

Antigravity AI: Everything You Need to Know About Google’s New Tool

Google Antigravity Changelog

Antigravity AI is an AI-powered integrated development environment (IDE) developed by Google that introduces an “agent-first” paradigm to software development. Announced on November 18, 2025, alongside the release of Gemini 3, the platform allows developers to move beyond simple code assistance and instead delegate complex tasks to autonomous AI agents. By leveraging models like Gemini 3.1 Pro and Gemini 3 Flash, Antigravity AI can plan, edit, run, and verify code across editors, terminals, and web browsers.

Table of Contents

What Exactly is Antigravity AI?

At its core, Antigravity AI is a heavily modified fork of Visual Studio Code. Unlike traditional AI tools that act as mere autocomplete assistants, Antigravity AI is designed as an agentic platform. This means the AI does not just suggest the next line of code; it acts as a collaborator that can execute entire workflows.

The platform is powered by Google’s most advanced models, specifically the Gemini 3 family. This allows the agents to possess a high level of reasoning, enabling them to handle multi-step programming tasks. The tool is available for Microsoft Windows, macOS, and Linux, offering a flexible environment for developers working in various operating systems.

Key Features and Interfaces

To accommodate different developer needs, Antigravity AI provides two distinct ways to interact with its autonomous agents. This dual-interface approach allows for both granular control and high-level automation.

Editor View: Hands-on Coding

The Editor View functions much like a traditional AI-powered IDE. It provides developers with inline commands and immediate assistance within their coding workspace. This view is ideal for developers who want to maintain direct control over their code while using AI to speed up specific segments of their work or to debug localized issues.

Manager Surface: Autonomous AI Agents

The Manager Surface is where the “agentic” power of the platform truly shines. In this interface, users can deploy autonomous agents across multiple workspaces. These agents are capable of generating entire features, running commands in the terminal, and even testing their own work in an integrated browser. This shifts the developer’s role from writing every line of code to managing a fleet of AI agents that execute complex project requirements.

The Role of Verifiable Artifacts

To build trust between the human developer and the autonomous agent, Antigravity AI utilizes a system of “Artifacts.” Rather than just outputting raw code or tool calls, agents generate verifiable deliverables. These include task lists, implementation plans, screenshots, and even browser recordings. These artifacts allow developers to review the agent’s logic and progress before committing to major changes.

Critical Security Risks and Vulnerabilities

Despite its technological leaps, Antigravity AI has faced significant scrutiny. Within just 24 hours of its launch, security researchers identified several critical flaws that could expose developers to serious threats. Reports from Forbes and other outlets highlighted that the platform was effectively hacked shortly after its debut.

The Early Hack and Persistent Backdoors

Security researcher Aaron Portnoy demonstrated a terrifying exploit where an agent could be coerced into replacing a global MCP (Model Context Protocol) configuration file with a malicious version. Because this file is part of the project environment that runs every time Antigravity AI launches, the backdoor could survive even after a user closed their projects or attempted to reinstall the software. This type of persistence is a major concern for enterprise security teams.

Prompt Injection and Data Risks

Another major concern involves prompt injection. Since agents often process untrusted data—such as external code snippets or markdown files—they are susceptible to malicious instructions embedded within that data. A sophisticated attacker could use prompt injection to trick an agent into leaking sensitive files or executing harmful commands on the host machine. Firms like Prompt Armor have also raised concerns regarding potential data exfiltration through these methods.

The “Trusted Folder” Dilemma

The platform’s design creates a difficult security trade-off. To unlock the full capabilities of the AI agents, users are prompted to mark certain folders as “trusted.” If a folder is marked as untrusted, agent functionality is significantly limited. Security experts warn that the pressure to use these powerful features might lead developers to mark malicious workspaces as trusted, inadvertently giving attackers persistent access to their systems.

Current Stability and Quota Issues

Beyond security, the developer community has reported various functional issues. Many users have taken to official forums to report bugs that disrupt their daily workflows.

Quota and Subscription Anomalies

Users on the Google Artificial Intelligence Pro subscription have reported significant issues with rate limits. Instead of the promised 5-hour refresh cycle, some users have experienced “99-hour weekly resets” or found their models locked for up to 74 hours. These quota bugs make it difficult for professional developers to rely on the tool for continuous work.

System Stability and Crashes

Stability has also been a recurring theme in recent updates. Developers have reported various crashes, including the Antigravity Language Server failing with “ECONNREFUSED” errors due to backend routing issues. Additionally, some users have experienced crashes (SIGILL) on CPUs that lack AES-NI support, suggesting that recent updates may have introduced hardware-specific regressions.

How Google is Improving Security

Google has been proactive in responding to these early challenges. By inviting external researchers to report bugs and maintaining a public updates page, the company is working to patch vulnerabilities as they are discovered. You can follow the latest fixes in the Google Antigravity Changelog.

Secure Mode and Human Review

One of the most significant additions to the platform is the “Secure Mode” option. When enabled, this mode enforces strict settings to prevent agents from autonomously running targeted exploits. Most importantly, it requires human review for all agent actions, ensuring that no command is executed without explicit user permission.

Improved Sandboxing and Permissions

Google has also introduced better sandboxing, particularly for macOS users. This allows agents to execute terminal commands within a controlled environment, preventing them from accidentally or maliciously damaging files outside of the designated workspace. These improvements in agent permissions and sandboxing are critical steps toward making agentic AI safe for professional use.

Frequently Asked Questions

Is Antigravity AI free to use?

Antigravity AI was released in public preview and is available free of charge for Windows, macOS, and Linux, though it includes specific rate limits for the Gemini 3.1 Pro models.

What models power Antigravity AI?

The platform primarily utilizes Google’s Gemini 3 family, including Gemini 3.1 Pro for complex reasoning and Gemini 3 Flash for faster, more streamlined tasks.

How can I protect my system when using Antigravity AI?

To stay safe, it is highly recommended to enable “Secure Mode,” use the sandboxing features, and be extremely cautious about marking folders as “trusted,” especially if they contain code from unknown sources.

Can Antigravity AI replace my current IDE?

While it is a powerful tool that can automate many tasks, it is currently designed as an agentic companion. Many developers use it alongside traditional tools to balance autonomy with manual control.

Disclaimer: As with all emerging AI technologies, users should exercise caution and maintain strict security protocols when allowing autonomous agents access to their local environments.

Leave a Reply

Your email address will not be published. Required fields are marked *