OpenAI launched workspace agents inside ChatGPT, giving teams the ability to build shared AI assistants that handle multi-step tasks across tools like Slack, code editors, and internal dashboards.
The agents run on Codex, OpenAI's code-generation engine. Teams can configure them to generate reports, write and review code, draft communications, and manage recurring workflows — all within a shared workspace visible to collaborators.
How workspace agents differ from custom GPTs
Custom GPTs let individual users build single-purpose chatbots. Workspace agents operate at the team level. They persist across sessions, integrate with external services, and execute multi-turn tasks that span tools rather than just answering questions.
The feature is available now in research preview for select ChatGPT Enterprise and Team plans. OpenAI has not disclosed pricing beyond existing plan tiers.
Workspace agents sit in a competitive lane already occupied by Google, which this same week shipped Workspace Intelligence — a semantic layer connecting Gmail, Docs, Sheets, Drive, and Chat to Gemini-powered agents. Google's update added natural-language spreadsheet building in Sheets and AI-driven features across its productivity suite.
The timing is not coincidental. Both companies are racing to become the default AI layer inside enterprise productivity tools, where daily active usage translates directly into retention and upsell.
What else moved this week
Several other developments landed alongside the workspace announcements:
- Alibaba's Qwen team released Qwen3.6-27B, a dense model it claims surpasses the previous flagship, Qwen3.5-397B-A17B, on all major coding benchmarks despite a far smaller footprint of 55.6 GB on Hugging Face
- Perplexity published research on a two-stage pipeline for search-augmented language models, combining supervised fine-tuning with reinforcement learning to improve factual accuracy while reducing cost per query
- Google separately launched the Gemini Enterprise Agent Platform, a full-stack environment for building, governing, and scaling production agents with integrated DevOps and security tooling
- Jerry Tworek, a former OpenAI researcher, founded Core Automation, a lab aiming to automate its own AI research pipeline before developing architectures designed to scale beyond transformers
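The size claim in the Qwen bullet checks out with simple arithmetic: a 27B-parameter dense model stored at 2 bytes per weight (bf16) comes to roughly 54 GB, close to the 55.6 GB repository size, with the gap plausibly covered by tokenizer files and any higher-precision tensors. A minimal sketch of that back-of-envelope calculation (the precision assumption is ours, not stated in the release):

```python
# Back-of-envelope sketch: on-disk size of a dense model by precision.
# Assumes the repository is dominated by parameter tensors; real repos
# also ship tokenizer/config files and may mix precisions.

def approx_size_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate size in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

# 27B parameters in bf16 (2 bytes each):
print(approx_size_gb(27e9, 2))  # 54.0 -- close to the reported 55.6 GB
```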
Microsoft also confirmed it will move all GitHub Copilot subscribers to token-based billing starting in June, a shift that could reshape how developers budget for AI coding tools.
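The budgeting shift is easy to see in miniature: under token-based billing, spend scales with usage rather than seats. A hedged sketch of the arithmetic follows; the rates are illustrative placeholders, not Copilot's actual prices, which have not been detailed here:

```python
# Hypothetical token-billing budget sketch. The per-million-token rates
# below are placeholders for illustration, NOT GitHub Copilot's prices.

INPUT_RATE = 0.50   # assumed $ per million input tokens
OUTPUT_RATE = 1.50  # assumed $ per million output tokens

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate monthly spend from token counts and per-million rates."""
    return (input_tokens / 1e6) * INPUT_RATE + (output_tokens / 1e6) * OUTPUT_RATE

# A developer sending ~20M input and ~5M output tokens in a month:
print(monthly_cost(20_000_000, 5_000_000))  # 17.5
```

The point of the exercise: a heavy user's bill now depends on prompt and completion volume, so teams will need to track token counts the way they track cloud compute.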
Anthropic published guidance on building production agents with MCP, the Model Context Protocol, arguing it becomes the critical integration layer as agents move from local prototypes to cloud-hosted systems.
The week's pattern is clear: the major labs are no longer just shipping models. They are shipping the scaffolding — agent platforms, workspace integrations, billing models — that locks AI into daily enterprise workflows. The next competitive front is not benchmark performance but how deeply these platforms embed into the software teams already use.