Keeping your data out of Google's AI training pipeline requires navigating what privacy researchers call dark patterns — and giving up features most users expect to work.

An Ars Technica investigation published this week details how Google's Gemini AI, now embedded across Gmail, Drive, and other Workspace products, processes user data in ways that blur the line between helpful assistant and training-data vacuum. Google maintains it does not use Workspace content to train its foundational models. But Gemini's outputs — summaries of emails, snippets from documents — can feed back into training datasets, the company confirmed.

Opting out means losing your chat history

The only way to fully block AI training on your interactions is to disable a setting called Gemini Apps Activity, buried on a dedicated page at myactivity.google.com. Turning it off stops Google from training on your chats. It also permanently deletes your conversation history. Users face a binary choice: allow training, or lose every past and future chat log.

That trade-off fits the definition of a forced action, one of the most recognised dark patterns in interface design.

"It doesn't matter whether it's intentional or not," said Marie Potel of Fair Patterns, a startup building AI tools to detect predatory design. "What matters is whether the autonomy — the agency — of users is respected and whether the design goes against what users want to do."

Ars Technica checked multiple Google accounts and found no link to Gemini's privacy controls inside the main Account Activity Controls page, where toggles for Maps, Search, and Assistant all appear. A Google representative said the link should be there. It was not.

The scale of Google's AI bet

Google is expected to spend $185 billion on AI infrastructure in 2026 alone. That investment creates pressure to push Gemini adoption across its product base. Gmail now offers AI-drafted replies, thread summaries, and an AI-organised inbox — all switched on by default.

Disabling Gemini inside Gmail requires a separate, non-obvious process. Unlike other Gmail features that offer simple toggles in settings, Gemini has no granular controls. Users who want email without AI summaries must navigate a different path entirely, one Ars Technica found difficult to locate even when searching Google's own support documentation.

Potel noted that burying privacy settings behind excessive clicks is a pattern Google has repeated before. "Google has a history of hiding features, especially privacy settings, in a number of clicks that are absolutely made to deter people from using them," she said.

Google told Ars Technica it tries to "filter and reduce" personal information flowing into training data, but offered no detail on how that automated process works or how effective it is.

The European Union's Digital Services Act and upcoming AI Act both address dark patterns explicitly. Whether Google's current Gemini opt-out flow would survive regulatory scrutiny in the EU remains an open question — and one Brussels is likely already examining.