Claude AI Privacy Guide

What Anthropic Does With Your Claude Conversations

Claude is widely regarded as one of the most thoughtful AI assistants — but Anthropic's data policies still leave consumer users exposed. Here is what you need to know.

📋

Conversation Storage

Claude.ai stores your full conversation history to power the chat interface. Even if you delete a conversation from your view, Anthropic retains server-side copies for a limited period before the deletion propagates through its backend systems.

🏗️

Consumer vs. Enterprise Gap

Conversations on consumer Claude.ai accounts may be used for model improvement. Enterprise and API customers can sign a Data Processing Addendum (DPA) with a zero-training commitment, a protection consumer accounts do not receive by default.

📁

Projects Context Risk

Claude Projects let you upload documents as persistent context. Any sensitive data in those documents is sent to Anthropic with every message in that project, potentially hundreds of times.

👁️

Safety Review Access

Anthropic employees may read conversations flagged by automated safety systems. This is standard practice for responsible AI — but means your private messages are not guaranteed to stay private.

Understanding Anthropic's Data Handling Philosophy

Anthropic has built a reputation for safety-first AI development, and its privacy practices are generally more transparent than those of some competitors. However, Claude.ai consumer accounts operate under a standard data policy that permits Anthropic to review conversations for safety and to potentially use them for model improvement unless you explicitly opt out.

The critical distinction is between consumer Claude.ai and enterprise API access. Organizations that process sensitive data through Claude should use the API with a signed Data Processing Addendum. Without a DPA, the default consumer terms apply — and those terms give Anthropic significant latitude over how your data is used.

The Projects Feature and Persistent Context Risk

Claude's Projects feature introduced a new privacy surface that many users overlook. When you add documents to a project, those documents are sent to Anthropic alongside every message you send within that project. If you have uploaded client agreements, internal memos, or documents containing names, addresses, or financial figures, that data is retransmitted repeatedly throughout the life of the project.

PromptGnome scans the user-composed portion of each outbound message. For project context documents, the best protection is to sanitize them before upload — remove or redact any PII before adding documents to a Claude Project.
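Sanitizing a document before upload can be as simple as a redaction pass over its text. The sketch below is illustrative only: the patterns are simplified examples for emails and SSN-shaped strings, not PromptGnome's actual rule set, and the function name is hypothetical.

```javascript
// Illustrative pre-upload redaction pass. The patterns below are simplified
// examples, not PromptGnome's detection rules.
const REDACTIONS = [
  [/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[EMAIL]"], // email addresses
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],      // SSN-shaped numbers
];

// Apply every pattern in turn, returning text that is safer to add to a Project.
function redact(text) {
  return REDACTIONS.reduce((t, [re, mask]) => t.replace(re, mask), text);
}
```

Running the redacted output through a manual review afterward is still worthwhile, since regex patterns inevitably miss context-dependent identifiers such as names.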

What PromptGnome Detects in Claude Messages

  • Email addresses, phone numbers, and physical addresses
  • Social Security Numbers and government ID numbers
  • Credit card numbers with Luhn validation
  • API keys, tokens, and credentials embedded in messages
  • Dates of birth and other demographic identifiers
  • Full names and organization names (Pro tier, via named-entity recognition)
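The Luhn check mentioned above is a standard checksum that filters out random 13-19 digit strings before they are flagged as card numbers. A minimal sketch (the function name is illustrative, not PromptGnome's API):

```javascript
// Luhn checksum: double every second digit from the right, subtract 9 from
// doubled values above 9, and require the total to be divisible by 10.
function luhnValid(number) {
  const digits = number.replace(/\D/g, ""); // strip spaces and dashes
  if (digits.length < 13 || digits.length > 19) return false;
  let sum = 0;
  let double = false;
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = digits.charCodeAt(i) - 48;
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}
```

A checksum pass like this keeps false positives low: most 16-digit strings in ordinary text (order numbers, timestamps) fail the check and are never flagged.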

How PromptGnome's Claude Adapter Works

Claude's API uses dynamic URL paths that include the organization ID and conversation ID. PromptGnome's adapter matches on the URL pattern containing /completion rather than an exact URL, so it works regardless of which account or conversation you are in. The prompt field is extracted from the POST body and scanned before the request is allowed to proceed.
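Under the assumptions in that description, the match-and-extract step might look like the sketch below. The path shape and the "prompt" field name are assumptions taken from the text above, not a documented contract of Claude's API or PromptGnome's implementation.

```javascript
// Hypothetical sketch: match Claude-style dynamic completion URLs and pull
// the prompt out of the JSON POST body. The "/completion" suffix and the
// "prompt" field name are assumptions from the description, not a spec.
const COMPLETION_RE = /\/completion$/;

function extractPrompt(url, postBody) {
  if (!COMPLETION_RE.test(new URL(url).pathname)) return null; // not a completion request
  try {
    const body = JSON.parse(postBody);
    return typeof body.prompt === "string" ? body.prompt : null;
  } catch {
    return null; // non-JSON body: nothing to scan
  }
}
```

A real adapter would run this inside the request-interception hook and hand the extracted string to the local PII scanner before allowing the request to proceed.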

Frequently Asked Questions

Common questions about Claude privacy and how PromptGnome helps.

Does Claude store my conversations?
Yes. Consumer Claude.ai accounts store conversation history by default to enable the chat interface. Anthropic may review conversations for safety and may use them to improve models, subject to its privacy policy. Claude for Enterprise and API customers have separate data handling agreements with stronger no-training commitments.

What is the difference between consumer Claude.ai and enterprise plans?
Claude.ai consumer accounts are subject to Anthropic's standard privacy policy, which permits data use for model improvement. Claude for Enterprise and API plans include a Data Processing Addendum with a zero-data-training commitment: Anthropic will not use your inputs or outputs to train models. If you share sensitive business information with Claude, you should be on an enterprise plan with a signed DPA.

Are Claude Projects a privacy risk?
Claude Projects store a persistent system prompt and document context that is prepended to every conversation in that project. If you have added documents containing PII or confidential information to a project, that data is sent to Anthropic with every message in that project. PromptGnome scans the user-composed portion of each message, but project context documents should be reviewed and sanitized before upload.

How does PromptGnome protect messages sent to Claude?
PromptGnome intercepts the POST request to Claude's completion endpoint before it is sent. It extracts the prompt field from the request body, scans for PII locally in under 10ms, and shows a warning overlay if sensitive data is found. The interception matches on URL patterns containing /completion so it works across Claude.ai's dynamic conversation IDs.

Does Anthropic share my data with third parties?
Anthropic's privacy policy permits sharing data with service providers that process data on its behalf, such as cloud infrastructure providers. Anthropic states it does not sell personal information. However, any data sent to Claude's servers is subject to Anthropic's security posture and any future policy changes. The only way to guarantee sensitive data does not leave your control is to not send it — which is what PromptGnome helps you achieve.

Protect Every Message You Send to Claude

PromptGnome detects sensitive information locally before your message leaves the browser. Free, instant, and requires no account.

Add to Chrome — Free