ChatGPT Privacy Guide

What ChatGPT Does With Your Private Data

Millions of people share sensitive personal and professional information with ChatGPT every day. Here is what OpenAI's policies actually say — and how to protect yourself.

Add to Chrome — Free
💾

Conversation Retention

ChatGPT stores every message you send by default. Deleting a conversation from your history does not remove it immediately: OpenAI keeps backups for up to 30 days, and longer where needed for safety and compliance.

🧠

Training Data Use

Free and Plus users' conversations may be used to train future models. You can opt out in Settings → Data controls, but training use is enabled by default for these account types.

🏢

The Samsung Leak

In 2023, Samsung engineers pasted proprietary source code and meeting transcripts into ChatGPT. That data potentially became training material. Samsung banned internal AI chatbot use afterward.

🔄

Rapid API Changes

OpenAI updates ChatGPT's internal API every 2–4 weeks without public notice. Privacy tools that rely on network interception can silently break, leaving your data unprotected.

The Real Privacy Risk When Using ChatGPT

ChatGPT has become the go-to tool for everything from drafting emails to debugging code. But in the rush to get answers quickly, users routinely paste in material they should never share with an external service — client names, medical histories, financial figures, API keys, internal HR documents.

OpenAI's privacy policy is clear: conversations may be reviewed by human trainers for safety purposes, may be used to improve model accuracy, and are retained on OpenAI's infrastructure subject to its security practices. OpenAI has also suffered security incidents, including a March 2023 breach that exposed conversation titles and, for some users, partial payment information.

The Samsung Incident: A Cautionary Tale

The most high-profile corporate ChatGPT data leak came from Samsung Semiconductor in spring 2023. Within weeks of lifting a ban on AI tool use, employees had pasted confidential semiconductor process details, internal testing data, and full meeting transcripts into ChatGPT sessions. Samsung's security team only discovered the leaks after the fact, with no ability to retrieve or delete the data from OpenAI's systems.

The lesson is not that ChatGPT is malicious — it is that any data you send becomes data you no longer fully control. PromptGnome's approach is to catch sensitive data before it leaves your browser, so you never have to rely on a third party's data governance promises.

What PromptGnome Detects in ChatGPT Messages

  • Email addresses and full names (free tier + Pro NER)
  • US Social Security Numbers and national ID numbers
  • Credit card numbers (validated with Luhn algorithm)
  • API keys, GitHub tokens, AWS credentials, Stripe keys
  • Street addresses, dates of birth, phone numbers
  • IBAN and financial account numbers

All detection happens locally in your browser in under 10ms. Nothing is sent to PromptGnome's servers. If PII is found, you see a warning overlay before the message is sent — giving you a chance to edit or auto-anonymize.
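As an illustration of this kind of local check, here is a minimal sketch of regex-based PII detection with a Luhn checksum used to filter out card-number false positives. The pattern list and function names are hypothetical assumptions for the example, not PromptGnome's actual implementation.

```javascript
// Sketch of local PII detection. PATTERNS is an illustrative subset;
// the real detector list and regexes are not public.
const PATTERNS = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/g,
  card: /\b(?:\d[ -]?){13,16}\b/g,
};

// Luhn checksum: double every second digit from the right,
// subtract 9 from doubles above 9, and require the sum to be
// divisible by 10.
function luhnValid(number) {
  const digits = number.replace(/\D/g, "");
  let sum = 0;
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return digits.length >= 13 && sum % 10 === 0;
}

function detectPII(text) {
  const findings = [];
  for (const [type, re] of Object.entries(PATTERNS)) {
    for (const match of text.matchAll(re)) {
      // Only flag card-like digit runs that pass the Luhn check.
      if (type === "card" && !luhnValid(match[0])) continue;
      findings.push({ type, value: match[0] });
    }
  }
  return findings;
}
```

Because everything is plain regex matching plus an integer checksum, a scan of a typical chat message completes well within the kind of latency budget described above.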

How PromptGnome Stays Current With ChatGPT API Changes

Because ChatGPT's internal API changes frequently, PromptGnome maintains a versioned adapter that is updated with each breaking change. The interceptor matches on URL patterns rather than exact endpoints, and uses defensive parsing so that if a payload structure changes, detection fails open — your message goes through rather than being silently blocked.
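A fail-open interceptor of this kind can be sketched as follows. The endpoint pattern, payload shape, and the `scanForPII` callback are illustrative assumptions for the example; PromptGnome's actual matching rules and adapter internals are not shown here.

```javascript
// Sketch of a fail-open fetch interceptor. The URL pattern is loose
// on purpose: it matches on a path fragment rather than an exact
// endpoint, so minor path changes do not disable protection.
const CHAT_ENDPOINT = /\/backend-api\/.*conversation/;

function extractUserText(body) {
  // Defensive parsing: the payload shape may change between ChatGPT
  // releases, so any unexpected structure throws and is caught below.
  const data = JSON.parse(body);
  return data.messages
    .map((m) => m.content.parts.join("\n"))
    .join("\n");
}

function installInterceptor(scanForPII) {
  const originalFetch = globalThis.fetch;
  globalThis.fetch = function (url, options = {}) {
    if (CHAT_ENDPOINT.test(String(url)) && typeof options.body === "string") {
      try {
        const text = extractUserText(options.body);
        const findings = scanForPII(text);
        if (findings.length > 0) {
          // In the real extension this would show a warning overlay;
          // here we simply reject so the caller can handle it.
          return Promise.reject(
            new Error("PII detected: " + findings.join(", "))
          );
        }
      } catch (err) {
        // Fail open: if the payload shape is unrecognized, let the
        // request through rather than silently blocking the user.
      }
    }
    return originalFetch.call(this, url, options);
  };
}
```

The key design choice is in the `catch` block: an unparseable payload means the adapter is out of date, and the safe failure mode for usability is to pass the request through unmodified rather than swallow it.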

Frequently Asked Questions

Common questions about ChatGPT privacy and how PromptGnome helps.

Does ChatGPT store my conversations and use them for training?

Yes. By default, OpenAI stores your ChatGPT conversations and may use them to improve its models. You can opt out of training data use in Settings → Data Controls, but conversation history is still retained unless you delete it. Business and Team plans have stronger data-use controls, but free and Plus users are subject to OpenAI's standard data retention policy.

What happened in the Samsung ChatGPT leak?

In early 2023, Samsung engineers accidentally pasted proprietary source code and internal meeting notes into ChatGPT. Because OpenAI used conversation data for training at the time, this sensitive IP potentially became part of the model's training corpus. Samsung subsequently banned internal ChatGPT use. This incident illustrates why sensitive data should never be pasted into AI chatbots without first removing identifying information.

How does PromptGnome keep up with ChatGPT's API changes?

OpenAI updates ChatGPT's internal API roughly every 2–4 weeks. Endpoint paths, request payload shapes, and SSE stream formats all change without public notice. PromptGnome's ChatGPT adapter is maintained with each breaking change so protection remains active even after OpenAI updates.

Does PromptGnome catch sensitive data before it reaches OpenAI?

Yes. PromptGnome intercepts your message in the browser before the network request fires. It scans for emails, SSNs, credit card numbers, API keys, and 14+ other PII types in under 10ms. If sensitive data is detected, you see a warning and can edit your message or auto-anonymize before anything reaches OpenAI.

Does PromptGnome protect file and image uploads?

No. PromptGnome only intercepts the text message endpoint. File and image uploads use a separate multipart endpoint and are not intercepted. If you are uploading documents with sensitive data, consider redacting them before upload.

Stop ChatGPT From Seeing Your Private Data

PromptGnome detects sensitive information locally before your message is sent. Free, instant, and requires no account.

Add to Chrome — Free