AI coding assistants have become indispensable for most developers. They write boilerplate faster, explain unfamiliar APIs, and help debug tricky problems. But they've also created a new and rapidly growing attack surface: the accidental disclosure of credentials in the prompts used to get that help.

The Numbers Every Developer Should Know

  • 29M+ secrets leaked on public GitHub in 2025 (GitGuardian State of Secrets Sprawl)
  • AI-assisted commits leak secrets at a measurably higher rate than non-AI commits
  • 58% of leaked secrets on GitHub remain valid after 5 days, more than enough time for exploitation

GitGuardian's 2025 State of Secrets Sprawl report found more than 29 million secrets leaked in public GitHub repositories — and the correlation between AI-assisted commits and leaked secrets was one of its most striking findings. Developers using AI coding tools ship code faster, which compresses review time. More critically, when asking AI for help with code, developers often include real credentials for context — and sometimes the AI's suggested code re-includes those credentials in its output, which gets committed.

The risk is bidirectional: credentials pasted into AI chatbot prompts are sent to the provider's servers. Credentials that then appear in AI-generated code may be committed to your repository. You have two distinct leak vectors to protect against.

The Secrets Most Commonly Leaked to AI Tools

Not all credentials are equally likely to appear in AI prompts. These are the types most often pasted into AI tools, all of which PromptGnome's detection engine covers:

AWS Access Key
AWS Secret Key
GitHub Token
Stripe API Key
OpenAI API Key
Anthropic API Key
Database URL
Generic API Keys

Most of these are detected with dedicated, vendor-specific patterns; the rest are caught by the generic API key pattern.

Developer Best Practices

1. Use .env Files and Never Commit Them

This should be standard practice by now, but it bears repeating: credentials belong in environment files, not in source code. Your .gitignore must include .env, .env.local, .env.production, and any variant you use.

.gitignore (add these)
# Environment files — never commit
.env
.env.*
!.env.example
# Secret manager exports
*.pem
*.key
credentials.json

Maintain a .env.example file with placeholder values that IS committed, showing teammates which environment variables are needed without exposing real values.
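In application code, load the real values from the environment at startup rather than hard-coding them. Below is a minimal Node.js sketch using the dotenv package; the OPENAI_API_KEY variable name is just an illustrative choice.

Load .env at startup (Node.js sketch)
// Load variables from .env into process.env before anything else reads config
require('dotenv').config();

// Read the key from the environment rather than embedding it in source
const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  throw new Error('OPENAI_API_KEY is not set; copy .env.example to .env and fill in real values');
}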

2. Use a Secrets Manager

For production workloads, credentials should live in a secrets manager — AWS Secrets Manager, HashiCorp Vault, 1Password Secrets Automation, or Doppler — and be injected at runtime. This means the credential never exists in a file on disk, which dramatically reduces the surface area for leaks.

For local development, tools like Doppler and Infisical provide CLI wrappers that inject environment variables without writing them to disk: doppler run -- node server.js.
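To make the runtime-injection idea concrete, here is a rough Node.js sketch using AWS SDK v3. The secret name prod/db-url and the region are placeholders, not real configuration.

Fetch a secret at runtime (AWS Secrets Manager, Node.js sketch)
const { SecretsManagerClient, GetSecretValueCommand } = require('@aws-sdk/client-secrets-manager');

async function getDatabaseUrl() {
  // Region is a placeholder; in practice it usually comes from the environment
  const client = new SecretsManagerClient({ region: 'us-east-1' });
  const result = await client.send(new GetSecretValueCommand({ SecretId: 'prod/db-url' }));
  return result.SecretString; // never written to disk or committed to source control
}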

3. Sanitise Code Before Pasting into AI Tools

When asking AI for debugging help, replace real values with clearly fake placeholders before pasting:

Before AI review — BAD
const openai = new OpenAI({ apiKey: 'sk-proj-AbC123realKeyHere...' });
const s3 = new S3({ accessKeyId: 'AKIAIOSFODNN7REALKEY' });
Before AI review — GOOD
const openai = new OpenAI({ apiKey: 'sk-PLACEHOLDER' });
const s3 = new S3({ accessKeyId: process.env.AWS_ACCESS_KEY_ID });

The AI will still understand your code perfectly. The function call signature, error handling, and logic are all the same regardless of the credential value.
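If you paste snippets often, a small local redaction helper can turn this into a habit. The sketch below is a convenience script, not PromptGnome; its patterns cover only a few common prefixes and are deliberately rough.

Quick local redaction helper (illustrative sketch)
// Replace a handful of common credential formats with obvious placeholders
const REDACTIONS = [
  [/AKIA[0-9A-Z]{16}/g, 'AKIA_PLACEHOLDER'],           // AWS access key IDs
  [/sk-[A-Za-z0-9_-]{20,}/g, 'sk-PLACEHOLDER'],         // OpenAI- and Anthropic-style secret keys
  [/gh[pousr]_[A-Za-z0-9]{36,}/g, 'ghp_PLACEHOLDER'],   // GitHub token prefixes
  [/sk_live_[A-Za-z0-9]{24,}/g, 'sk_live_PLACEHOLDER']  // Stripe live secret keys
];

function redact(snippet) {
  return REDACTIONS.reduce((text, [pattern, placeholder]) => text.replace(pattern, placeholder), snippet);
}

// Example: the inline key is replaced before the snippet ever reaches a chatbot
console.log(redact("const s3 = new S3({ accessKeyId: 'AKIAIOSFODNN7EXAMPLE' });"));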

4. Rotate Credentials Regularly

Keys that have a limited lifespan reduce the damage window if they are leaked. Use short-lived credentials wherever possible: AWS IAM roles with temporary credentials via STS, GitHub fine-grained tokens with expiry dates, and API keys with rotation policies. If a key is leaked and expires in 24 hours, the exploitation window is far smaller than for a long-lived key.
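With AWS STS, for instance, a workload can assume a role and receive credentials that expire automatically. The sketch below uses AWS SDK v3; the role ARN and session name are hypothetical placeholders.

Short-lived credentials via STS AssumeRole (Node.js sketch)
const { STSClient, AssumeRoleCommand } = require('@aws-sdk/client-sts');

async function getTemporaryCredentials() {
  const sts = new STSClient({ region: 'us-east-1' });
  const { Credentials } = await sts.send(new AssumeRoleCommand({
    RoleArn: 'arn:aws:iam::123456789012:role/deploy-role', // placeholder ARN
    RoleSessionName: 'short-lived-deploy',
    DurationSeconds: 3600 // credentials expire after one hour
  }));
  return Credentials; // AccessKeyId, SecretAccessKey, SessionToken, Expiration
}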

Pre-Commit Hooks vs. Real-Time Detection

Two categories of tools address the credentials-in-code problem, but they operate at different points in the development workflow and protect against different risks.

Dimension | Pre-Commit Hooks (e.g. Gitleaks) | Real-Time Browser Detection (e.g. PromptGnome)
When it catches leaks | Before git commit | Before the network request fires
What it protects | Your git history and repository | Your AI chatbot sessions
Protects against AI prompt leaks | No (a different vector) | Yes (exactly this use case)
Protects git history | Yes (primary purpose) | No (different layer)
Works with all AI providers | N/A (only covers git) | Yes (all major AI chatbots)
Recommendation | Install for all projects | Install for all developers

The answer is not either/or — you need both. Pre-commit hooks protect your codebase; real-time browser detection protects your AI sessions. They address different but complementary risks.

Recommended Pre-Commit Tools

  • Gitleaks: Open-source, fast, supports 150+ secret types, CI/CD ready. gitleaks protect --staged runs before every commit.
  • detect-secrets: Yelp's Python-based tool, supports baseline files to manage false positives.
  • git-secrets: AWS-maintained, focused on AWS credential patterns.
  • truffleHog: Deep git history scanning for leaked secrets — useful for auditing existing repos.
Install Gitleaks pre-commit hook (via pre-commit framework)
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks
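Once the config file is in place, run pre-commit install in each clone so the Gitleaks hook runs automatically before every commit.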

What PromptGnome Detects

PromptGnome's regex engine includes patterns for the most common credentials a developer might accidentally paste:

  • AWS Access Keys: Pattern AKIA[0-9A-Z]{16} — the distinct prefix makes these highly reliable to detect.
  • AWS Secret Keys: 40-character base64 strings in credential contexts.
  • GitHub Tokens: ghp_, github_pat_, gho_, ghu_ prefixes — GitHub's prefix-based format makes detection precise with very low false positive rate.
  • Stripe Keys: sk_live_, sk_test_, pk_live_, pk_test_ prefixes.
  • OpenAI / Anthropic keys: sk-proj-, sk-ant- prefixes.
  • Generic API keys: Key-value patterns like API_KEY=..., SECRET_TOKEN=... in environment variable format.

Keeping false positives low: PromptGnome only flags credentials that score above a 0.7 confidence threshold. Structured key formats (those with vendor-specific prefixes) have near-zero false positive rates because the prefixes are unique to real credentials; generic patterns use context gates to reduce false alerts.
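To make the prefix idea concrete, simplified versions of such patterns might look like the sketch below. These are rough approximations for illustration, not PromptGnome's actual rule set.

Prefix-based detection patterns (illustrative sketch)
const PATTERNS = {
  awsAccessKey: /AKIA[0-9A-Z]{16}/,                          // distinct AWS prefix
  githubToken: /\b(ghp|gho|ghu|github_pat)_[A-Za-z0-9_]{20,}\b/,
  stripeKey: /\b[sp]k_(live|test)_[A-Za-z0-9]{16,}\b/,
  openaiKey: /\bsk-proj-[A-Za-z0-9_-]{20,}\b/,
  anthropicKey: /\bsk-ant-[A-Za-z0-9_-]{20,}\b/,
  genericKey: /\b(API_KEY|SECRET_TOKEN)\s*=\s*\S{12,}/        // env-style key=value
};

// Returns true if any pattern matches the text about to be sent
function looksLikeSecret(text) {
  return Object.values(PATTERNS).some((pattern) => pattern.test(text));
}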

Incident Response: What to Do After a Leak

If you realise you've pasted a real credential into an AI chatbot:

  1. Rotate immediately. Go to the provider's dashboard and revoke the exposed key. Generate a new one. Treat the old key as permanently compromised.
  2. Check access logs. Review the usage logs for the compromised key (AWS CloudTrail, the GitHub audit log, the Stripe event log) and look for any activity you didn't initiate; a sketch of the AWS case follows this list.
  3. Update all consumers. Update every system using the revoked key with the new credential. Missing one consumer causes an outage.
  4. Assess the window. How long was the key exposed? Was it used? File an incident report even if no misuse is found — it documents the exposure for compliance purposes.
  5. Review your process. Why did the real key end up in the prompt? Fix the root cause — better secrets management, better habits, or better tooling.
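For an exposed AWS key, the first two steps can be scripted. The sketch below assumes AWS SDK v3; the user name and key ID are placeholders, and deactivating the key still needs to be followed by creating and distributing a replacement.

Deactivate the key and pull its recent activity (Node.js sketch)
const { IAMClient, UpdateAccessKeyCommand } = require('@aws-sdk/client-iam');
const { CloudTrailClient, LookupEventsCommand } = require('@aws-sdk/client-cloudtrail');

async function respondToLeak(accessKeyId, userName) {
  // Step 1: deactivate the exposed key immediately
  const iam = new IAMClient({});
  await iam.send(new UpdateAccessKeyCommand({
    UserName: userName,
    AccessKeyId: accessKeyId,
    Status: 'Inactive'
  }));

  // Step 2: pull recent CloudTrail events recorded against that key
  const cloudtrail = new CloudTrailClient({});
  const { Events } = await cloudtrail.send(new LookupEventsCommand({
    LookupAttributes: [{ AttributeKey: 'AccessKeyId', AttributeValue: accessKeyId }]
  }));
  return Events; // review these for activity you didn't initiate
}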

Stop secrets from reaching AI chatbots

PromptGnome detects AWS keys, GitHub tokens, Stripe keys, and 15+ other credential types in your prompts — before they leave your browser.

Get PromptGnome Free

Frequently Asked Questions

Why do AI-assisted commits leak secrets at a higher rate?

Developers using AI tools ship code faster, compressing review time. Additionally, when developers paste code for AI help, they often include real credentials for context — and the AI's suggested code may re-include those credentials, which then gets committed.

What types of secrets are most commonly leaked to AI chatbots?

AWS access keys and secret keys, GitHub tokens, Stripe keys, OpenAI and Anthropic API keys, database connection strings, and generic API key patterns in environment variable format.

Are pre-commit hooks enough to prevent API key leaks?

No — pre-commit hooks protect your git history but don't protect against secrets being pasted into AI chatbot prompts, which is a separate vector. You need both: pre-commit hooks for your git workflow, and real-time detection for your AI tool usage.

What should I do if I accidentally paste a real API key into ChatGPT?

Rotate the key immediately — treat it as compromised. Review access logs for any activity you didn't initiate. Update all systems using the old key. File an incident report even if no misuse is detected.

How do I use environment variables safely when asking AI for coding help?

Replace real values with clearly fake placeholders before pasting: use sk-PLACEHOLDER instead of a real OpenAI key, and process.env.AWS_ACCESS_KEY_ID references instead of inline key strings. The AI provides equally useful assistance without seeing real credentials.