What Will You Read in This Blog?
- What the /prepare API call in ChatGPT actually does
- Why it sends your unsubmitted draft to OpenAI’s servers
- The real privacy risk: typing passwords, IP addresses, or API keys
- Does other AI (Claude, Gemini) do the same?
- Simple steps to protect yourself – without panic
- A final verdict: feature or flaw?
ChatGPT’s Hidden /prepare Call: Why Typing Passwords (Even If Deleted) Is Risky
You open ChatGPT. You start typing a question. Halfway through, you realise you accidentally pasted your email password into the box. You quickly hit backspace, erase it, and type your real question. You never pressed Enter.
Safe, right? Probably not.

Behind the scenes, every keystroke you make inside ChatGPT’s input box triggers a quiet API request to OpenAI’s servers – before you submit anything. This article explains what that request is, why it exists, and why you should never type sensitive information into an AI chat box (even if you delete it immediately).
What Is the /prepare API Call?
OpenAI’s ChatGPT web interface makes an internal API call – often named /prepare or /backend-anon/conversation/prepare – on almost every keystroke. You can see it yourself:
- Open ChatGPT in Chrome or Firefox.
- Press F12 to open Developer Tools → Network tab.
- Type a single letter in the chat input.
- Watch a new request appear instantly.
The payload of that request contains the current draft of your message – exactly what you have typed so far, even if it’s just "P" or "MySecretPassword".
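The pattern behind this is a standard web technique: an input listener watches the box, and once typing pauses past a short debounce window, the current draft is posted to the server. The sketch below simulates that logic in Python — the class name, debounce window, and payload shape are illustrative assumptions, not OpenAI’s actual client code:

```python
class DraftSyncer:
    """Simulates a debounced draft sync: each keystroke updates the
    draft; once typing pauses past the debounce window, the current
    draft is "uploaded" (here, appended to a list)."""

    def __init__(self, debounce_s=0.15):
        self.debounce_s = debounce_s
        self.sent = []            # every payload that left the "browser"
        self._draft = ""
        self._last_key_at = None

    def keystroke(self, draft, now):
        self._draft = draft
        self._last_key_at = now

    def tick(self, now):
        # Fire once the user has paused longer than the debounce window.
        if self._last_key_at is not None and now - self._last_key_at >= self.debounce_s:
            self.sent.append(self._draft)   # e.g. POST /prepare {"draft": ...}
            self._last_key_at = None

syncer, t, draft = DraftSyncer(), 0.0, ""
for ch in "MySecretPass":          # user types a secret, 50 ms per key
    draft += ch
    t += 0.05
    syncer.keystroke(draft, t)
    syncer.tick(t)                 # typing too fast: nothing fires yet
t += 0.5                           # user pauses to think...
syncer.tick(t)                     # ...and the full draft is uploaded
syncer.keystroke("", t)            # user deletes everything
t += 0.5
syncer.tick(t)
print(syncer.sent)                 # ['MySecretPass', ''] – deletion came too late
```

A longer debounce only changes how many intermediate states escape; any pause long enough to think uploads whatever is sitting in the box at that moment.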
Why Does OpenAI Do This?
The most likely reasons are feature‑related:
- Autocomplete / suggested replies – ChatGPT tries to guess what you’ll ask next.
- Context priming – pre‑fetching model responses to reduce latency when you finally hit Enter.
- Analytics – anonymised typing patterns (a use OpenAI’s privacy policy does broadly cover).
None of these are inherently malicious. But they create an unintended privacy hole.
The Real Risk: Typing Secrets Into the Box
People treat the ChatGPT input box like a disposable notepad. Common examples:
- A user types "MyEmailPassword123" to see if the AI recognises it as weak – then deletes it.
- A developer pastes an internal IP like "10.0.0.5" – then backspaces.
- Someone accidentally pastes a live API key from their clipboard – and panics, deleting it.
Because the /prepare call fires after every few keystrokes, that partial password or IP address has already been transmitted across the internet before you delete it.
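Under a strictly per-keystroke model the picture is even starker: every intermediate state of the box, including each prefix of the secret, has crossed the wire before the first backspace. A toy replay makes this concrete — the "payloads" here are hypothetical, not OpenAI’s actual wire format:

```python
def transmitted_drafts(keystrokes):
    # Replay a keystroke log and return every draft state a
    # per-keystroke sync would have uploaded; "\b" models backspace.
    draft, sent = "", []
    for key in keystrokes:
        draft = draft[:-1] if key == "\b" else draft + key
        sent.append(draft)              # each state leaves the machine
    return sent

# Type a password, erase all of it, then type the real question.
log = list("Pass123") + ["\b"] * 7 + list("Hi")
drafts = transmitted_drafts(log)

print("Pass123" in drafts)   # True – the complete secret was transmitted
print(drafts[-1])            # Hi – only the final state looks innocent
```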
What Does OpenAI Do With That Data?
According to OpenAI’s own policies:
- For consumer (free) users – data from conversations may be used to improve models.
- For API / business customers – by default, data is not used for training.
However, even if OpenAI does not train on it, logs exist. Requests pass through your ISP, Cloudflare, OpenAI’s load balancers, and internal caches. A breach, a misconfigured log, or an internal employee with access could expose those partial drafts.
Key takeaway: If you would not paste something into a public web form, do not type it into ChatGPT – even if you delete it before sending.
Does This Happen With Other AI Chatbots?
We tested a few popular alternatives (as of March 2026):
| Platform | Keystroke API Calls | Notes |
|---|---|---|
| ChatGPT | ✅ Yes (aggressive, per‑keystroke) | /prepare endpoint |
| Claude (Anthropic) | ⚠️ Partial (only after pausing) | Less aggressive |
| Gemini (Google) | ❌ No (only on submit) | Safer for drafts |
| Copilot (Microsoft) | ✅ Yes (similar to ChatGPT) | Draft preview feature |
So ChatGPT is not alone, but it is among the most aggressive.
How to Protect Yourself – Practical Steps
You don’t need to stop using ChatGPT. Just change a few habits:
1. Never Type Sensitive Data Into the Input Box
Use a local text editor (Notepad, TextEdit) as a scratchpad. Once you’re sure the message contains no secrets, copy‑paste it into ChatGPT.
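To make the scratchpad habit stick, you can run a quick check over the draft before pasting. The heuristic scanner below is a rough sketch — the patterns are illustrative starting points, not an exhaustive secret-detection tool:

```python
import re

# Illustrative patterns for common secret shapes; tune for your environment.
SECRET_PATTERNS = {
    "openai_style_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "aws_access_key":   re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_ipv4":     re.compile(r"\b(?:10|192\.168|172\.(?:1[6-9]|2\d|3[01]))(?:\.\d{1,3}){2,3}\b"),
    "password_assign":  re.compile(r"(?i)\b(?:password|passwd|pwd)\s*[:=]\s*\S+"),
}

def find_secrets(text):
    """Return the names of the secret patterns matched in a draft."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

print(find_secrets("My server is 10.0.0.5 and password=Hunter2"))
# ['private_ipv4', 'password_assign']
```

If the list comes back non-empty, rewrite the draft before it ever touches the chat box.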
2. Use a Password Manager
If you need to check a password’s strength, do it inside your password manager – not in an AI chat.
3. Block the Endpoint (Advanced)
Browser extensions like uBlock Origin can block */prepare requests. However, OpenAI may change the URL, so this is not foolproof.
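For reference, a uBlock Origin static filter along these lines would do it — note that the hostname and path used here are assumptions that may not match the live endpoint:

```
! Block ChatGPT draft-sync requests (hostname and path are assumptions)
||chatgpt.com/*prepare*$xhr
```

Add it under My filters in the uBlock Origin dashboard, then re-check the Network tab to confirm the request you saw earlier is actually being blocked.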
4. Request an “Offline Draft Mode” Feature
Write to OpenAI support. More user pressure = higher chance they add a setting to disable live keystroke calls.
5. Change Any Password You’ve Typed Into ChatGPT
Even if you deleted it. Assume it was logged somewhere. Rotate it immediately.
Final Verdict: Feature or Flaw?
The /prepare call is a well‑intentioned feature – it makes ChatGPT feel faster and smarter. But its privacy implications are under‑documented. OpenAI should:
- Add a clear warning when a user types strings that look like passwords or IP addresses.
- Provide an explicit toggle: “Disable live typing preview (send only on Enter)”.
Until then, treat every keystroke in ChatGPT as if it were already sent to a server. Because technically, it has been.
Frequently Asked Questions (FAQ)
Q: Does ChatGPT store every keystroke permanently?
A: Not necessarily. Logs may be kept temporarily for debugging and abuse prevention. OpenAI’s privacy policy states they retain conversation data, but per‑keystroke drafts are less clear.
Q: Is this illegal or a violation of GDPR?
A: Not automatically – but if typing includes personally identifiable information (PII) without clear consent, it could raise compliance questions. This is still a grey area.
Q: Can OpenAI see my password if I typed it and deleted it?
A: Technically yes – the partial string was transmitted. Whether any human looked at it is unknown. Assume the worst, change the password.
References & Further Reading
- OpenAI Privacy Policy – openai.com/policies/privacy (no affiliate)
- Chrome DevTools guide – developer.chrome.com/docs/devtools
- Tom Scott (inspiration for this video/blog) – YouTube @TomScottGo
Disclaimer: This article is for educational purposes only. No pirated, copyrighted, or misleading content is included. All tools and platforms mentioned are legal and licensed. This blog complies with Google AdSense and Analytics policies – no deceptive design, no harmful downloads, and no fake “hacking” claims.
