Could Your ChatGPT Conversations Leak to Google? The Story of Tech “Self-Doxxing” and a Must-Learn Lesson for Software Engineers
- Krzysztof Kosman
- Aug 2
- 2 min read

Have you ever imagined that fragments of your private conversations with ChatGPT could show up… on Google’s front page? That’s exactly what happened to thousands of users worldwide—with the case making waves across the software industry.
OpenAI reacted swiftly, but the lesson about default privacy and feature design will linger for a long time. Find out how this situation unfolded, why it's a real warning for software engineers, and which best practices are essential when building services with users in mind.
When “Share” Turns Into… “Index”: The “ChatGPT Leak”
At the end of July 2025, the internet was shaken: users discovered that their ChatGPT conversations, shared via public links, were showing up verbatim in Google search results. All because of the seemingly innocent “Make this chat discoverable” option, which allowed search engine bots to index publicly shared conversations. For some it was a convenient shortcut; for others, a potential disaster.
You could find everything: lighthearted dialogues, dramatic accounts of personal trauma, names, email addresses, and even confidential business information!
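OpenAI hasn't published its implementation, but the underlying mechanics are standard web plumbing: a public page is fair game for crawlers unless it tells them otherwise. Below is a minimal, hypothetical sketch (TypeScript on Node, with made-up names like SharedConversation and an in-memory store) of a share endpoint that keeps conversations out of search results by default, using the standard X-Robots-Tag header and robots meta tag, and only omits them when the owner has explicitly opted in.

```typescript
import { createServer } from "node:http";

// Hypothetical in-memory store of shared conversations.
// "discoverable" is the only thing that should ever allow indexing.
interface SharedConversation {
  id: string;
  html: string;
  discoverable: boolean; // false unless the owner explicitly opted in
}

const shared = new Map<string, SharedConversation>([
  ["abc123", { id: "abc123", html: "<p>…conversation…</p>", discoverable: false }],
]);

const server = createServer((req, res) => {
  const id = (req.url ?? "").replace("/share/", "");
  const convo = shared.get(id);

  if (!convo) {
    res.writeHead(404);
    res.end("Not found");
    return;
  }

  // Unless the owner opted in, tell crawlers to stay away via the HTTP header...
  if (!convo.discoverable) {
    res.setHeader("X-Robots-Tag", "noindex, nofollow");
  }

  // ...and via the robots meta tag, for crawlers that only read the HTML.
  const robotsMeta = convo.discoverable
    ? ""
    : '<meta name="robots" content="noindex, nofollow">';

  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(`<!doctype html><html><head>${robotsMeta}</head><body>${convo.html}</body></html>`);
});

server.listen(3000);
```

The page stays reachable by anyone who has the link, but search engines are told not to index it unless the owner made a deliberate choice to be discoverable.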
OpenAI – Rapid Response and Feature Removal
When the issue went public, OpenAI almost immediately announced that the feature would be removed entirely. Dane Stuckey, the company's CISO, didn't mince words: the feature created too much risk of accidentally exposing information that should never see the light of day.
Within 48 hours, “Make this chat discoverable” was gone for all users—and the company began working with search engines to purge existing links from results.
It’s Not Just OpenAI
It turns out that similar risks threaten users of other AI platforms, including Meta AI. This signals to the whole industry: when you share content via a “share” feature, you can’t be sure where it will land—or who might see it.
Three Golden Rules for Software Engineers (and AI Users!)
Privacy-by-default architecture: sharing should be off by default, and any feature that exposes content publicly must be clearly and unambiguously labeled as such (see the sketch after this list).
Never share sensitive data with AI! Even without names, it's often surprisingly easy to identify someone from the details of a conversation.
Review your shared links and use the management tools available. Check whether you've shared anything that should remain private.
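To make the first and third rules concrete, here is a small, hypothetical sketch (the types and function names are illustrative, not any real ChatGPT or OpenAI API) of a share-link model where discoverability defaults to off and must be requested explicitly, plus helpers that let a user review and revoke everything they've shared.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical share-link record: the risky flag defaults to "off".
interface ShareLink {
  id: string;
  ownerId: string;
  createdAt: Date;
  discoverable: boolean; // search engines may index only when this is true
}

const links: ShareLink[] = [];

// Rule 1: creating a link never makes it discoverable implicitly.
// The caller must opt in, and the default is false.
function createShareLink(ownerId: string, opts: { discoverable?: boolean } = {}): ShareLink {
  const link: ShareLink = {
    id: randomUUID(),
    ownerId,
    createdAt: new Date(),
    discoverable: opts.discoverable ?? false, // privacy by default
  };
  links.push(link);
  return link;
}

// Rule 3: users need an easy way to audit what they've shared...
function listShareLinks(ownerId: string): ShareLink[] {
  return links.filter((link) => link.ownerId === ownerId);
}

// ...and to take it back.
function revokeShareLink(ownerId: string, id: string): boolean {
  const index = links.findIndex((link) => link.ownerId === ownerId && link.id === id);
  if (index === -1) return false;
  links.splice(index, 1);
  return true;
}

// Usage: sharing is safe by default; discoverability is a separate, deliberate step.
const link = createShareLink("user-42");
console.log(link.discoverable); // false
console.log(listShareLinks("user-42").length); // 1
revokeShareLink("user-42", link.id);
```

The point isn't this exact code; it's that the dangerous state requires a deliberate decision at every layer, and that undoing it is as easy as creating it.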
For Us—Software Engineers
This is a case study that belongs in every presentation on UX, privacy by design, and responsible coding. Every interface and every new option is not just a convenience but a potential vector for user data leaks.
Take this as a lesson—and build products you’d feel safe using yourself!
In Conclusion
OpenAI’s experiment proved that even the biggest players can overlook a critical user-facing detail. Let’s safeguard our users—even when something “probably won’t harm anyone.” And if you’re using AI? Always think twice before sharing another snippet of your conversation with the world.