Users often believe their interactions with AI chatbots remain private, but a recent study reveals otherwise. Researchers from IMDEA Networks Institute disclosed on May 4 that the four leading AI assistants—ChatGPT, Claude, Grok, and Perplexity—silently share data with third-party advertising and analytics services such as Meta, Google, and TikTok. The research project, named LeakyLM, identified over 13 trackers embedded within these platforms, none of which are clearly disclosed to users.
Here’s how it works: every time a chat page loads, tracking scripts embedded in it contact ad networks, sending your advertising identifiers, the current page URL, and sometimes even your typed messages. Even seemingly benign leaks, such as conversation URLs (the web address specific to your chat), pose significant privacy risks. Many platforms make these URLs publicly accessible by default, so anyone with the link can view the conversation without logging in. When those URLs are sent to Meta’s or Google’s ad systems, those companies gain potential access to read your chats.
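The mechanism the researchers describe can be sketched roughly like this. The Python below is an illustration, not any platform’s actual code; the endpoint, parameter names, and cookie value are all hypothetical. The point is simply that the full page URL, which is the conversation permalink, travels to the ad network alongside a persistent advertising identifier.

```python
from urllib.parse import urlencode

# Hypothetical ad-network endpoint; real trackers use their own domains.
TRACKER_ENDPOINT = "https://ads.example.com/collect"

def build_beacon_url(page_url: str, ad_cookie_id: str) -> str:
    """Build the kind of request a tracking script fires on page load.

    The conversation permalink (page_url) and a persistent advertising
    identifier are bundled into a single request to the ad network.
    """
    params = urlencode({"dl": page_url, "uid": ad_cookie_id})
    return f"{TRACKER_ENDPOINT}?{params}"

beacon = build_beacon_url(
    "https://chat.example.com/share/abc123",  # conversation permalink
    "ad-cookie-42",                           # hypothetical tracking cookie
)
print(beacon)
```

Anyone receiving that request now holds both a stable identifier for you and the address of your chat.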
“Leaking a URL isn’t just metadata; it can equate to leaking the entire conversation,” according to the researchers.
Grok, Elon Musk’s AI chatbot from xAI, is particularly exposed. Guest conversations are publicly accessible by default, allowing anyone to read them without logging in. TikTok goes further, receiving not only URLs but also the actual content of messages through Open Graph metadata, which creates preview images for shared links.
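Open Graph tags are the same metadata any website uses to generate link previews; when a chat page puts message text into them, any service that fetches the page can read that text without parsing the page body. A minimal sketch of that extraction, using Python’s standard-library HTML parser (the page content below is invented for illustration):

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collect Open Graph <meta property="og:..."> tags from a page."""

    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        prop = a.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = a.get("content", "")

# Invented example of a shared-chat page exposing message content.
page = """
<html><head>
<meta property="og:title" content="Shared conversation">
<meta property="og:description" content="User: what are my symptoms of...">
</head><body></body></html>
"""

parser = OGParser()
parser.feed(page)
print(parser.og["og:description"])  # the message text sits right in the metadata
```

A link-preview crawler never needs to “open” the chat in any meaningful sense; the content is handed over in the page header.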
While Claude (Anthropic) and ChatGPT (OpenAI) have better access controls—keeping chats private unless explicitly shared—they still send conversation URLs and identifying data like advertising cookies to Meta and Google. For Claude, this data is routed through 11 advertising platforms via Anthropic’s servers, not the browser, rendering ad blockers ineffective.
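The server-side routing matters because ad blockers work by intercepting requests the browser makes; a request sent from the provider’s own backend never passes through the browser at all. A simplified illustration of the difference (all hostnames and functions here are hypothetical):

```python
# Client-side tracking: the browser itself requests the tracker URL,
# so an ad blocker can match the hostname and drop the request.
BLOCKLIST = {"ads.example.com"}

def browser_fetch(url: str) -> bool:
    """Return True if the browser actually sends the request."""
    host = url.split("/")[2]
    return host not in BLOCKLIST

# Server-side tracking: the browser only talks to the AI provider;
# the provider's backend forwards metadata to ad platforms itself,
# so the browser-side blocklist never sees those requests.
def provider_backend(chat_url: str) -> list[str]:
    forwarded = []
    for partner in ["ads.example.com", "analytics.example.net"]:
        forwarded.append(f"https://{partner}/ingest?dl={chat_url}")
    return forwarded

print(browser_fetch("https://ads.example.com/collect"))       # blocked
print(browser_fetch("https://chat.example.com/api/send"))     # allowed
print(len(provider_backend("https://chat.example.com/c/1")))  # both partners reached
```

In the second pattern, the only request the user can block is the one to the provider itself, which would break the chat entirely.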
Perplexity removed its Meta tracker last month.
The study hasn’t confirmed that Meta or Google have actually read any chats, but it shows the infrastructure for such access is already in place. “LLMs offer privacy controls to limit conversation visibility but may mislead users about the extent of actual protections,” the researchers note. The risk lies in the permalinks: once a conversation URL reaches a tracker, whoever operates that tracker is in a position to fetch the conversation.
This is not the first instance of AI platforms under scrutiny for privacy issues. Claude recently required government ID verification for new subscribers, prompting backlash from users who had previously switched from ChatGPT due to surveillance concerns, as Decrypt reported last month.
Currently, practical measures are limited. On Grok, adjust conversation visibility settings and revoke any shared links. For Claude, rejecting non-essential cookies disables the Meta Pixel. On Perplexity, set conversations to Private. On ChatGPT, rejecting cookies can reduce exposure, though Google Analytics remains active for free logged-in users.
For comprehensive protection, consider consulting our guide on AI Privacy. Researchers aim to expand their analysis to include Meta AI, Microsoft Copilot, and Google Gemini, which were excluded from this study due to their dual roles as AI providers and ad companies, complicating the threat model.
The findings were submitted to Data Protection Authorities on April 13, 2026, with xAI notified by April 17. None of the companies had responded at the time of publication.