Published in AI

Nearly a third of AI chatbots share data

21 February 2025


Google Gemini does the most evil 

AI chatbots are harvesting user data at an alarming rate, with 30 per cent of them handing it over to third parties, including data brokers.

A new report by Surfshark exposes the extent of privacy violations by popular AI chatbots, with Google Gemini leading the pack as the most data-hungry of them all.

Every AI chatbot analysed collects some form of user data, with an average of 11 out of 35 possible data types stored per app. A staggering 40 per cent of them even track users’ locations. Even more disturbingly, 30 per cent of AI chatbots engage in tracking—linking user or device data with third-party sources for targeted ads and marketing purposes.

Surfshark Chief Security Officer Tomas Stamulis said: “The apps we use every day continuously collect data, which is gathered by the developers behind them. AI chatbot apps can go even further by processing and storing conversations. This data could be used within the company or shared across third-party networks, potentially reaching hundreds of partners, and leading to highly targeted ads or an increase in spam calls."

Google Gemini is the worst offender, hoarding 22 out of 35 possible data types, including precise location data. Gemini also collects contact details, search history, browsing history, and even personal contacts stored on users’ phones. Privacy-conscious users may see this as a gross invasion of their personal space.

Other major chatbots are not innocent either. ChatGPT collects 10 types of data but allows users to delete chat history or opt for temporary chats that auto-delete after 30 days. Meanwhile, Copilot, Poe, and Jasper actively track users for advertising purposes, with Jasper harvesting device IDs, product interactions, and advertising data.

One of the biggest concerns is DeepSeek, which collects user input—including chat history—and stores it indefinitely on servers in China. Unlike US-based chatbots, DeepSeek operates outside strict privacy laws like GDPR, raising major concerns about accountability and data security.

“Unlike other AI chatbots, such as ChatGPT or Gemini, which operate under US federal law and collaborate with regulatory bodies, DeepSeek is not subject to comparable legal frameworks such as GDPR. This lack of oversight further increases concerns about accountability and data protection," Stamulis warns.

These fears are not unfounded. DeepSeek has already suffered a major breach, exposing over 1 million records of chat history, API keys, and other sensitive data.

Experts advise users to be extremely cautious when using AI chatbots. “As a rule, the more information is shared, the greater the risk of data leaks. Always be mindful of the information you provide to chatbots, review your sharing settings, and disable chat history when possible,” Stamulis said.
