Millions of Private AI Chat Conversations and Images Exposed in Massive Data Leak

Millions of users' deeply personal conversations and images have been exposed due to data leaks from several AI companion applications. The breaches, affecting apps like Chat & Ask AI, Chattee Chat, and GiMe Chat, have revealed sensitive user interactions, including discussions about mental health struggles, illegal activities, and intimate fantasies. These leaks highlight significant security vulnerabilities in how AI applications handle and store user data.

Key Takeaways

  • Millions of private conversations and images from AI companion apps have been exposed.

  • The leaks stem from misconfigured databases and unprotected servers.

  • Exposed data includes sensitive personal information, financial transactions, and intimate content.

  • Users are urged to exercise caution when sharing personal data with AI applications.

The Scope of the Breach

An independent security researcher discovered that the popular "Chat & Ask AI" app, which boasts over 50 million users, had a misconfigured backend using Google Firebase. This allowed unauthorized access to approximately 300 million messages from over 25 million users. The exposed data included full chat histories, timestamps, custom chatbot names, and AI model configurations. Users had reportedly asked the AI about topics ranging from suicide and drug manufacturing to hacking.
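
The reporting does not detail exactly how the Firebase backend was misconfigured, but exposed Firebase databases are typically found by querying their public REST endpoint without credentials. The sketch below is a defensive self-check rather than a description of the researcher's method; it assumes a Firebase Realtime Database and uses a placeholder project URL.

    import requests

    # Placeholder URL -- substitute your own Firebase Realtime Database.
    # Appending ".json" to a path queries Firebase's REST API; the request
    # returns data only if the database's security rules allow public reads.
    DB_URL = "https://example-project-default-rtdb.firebaseio.com/.json"

    resp = requests.get(DB_URL, params={"shallow": "true"}, timeout=10)

    if resp.status_code == 200:
        print("Publicly readable -- top-level keys:", resp.json())
    elif resp.status_code in (401, 403):
        print("Unauthenticated reads are rejected, as they should be.")
    else:
        print("Unexpected response:", resp.status_code)

If a database answers that first request with data, anyone on the internet can read it, which is the kind of exposure described here.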

In a separate incident, two other AI companion apps, Chattee Chat and GiMe Chat, both developed by Imagime Interactive Limited, exposed millions of intimate conversations, over 600,000 images and videos, and detailed usage data from more than 400,000 users. This leak occurred because an unprotected Kafka broker instance left sensitive user data, including messages, media files, and user logs, accessible to anyone who found and connected to the server. Much of the content was described as not safe for work.

How the Leaks Occurred

The primary cause of these breaches appears to be improper security configuration. In the case of "Chat & Ask AI," a misconfiguration in its use of Google Firebase made it possible for outsiders to access the app's database without authorization. For Chattee Chat and GiMe Chat, an unprotected Kafka broker instance, essentially a data streaming service, was left open on the internet without any authentication or access controls.
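
The article does not specify how Imagime Interactive's broker was set up, but a Kafka broker exposed without authentication will stream data to any client that knows its address. For contrast, here is a minimal sketch of an authenticated client using the kafka-python library, assuming a broker configured for SASL/SCRAM over TLS; the hostname, topic, and credentials are placeholders.

    from kafka import KafkaConsumer

    # Placeholder broker address, topic, and credentials.
    consumer = KafkaConsumer(
        "chat-events",
        bootstrap_servers="broker.example.com:9093",
        security_protocol="SASL_SSL",       # encrypt traffic and require credentials
        sasl_mechanism="SCRAM-SHA-256",
        sasl_plain_username="reader",
        sasl_plain_password="change-me",
        auto_offset_reset="earliest",
    )

    for message in consumer:
        print(message.topic, message.value[:80])

An unprotected broker, by contrast, accepts a plain connection with nothing more than its address and port, which is how the exposed conversations and media could be read by anyone who located the server.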

These exposures left vast amounts of sensitive user data open to anyone who found the systems, including full chat histories, timestamps, and even details about how users configured their AI models. Some users had also spent significant amounts of money on in-app purchases, with spending in some cases reportedly reaching $18,000, and those financial details were exposed as well.

Implications for Users

These data leaks raise serious concerns about user privacy in the rapidly evolving field of AI applications. Many users treat AI chats as private journals or confidantes, sharing deeply personal information they would not disclose elsewhere. The exposure of this data can lead to severe consequences, including harassment, reputational damage, financial fraud, and targeted attacks.

While the exposed data did not always include names or email addresses directly, IP addresses and device identifiers were present, and these could be used to identify users by correlating them with data from other breaches. The potential for sextortion and phishing attacks is significant.

Protecting Yourself

Experts advise users to be cautious about the information they share with AI applications. Before using an app, research its privacy policy and data storage practices. Users should assume that conversations may be stored and limit how much deeply personal or sensitive information they share. Reviewing app permissions, limiting account linking, and using data removal services can also help reduce risk. If a breach does occur, users should change their passwords, enable two-factor authentication, and monitor their accounts for suspicious activity.

As cyber threats continue to evolve, your security strategy needs to evolve with them. BetterWorld Technology delivers adaptive cybersecurity solutions designed to keep your business secure while supporting innovation. Connect with us today to schedule a personalized consultation.


Sources

  • Chat & Ask AI app exposed 300 million messages due to misconfiguration, Fox News.

  • Massive AI Chat App Leaked Millions of Users' Private Conversations, 404 Media.

  • Millions of (very) private chats exposed by two AI companion apps, Malwarebytes.

  • AI girlfriend apps leaked millions of intimate conversations and images - here's what we know, BetaNews.
