
AI 'Girlfriends' Are Snapping Up Users' Data

Think twice before sharing your personal information with an AI 'significant other.'
By Adrianna Nine
[Image: A Luka, Inc. graphic of a woman sitting at the dinner table across from a Replika avatar. Credit: Luka, Inc.]

To some, a cheeky conversation with an AI chatbot might seem pretty innocuous—once you get past the fact that there isn’t a human on the other side. But even if you don’t think your virtual romance will end tragically à la Her (2013), your text-based flirtations still pose unique risks. Researchers involved with Mozilla’s “*Privacy Not Included” project shared this Valentine’s Day that apps offering AI girlfriends (or any AI “significant others”) almost always harvest and sell users’ personal data. It’s just like the adage says: If you’re not paying for it, you are the product.

Mozilla researchers scoured the privacy disclosures, terms and conditions, and marketing materials of 11 romantic AI chatbot apps. These apps ranged from Chai—which was infamously linked to a user taking his own life last year—to Anima, which appears to offer AI girlfriends and boyfriends complete with AI-generated art. The team found that the companies behind the chatbots refuse to take responsibility for anything the chatbots say or prompt you to do. This was the case even for apps that bill themselves as "self-help" (Talkie Soulful AI), "wellbeing" (EVA AI), or "mental health" (Romantic AI) providers.

[Image: A screenshot of EVA AI marketing material encouraging users to share "all their likes and dislikes." Credit: Novi Limited]

Why does this matter? Romantic chatbots are designed to be conversational, asking you anything from how you're feeling on a particular day to your deepest, most secret desires. These conversations prompt users to share deeply personal information, whether in a romantic capacity or because the apps hosting them explicitly claim to support users' mental wellbeing. And while your virtual girlfriend might appear empathetic to your struggles or open to your weirdest sexual fantasies, she's just a large language model with a cute coat of paint, and she—or rather, the company that made her—is prepared to sell your data to brokers in 90% of cases.

Even if Mozilla hadn't found evidence of the above, these apps' poor privacy practices would make them dangerous. Most of the apps (73%) neglected to say how they manage security vulnerabilities, and nearly as many (64%) published no clear information about encryption or whether they use it. More than half (54%) don't let users delete their data, and almost half (45%) allow weak passwords, including literally just the number "1."

Ultimately, all 11 chatbots earned Mozilla's "*Privacy Not Included" warning label, which it uses to alert potential hardware and software users to confidentiality pitfalls. Once a product has received this label, site visitors can rate it on a scale from "not creepy" to "super creepy." While each chatbot's visitor rating varies, every one of them appears to have at least crossed the "somewhat creepy" threshold.
