When apps leak our data, who is responsible?
A recent cyberattack exposed the sensitive personal data of thousands of women who used the Tea Dating Advice app to discuss and review men they date. A few days later, a California jury found that Meta wrongfully collected data from women using the period-tracking app Flo.
The steady drumbeat of high-profile app hacks and leaks has become background noise for many consumers — in 2024 alone, 1.7 billion people had their personal data compromised, according to data from the Identity Theft Resource Center. Among the recent targets are genetic data company 23andMe, Microsoft’s workplace software and Tea, which explicitly billed itself as a safety app for women.
On Tuesday, a California judge consolidated five class-action lawsuits from Tea users accusing the company of failing to protect their sensitive information. The plaintiffs include a single mother fleeing domestic violence and a woman who posted on Tea about an alleged rapist in her community. After the Tea hack, people online used the leaked data to create a map of users’ locations. Others shared users’ photos along with misogynistic insults.
Tea and Flo are both still operating and available in major app stores. It’s a good reminder of how much sensitive information we turn over to our apps and how little recourse we have when things go wrong.
Online safety advocates have been warning for years that our apps — from big-name mainstays to relative newcomers like Tea — collect too much data and store it unsafely. But despite a stream of unnerving hacks, not much has changed, they say. The United States still doesn’t have a comprehensive data privacy law. Tech companies, increasingly aided by AI programs that write code, rush products to market without proper safety measures. And consumers are left to fend for themselves, according to tech and security experts.
“It’s not uncommon among software developers — especially small, scrappy startup kind of stuff — to not even know how to store this information securely,” said Chester Wisniewski, a global director at cybersecurity company Sophos.
You couldn’t blame app users for wondering: When cybersecurity disaster strikes, who should be held responsible?
Tea shot to the top of the Apple App Store in July as videos trended on social media discussing the app’s controversial features, including one that lets women rate and review the men they date, complete with “red flags,” “green flags” and photos. Soon after, people on Reddit and 4chan called for the app to be targeted, and hackers found and shared the selfies, government IDs and direct messages of thousands of Tea users.
Since the hack, Tea has continued to post lighthearted content promoting itself on its Instagram page. Last week, it published a statement addressing the breach, saying it was taking its direct message system down out of an “abundance of caution.”
But the app’s setup reflects a lack of safety precautions and security testing, putting users at risk from day one, said Dave Meister, a global head at cybersecurity research firm Check Point Software. Like many app startups, Tea appears to have released a product that looks good on the front end but lacks appropriate security infrastructure on the back end, he said. In this case, an exposed database let bad actors easily access troves of sensitive information, according to Meister.
“The fact that [the hackers] got in and just got free rein in the style which they did makes it very clear that the security there wasn’t adequate and probably hadn’t been considered as a part of the development of the application,” he said.
Tea’s founder and CEO, Sean Cook, has said that he got the idea for the app after watching his mother struggle with catfishing online. Cook previously worked as a product manager at Salesforce, Shutterfly and other tech companies, according to his LinkedIn profile. Cook, through the company’s PR firm, declined to be interviewed for this story or to comment on the breach.
Tea spokesperson Taylor Osumi said Wednesday in an emailed statement that the company “remains fully engaged in strengthening the Tea App’s security, and we look forward to sharing more about those enhancements soon.” Tea will provide “free identity protection services” to affected individuals, according to the statement.
Apple, meanwhile, is still hosting the Tea app as well as the similar TeaOnHer app in its online store. Its guidelines require that apps “implement appropriate security measures to ensure proper handling of user information” and “prevent its unauthorized use, disclosure, or access by third parties.”
When Apple finds that an app is out of compliance, it contacts the developer to explain the violation and gives them time to resolve it, Apple spokesperson Peter Ajemian said. He declined to comment on the Tea app specifically.
With companies and app stores often passing the buck, it might fall to regulators to keep consumers safe, security experts say. Last week’s ruling against Meta over the Flo app came after the Federal Trade Commission accused Flo in 2021 of misleading users about how it handled their health data. A group of users also sued Flo over its privacy practices. Flo settled both cases without admitting wrongdoing.
But while regulators catch up, tech industry changes are putting consumers at greater risk from shoddy apps, Wisniewski said. For example, “vibe coding,” in which people use AI tools to write software programs, lets inexperienced developers spin up new apps with just a few typed commands.
“Everybody’s talking about vibe-coding,” he said. “You think these apps are bad now? Wait until AI starts writing them, they’re going to be a hundred times worse.”
Unsafe apps pose an outsize risk to women and other vulnerable groups, said Michael Pattullo, senior threat intelligence manager at Moonshot, a company that monitors online dangers. Moonshot has recorded an average of 3,484 violent threats against women per month in high-risk online spaces such as 4chan since it started monitoring in 2022. Data breaches fuel this ecosystem and put users at risk of physical harm when their names or addresses are leaked, Pattullo said.
Social media platforms don’t do enough to stop the spread of leaked information, he noted. Mainstream social media sites took down 28 percent of the violative posts Moonshot flagged in 2024, the company says. So far this year, that rate has decreased to 6 percent.
Without tech companies, social platforms and app stores keeping users safe, the burden falls on regular people to withhold their data or try to guess which apps are trustworthy, Pattullo said.
“A user isn’t joining any of these platforms expecting to have their privacy and physical security at risk, just by being in an online space, especially one that presents itself as secure,” he said. “The one who has to take accountability and responsibility for this isn’t the user, right?”