When a company calls a feature "Incognito," users reasonably expect their data to stay private. A new lawsuit against Perplexity AI, Google, and Meta alleges that expectation was never honored, and that millions of private conversations were quietly shared among the three companies to fuel advertising revenue. If the claims hold up, the case could become one of the more consequential privacy suits of the AI era, not just for what it reveals about Perplexity, but for what it suggests about the broader ecosystem of AI tools wrapping surveillance infrastructure in reassuring language.
The lawsuit, which targets all three companies, accuses them of sharing user chat data in ways that directly contradict the promises made to users who activated Perplexity's so-called Incognito Mode. The plaintiffs allege that rather than isolating or discarding user conversations, Perplexity was feeding that data into pipelines connected to Google and Meta, both of which have extensive advertising systems that depend on behavioral signals to target users. The word "sham" appears in the complaint not as hyperbole but as a legal characterization: the feature, plaintiffs argue, was designed to look like protection while functioning as its opposite.
This is not the first time incognito or private browsing features have been exposed as less protective than advertised. Google itself settled a class-action lawsuit in 2024 that had sought $5 billion in damages over Chrome's Incognito Mode, after plaintiffs alleged that Google continued collecting user data even when users believed they were browsing privately; as part of the settlement, Google agreed to delete billions of records gathered during private browsing sessions. That settlement, covering millions of users, did not appear to slow the industry's appetite for attaching the word "private" to features that are anything but. The Perplexity case suggests the pattern has migrated from browsers into AI chat interfaces, where the stakes are arguably higher. People tell AI assistants things they would never type into a search bar.
What makes this lawsuit particularly worth watching from a systems perspective is the incentive structure it lays bare. Perplexity is an AI search startup competing against deeply capitalized rivals. Its core product, a conversational search engine, generates enormous volumes of rich, intent-laden user data with every query. That data is extraordinarily valuable to advertising platforms. Google and Meta, meanwhile, are perpetually hungry for new behavioral signals as cookie-based tracking erodes under regulatory pressure and browser-level changes. The alleged arrangement, if proven, would represent a near-perfect exchange: a startup gets infrastructure support or revenue, and the platforms get a fresh stream of high-quality user intent data.

The feedback loop here is worth naming clearly. As privacy regulations tighten in Europe and, more slowly, in parts of the United States, large platforms face growing constraints on how they collect first-party data. That pressure creates demand for alternative data sources. Smaller AI companies, dependent on partnerships and cloud infrastructure often controlled by those same large platforms, find themselves in a structurally compromised position. The result is a system where privacy promises made to users are quietly undermined by the commercial dependencies that keep the product running at all.
The second-order consequences of this case extend well beyond Perplexity. If courts find that AI chat interfaces have been systematically misrepresenting their privacy features, the regulatory response could reshape how the entire sector discloses data practices. The Federal Trade Commission has shown increasing willingness to pursue deceptive design cases, and a finding here could establish precedent that "Incognito" or "Private" labels carry enforceable legal weight, not just marketing discretion.
There is also a trust dimension that no settlement can fully repair. AI assistants are being positioned as intimate tools: health advisors, therapy supplements, legal guides. The value proposition depends entirely on users believing their conversations are protected. Each lawsuit like this one chips away at that foundation, and the damage is not evenly distributed. Sophisticated users will read the headlines and adjust their behavior. Less informed users, often those with the most to lose from data exposure, will keep sharing sensitive information under the assumption that "Incognito" means what it says.
The deeper question the case raises is whether the AI industry will wait for regulators to define what privacy actually means in this context, or whether competitive pressure will eventually reward the companies that build genuine protection rather than the appearance of it. History suggests the former is more likely. But as AI chat logs grow richer and more personal, the political appetite for meaningful enforcement may finally be catching up to the technology.