Perplexity AI Hit With Class Action Over Alleged Hidden Tracking

A class action lawsuit has been filed against Perplexity AI, one of the more prominent AI-powered search engines to emerge in recent years. The complaint accuses the company of embedding "undetectable" tracking software directly into its search engine code, allegedly allowing user conversations to be transmitted to third parties including Meta and Google, all without user knowledge or consent.

The lawsuit puts a spotlight on a question that more users are beginning to ask: when you type a question into an AI search tool, where does that data actually go?

What the Lawsuit Claims

According to the complaint, the tracking technology was not disclosed to users and was designed to operate without detection. If the allegations are substantiated, this would mean that people using Perplexity AI to search for information, ask personal questions, or explore sensitive topics were unknowingly having those conversations shared with some of the largest data-harvesting companies on the internet.

This is not a case of buried fine print in a terms of service document. The lawsuit specifically claims the tracking was "undetectable," suggesting users had no reasonable way to know their data was being collected and forwarded.

Perplexity AI has positioned itself as a smarter, more direct alternative to traditional search engines. That positioning, combined with the nature of conversational AI (where users often ask detailed, personal questions), makes the alleged privacy violations particularly significant.

Why This Matters Beyond One Company

The Perplexity AI lawsuit is not an isolated incident. It reflects a broader pattern emerging across the AI industry, where the race to build useful products has sometimes outpaced the development of clear, honest privacy practices.

AI search tools and chatbots are different from traditional search engines in an important way: the queries tend to be more conversational and revealing. People ask about health symptoms, financial situations, relationship problems, and political views. The data generated by those interactions carries a level of personal detail that a simple keyword search rarely does.

When that data is allegedly shared with advertising giants like Meta and Google without consent, the implications stretch well beyond a single platform. Those companies have established infrastructure for building detailed behavioral profiles from data points exactly like these.

Regulators have taken notice. The lawsuit adds momentum to growing calls for stronger, more enforceable privacy laws specifically covering AI companies, which currently operate in a regulatory environment that has not fully caught up with the technology.

What This Means For You

If you use AI-powered search tools or chatbots regularly, the key takeaway from this lawsuit is straightforward: you cannot always know what is happening to your data based on a company's public image or marketing alone.

Several practical steps can help reduce your exposure:

  • Be selective about what you share. Treat AI search tools the way you would a public conversation. Avoid entering sensitive personal, financial, or medical details unless you have reviewed and understood the platform's privacy policy.
  • Check privacy policies actively. Look specifically for sections about data sharing with third parties. Vague language around "partners" or "service providers" often signals broad data sharing practices.
  • Use a VPN when browsing. A VPN encrypts your internet traffic and masks your IP address, which limits the network-level metadata that can be collected about your location and browsing habits. Be aware of its limits, though: a VPN does not stop a service from sharing what you type into it, since that kind of tracking happens at the application layer.
  • Consider privacy-focused alternatives. For sensitive queries, tools that explicitly do not log conversations or share data with third parties offer a meaningful difference in risk profile.
  • Stay informed about class action developments. If you used Perplexity AI during the period covered by the lawsuit, you may have standing as a class member.
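One small, concrete illustration of the steps above: tracking identifiers often travel inside the links you click and share, in query parameters such as Google Analytics' utm_* tags, Meta's fbclid, and Google Ads' gclid. The sketch below, a hypothetical helper (not anything from Perplexity's code or the lawsuit), strips a non-exhaustive list of those parameters from a URL using only Python's standard library:

```python
from urllib.parse import urlparse, urlencode, parse_qsl, urlunparse

# Common third-party tracking parameters (illustrative, not exhaustive):
# utm_* tags come from Google Analytics campaign links,
# fbclid from Meta, gclid from Google Ads.
TRACKING_PARAMS = {
    "utm_source", "utm_medium", "utm_campaign",
    "utm_term", "utm_content", "fbclid", "gclid",
}

def strip_tracking(url: str) -> str:
    """Return the URL with known tracking query parameters removed."""
    parts = urlparse(url)
    clean_query = [
        (key, value)
        for key, value in parse_qsl(parts.query, keep_blank_values=True)
        if key not in TRACKING_PARAMS
    ]
    return urlunparse(parts._replace(query=urlencode(clean_query)))

if __name__ == "__main__":
    dirty = "https://example.com/article?id=42&utm_source=newsletter&fbclid=abc123"
    print(strip_tracking(dirty))  # https://example.com/article?id=42
```

Removing these parameters before sharing a link is a modest habit, but it denies advertisers one easy way to tie your click back to a campaign or social profile.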

The Bigger Picture on AI Privacy

The Perplexity AI lawsuit is a reminder that privacy in the AI era requires active attention, not passive trust. Companies build reputations for being innovative, fast, or even privacy-conscious, but the technical realities of how their products handle data can tell a different story.

No single tool or habit provides complete protection, but layering your defenses makes a real difference. Understanding what data you are generating, who might receive it, and what technical protections you have in place puts you in a much stronger position than simply hoping the platforms you use are behaving as advertised.

As this lawsuit moves through the courts, it will be worth watching both the legal outcome and how Perplexity AI responds publicly. The result could set a meaningful precedent for how AI companies are held accountable for their data practices going forward.