Meta Contractors Accessed Private AI Chats Containing Personal Data: Report

Contractors working on Meta’s AI reportedly accessed chats containing personal data, sparking renewed scrutiny over the company’s privacy practices.
Meta Platforms, the parent company behind Facebook and Instagram, is once again under fire over privacy concerns. According to a recent report by Business Insider, contractors hired to train Meta’s artificial intelligence models were regularly exposed to sensitive and identifiable user information — including names, photos, emails, and even explicit content — during their review of AI conversations.
Several contract workers, brought on board through third-party platforms such as Outlier (owned by Scale AI) and Alignerr, told the publication that they were tasked with evaluating thousands of real conversations users had with Meta’s AI-powered assistants. In doing so, they encountered deeply personal content — from emotional outpourings and therapy-style confessions to flirtatious or romantic exchanges.
Shockingly, one worker estimated that nearly 70% of the chats they reviewed contained some form of personally identifiable information (PII). This included not only voluntarily shared names and email addresses but also images — both selfies and, in some cases, sexually explicit pictures — submitted by users who assumed their chats were private.
Supporting documents reviewed by Business Insider also revealed that, in some instances, Meta itself provided additional user background such as names, locations, and hobbies. These were reportedly intended to help the AI offer more personalized and engaging responses. However, the report adds that even when Meta didn’t provide such data, users often revealed it themselves during the course of their interactions, despite the company’s privacy policies clearly discouraging users from disclosing personal details to the chatbot.
Meta acknowledged that it does, in fact, review user interactions with AI tools to improve the system's quality. A spokesperson told Business Insider: “While we work with contractors to help improve training data quality, we intentionally limit what personal information they see.” The spokesperson added that Meta enforces “strict policies” about who can access such data and how it must be handled.
However, the contractors interviewed suggested otherwise. They claimed Meta projects exposed more unredacted personal data than similar initiatives at other tech companies. One such initiative, codenamed Omni, reportedly focused on enhancing user engagement in Meta’s AI Studio, while another project, PQPE, encouraged the AI to tailor responses based on prior user conversations or data from social media profiles.
One of the more concerning incidents cited involved a sexually explicit AI chat that contained enough identifiable information for a journalist to trace the user’s actual Facebook profile within minutes.
This report adds to Meta’s growing list of controversies surrounding its handling of user data. The company previously faced major backlash during the Cambridge Analytica scandal in 2018, as well as criticism over reports of contractors listening to users’ voice messages without adequate privacy protections.
While using human reviewers to improve AI systems is common industry practice, Meta’s history and the scale of unfiltered access reported here have reignited fears over the adequacy of its privacy safeguards.
