
Meta Ray-Ban Glasses Are Recording People in Private and Sensitive Situations


A significant privacy breach involving Meta’s smart glasses has been exposed through a joint investigation by two Swedish newspapers, Svenska Dagbladet and Göteborgs-Posten. The core of the scandal lies in how the glasses’ AI assistant operates: when a user activates it by saying “Hey Meta,” the captured footage is sent to human contractors in Kenya for review.

These contractors view extremely private content, far beyond public scenes like traffic or nature. They have access to highly sensitive recordings, including footage from bathrooms and bedrooms, images of naked individuals, and visible bank card details. The scale of the issue is alarming: Meta sold over 7 million pairs of smart glasses in 2025 alone, and the majority of buyers are reportedly unaware of where their recordings ultimately end up.

The Initial Investigation

Kenyan contractors review Meta Ray-Ban footage showing users undressing, using toilets, and sharing bank card details without their knowledge. Credit: Shutterstock

On February 27, 2026, the newspapers Svenska Dagbladet and Göteborgs-Posten jointly published the results of their investigation into Meta’s data annotation pipeline. Their report included interviews with over thirty employees at various levels of Sama, a Nairobi, Kenya-based data annotation company. Sama contracts with Meta to train its AI systems. This training involves workers labeling images, video, and speech, including drawing bounding boxes, assigning object labels, and performing quality assurance on footage captured by Meta’s glasses.

Workers describe what they actually see

The content described by contractors raised serious concerns among privacy advocates around the world. Workers reported reviewing footage that included people using the toilet or undressing, apparently without their knowledge that they were being filmed. One contractor explicitly told Swedish journalists, “I don’t think they know, because if they knew they wouldn’t be recording.” Additionally, employees described reviewing footage of explicit sexual acts, users watching pornography, and bank card information on their screens without any censorship.

A bedside table and an unaware wife

A particularly disturbing instance highlights the issue of non-consensual recording by Meta glasses. A user unintentionally filmed their wife undressing when the active glasses were left recording on a bedside table. Crucially, the wife was unaware that an intimate moment was being captured. This highly sensitive footage subsequently became part of a dataset processed thousands of kilometers away in Kenya.

According to workers, this type of footage was not rare but a regular occurrence. One annotator’s statement to investigators perfectly encapsulates the vast and intrusive nature of the content Sama employees review daily: “We see everything, from living rooms to naked bodies.”

Phones are banned at work for a reason

Sama forbids personal phones in its offices to prevent the unauthorized release of footage; violating the confidentiality policy can result in dismissal. Several employees told Swedish journalists that the material they handle is highly sensitive and that any disclosure could trigger a major public scandal. Despite these strict security rules and high stakes, a continuous flow of sensitive footage enters the annotation process.

Meta’s Ray-Ban glasses blurring system frequently fails, leaving faces visible in recordings sent to overseas workers for AI training. Credit: Pexels

Meta asserts that it automatically blurs faces in footage before it is sent for annotation, a system former employees have confirmed is in place. On paper, this sounds like a reasonable safeguard intended to strip identifying features from recordings, but in practice the intended protection is often not achieved. In theory, workers should never see a recognizable face; the gulf between that theory and reality has proven significant.

Workers say the algorithms are constantly missing

Data annotators in Kenya have reported that the automated blurring system frequently fails, often leaving faces fully visible in the material they review. According to a former Meta employee, these failures occur because the algorithms “sometimes miss,” particularly under “difficult lighting conditions,” which can make specific faces and bodies visible. Workers confirmed that poor lighting, rapid movement, and unusual camera angles are common factors that defeat the automated system, leading to frequent instances of unblurred content rather than isolated cases.
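The reported failure mode can be sketched in a few lines of Python: detection-based blurring "fails open," meaning anything the detector fails to flag stays fully visible. This is an illustrative model only; the function names, confidence threshold, and detector stub are hypothetical, not Meta's actual pipeline.

```python
def detect_faces(frame):
    # Stand-in for an ML face detector: returns (region, confidence) pairs.
    # Real detectors lose confidence in poor lighting, fast motion,
    # or unusual camera angles.
    return frame.get("candidate_faces", [])

def blur_visible_faces(frame, threshold=0.6):
    # Blur only detections above the confidence threshold.
    # Anything the detector misses is left fully visible: the system
    # fails open rather than blurring by default.
    blurred, missed = [], []
    for region, confidence in detect_faces(frame):
        (blurred if confidence >= threshold else missed).append(region)
    return blurred, missed

# A well-lit face is caught; a dimly lit one slips through unblurred.
frame = {"candidate_faces": [("face_in_daylight", 0.92),
                             ("face_in_dim_room", 0.31)]}
blurred, missed = blur_visible_faces(frame)
print(blurred)  # ['face_in_daylight']
print(missed)   # ['face_in_dim_room']
```

The design choice matters: a system that blurred everything and only unblurred confirmed non-faces would fail closed, at the cost of less useful training data.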

A privacy tool that creates false confidence

Meta’s content-blurring system creates a risky paradox. Marketed as a responsible data protection measure, it leads users to believe all recordings are secure, while bystanders often remain unaware they are being filmed at all. Yet the system frequently fails, and workers regularly see unblurred, sensitive footage: even where the blurring mostly works, it still overlooks countless critical moments. This gap between the company’s assurances and the system’s actual performance is fueling growing legal concerns.

No real way to opt out

In April 2025, Meta updated its privacy policy to enable AI camera features by default, unless users choose to deactivate the “Hey Meta” voice command. The option to opt out of storing voice recordings in the cloud was removed, and Meta now retains recordings for up to a year to improve its services. Users must manually delete individual clips using the companion app. These changes mirror Amazon’s recent update to its Echo devices, which replaced local processing with mandatory cloud processing.

While Meta’s terms of service, which users must agree to, permit the company to use human reviewers for assessing AI interactions (a clause buried in the legal text), this agreement is only between the wearer of the glasses and Meta. Individuals who are recorded by the glasses in private settings, such as a living room, bedroom, or medical office, have not signed any contract with Meta and have not provided consent. Despite this lack of consent, their images are still fed into the same AI annotation pipeline.

GDPR demands consent from data subjects

Under European data protection law, consent is mandated from every individual whose image or personal information is captured, a requirement extending beyond the device owner. The General Data Protection Regulation (GDPR) treats bystanders as data subjects, meaning Meta cannot collect their data or have their private moments reviewed by offshore workers without their explicit consent. According to Kleanthi Sardeli, a data protection lawyer at the privacy nonprofit NOYB, this presents “a clear transparency problem.” She cautions that users effectively forfeit control over their footage once it enters Meta’s training pipeline.

Sweden’s privacy authority weighs in

Petter Flink, an IT and security specialist at IMY (the Swedish Authority for Privacy Protection), offered a stark view of the user’s lack of awareness regarding this type of technology. “The user really has no idea what is happening behind the scenes,” Flink stated. This assessment addressed not only Meta’s practices but also the wider issue of gaining informed consent for camera-equipped wearable devices. A core problem is that few individuals read the terms of service, and even fewer grasp what happens to their data after accepting those conditions. The findings of the Swedish investigation starkly highlighted this pervasive lack of knowledge.

Kenya lacks EU adequacy status

The European Commission has not granted Kenya an EU adequacy decision, indicating it does not consider Kenya’s data protection framework equivalent to GDPR standards. Although both parties discussed a mutual adequacy agreement in 2024, they have reached no subsequent decision. This regulatory void raises significant concerns about the legality of transferring European user data to Nairobi for annotation. Under GDPR, cross-border data flows require justification through either an adequacy decision, the use of standard contractual clauses, or explicit consent. Meta has not publicly disclosed the specific mechanism it uses to ensure compliance for this data pipeline.

Seven Million Cameras Walking Around

Sales of Meta and EssilorLuxottica’s Ray-Ban smart glasses have surged, tripling to over 7 million units in 2025 alone. This dramatic increase is a significant jump from the combined 2 million units sold in 2023 and 2024.

Confirming the 2025 figure during EssilorLuxottica’s Q4 2025 earnings report, CEO Francesco Milleri noted that the total number of units sold since the product’s launch now stands at approximately 9 million. This performance puts the companies on track to hit their original projection of ten million annual sales by 2027, well ahead of schedule.

Why Meta succeeded where Google Glass failed

While Google Glass failed partly due to its conspicuous design, which led to social backlash, bans in public places, and the derogatory term “Glassholes,” Meta has approached the issue differently. The futuristic, unusual appearance of Glass caused a social stigma that stifled its adoption and prevented product maturity.

Meta’s Ray-Ban smart glasses successfully addressed this aesthetic problem by looking indistinguishable from standard Ray-Ban sunglasses. Consequently, most people cannot easily tell if a wearer is recording. However, by solving the problem of appearance and social acceptance, Meta has left the underlying privacy concerns completely unresolved.

Facial recognition looms on the horizon

According to internal Meta documents revealed by The New York Times in February 2026, Meta is developing a feature for its glasses called “Name Tag.” This function uses the glasses’ camera to capture a person’s face in real time. Meta’s AI would then cross-reference this image with public information to display the person’s name and social media profiles to the wearer. Mark Zuckerberg reportedly champions the feature as a way to distinguish Meta’s glasses from competitors’. An internal memo from May 2025 indicated that the company planned to launch “Name Tag” first at a conference for the visually impaired before a public release.

Harvard students already proved it works

In October 2024, two Harvard students, AnhPhu Nguyen and Caine Ardayfio, independently demonstrated a concept using Meta Ray-Ban glasses and facial recognition technology. They created software named I-XRAY, which streamed video from the glasses to a laptop running PimEyes. This setup allowed them to identify strangers in public and retrieve their names, addresses, and workplaces. The students were able to build the complete system in just four days, proving that while Meta’s glasses do not inherently feature facial recognition, the existing hardware is readily capable of supporting it with minimal modification.

The first lawsuit lands in California

A class action lawsuit was filed against Meta and Luxottica of America by the Clarkson Law Firm on March 5, 2026, in the US District Court for the Northern District of California. The plaintiffs, New Jersey resident Gina Bartone and California resident Mateo Canu, allege that Meta violated consumer protection laws. The complaint argues that the company misled consumers about how their footage is handled, citing marketing claims such as “built for your privacy” and “designed for privacy, controlled by you.”

The UK regulator demands answers

The UK’s Information Commissioner’s Office responded quickly after the Swedish investigation went public. The ICO confirmed it wrote to Meta to demand clarification on how the company meets its obligations under UK data protection law. The regulator described the allegations as “concerning.” It emphasized that any organization deploying products that capture personal data must remain transparent about what it collects and who has access to it. Meta responded with a statement referencing its privacy policy and the use of contractors to improve user experience.

European Parliament members push for accountability

Direct action was taken by two Italian Members of the European Parliament (MEPs), Sandro Ruotolo and Nicola Zingaretti of the S&D group. They formally addressed the Irish Data Protection Commission (DPC), the primary regulator for Meta in the European Union, with a letter seeking clarification on two points: whether investigations have been launched and what safeguards are in place for data accessed by non-EU suppliers. Concurrently, a separate group of 17 MEPs, representing four political groups, formally inquired of the European Commission whether Meta’s current practices comply with the General Data Protection Regulation (GDPR).

Meta’s defense relies on fine print

Company spokesperson Christopher Sgro explained that contractors occasionally review content users share with Meta AI to enhance the user experience. Sgro defends the practice, stating that Meta employs safeguards to filter data and prevent reviewers from accessing identifying information. Meta also asserts that recordings remain on the user’s device unless the user chooses to share media. Critics argue that this defense is insufficient, noting that simply using the glasses’ AI features triggers the very “sharing” that raises the main concern.


The Bigger Picture for Wearable Privacy

Sama, formerly known as Samasource, provides data annotation services to major technology companies like Meta and OpenAI. Its partnership with Meta started with a contract in 2017. The company employs around 1,500 workers in Kenya for data labeling tasks.

Despite working with leading tech firms, Sama has faced accusations of labor violations in previous contracts, especially those involving OpenAI. Additionally, a Kenyan court has ruled that Meta and Sama can be legally targeted in Kenya over allegations of unfair dismissals. This scrutiny has highlighted concerns not only about privacy but also about the often-hidden human labor that supports AI systems worldwide.

Retail staff do not know what the glasses do

When the Swedish investigation looked into stores selling the glasses, a significant lack of knowledge about the device’s data practices was evident among retail staff. Many employees could not explain what data the glasses transmit, where recordings are stored, whether Meta automatically receives footage, or how voice and video recordings are processed post-capture. This inability of salespeople to clarify crucial data practices leaves consumers with insufficient information to make truly informed purchasing decisions.

The road ahead remains uncertain

Regulators across multiple jurisdictions now scrutinize Meta’s smart glasses. The US class action sits in its early stages, and Meta has not yet filed a formal response. Discovery and class certification could take one to three years. EU data protection authorities may initiate their own inquiries beyond the Irish DPC. Meanwhile, Meta continues developing new features like Name Tag while selling millions more units each quarter. One Kenyan annotator offered a sobering perspective on the whole situation. “You think that if they knew about the extent of the data collection, no one would dare to use the glasses,” the worker said. Seven million people bought them in 2025 anyway.

