Kenya became the latest country to launch an investigation into the data privacy of content captured by Meta’s Ray-Ban glasses. The smart glasses are the latest innovation in wearable AI, allowing users to search for information, answer calls, and capture photos and videos hands-free.

However, workers involved in processing the video data say they see far more than users bargained for. A report from Kenya’s Office of the Data Protection Commissioner has raised concerns about mass surveillance, non-consensual recording of intimate moments, and unlawful data processing.

    Kenya now joins the U.S. and the U.K. in increasing pressure on Meta to strengthen its data privacy and security practices.

    How the Glasses Work 

    At first glance, Meta’s Ray-Ban smart glasses look like a regular pair of sunglasses. But built into the frame are a camera, microphones, speakers, and an AI assistant that lets users interact with the device via voice commands. Users activate the glasses by saying, “Hey Meta.” 

    The glasses store captured content on the local device. However, the data pipeline changes when users engage with the AI assistant. When users issue prompts, such as asking the AI to analyze a scene or respond to a voice command, Meta can send that interaction to its servers to improve its systems.

    Meta may then use this data to train AI models, often running it through a review process where human annotators assess and label the content. Users cannot opt out of this data processing, raising concerns over data ownership and privacy.

    Inside the Data Pipeline 

    A deeper probe into how the data is processed has revealed troubling findings. In February, Swedish publications Svenska Dagbladet and Göteborgs-Posten released an investigation into Meta’s AI training supply chain. They identified Samasource Impact Sourcing, a California-based company with offices in Nairobi, as one of Meta’s subcontractors.

    Kenyan workers tasked with reviewing footage report exposure to far more than expected: people undressing, using the bathroom, having sex, and even visible credit card information.

    “We see everything, from living rooms to naked bodies. Meta has that type of content in its databases,” one worker said in the report. 

    The Human Cost of Training AI Models 

There is a growing workforce of AI data laborers tasked with training AI models. Companies often outsource these roles to workers in the Global South, who earn a fraction of what their Western counterparts make. Samasource has faced scrutiny before; in 2023, the company came under fire over working conditions tied to content moderation for OpenAI and Meta. Workers were tasked with filtering some of the most disturbing and harmful content out of the systems to make them safe for public use.

Reviewing such distressing content for hours on end takes a psychological toll on the people behind the screens. Workers have reported nightmares, anxiety, and dread about going to work, never knowing what they’ll see that day.

    What companies frame as automated intelligence actually relies heavily on invisible human labor.

    Privacy Under Scrutiny

    In 2025, Meta and its partner EssilorLuxottica sold more than 7 million pairs of glasses. Marketing campaigns promoted the device with claims such as “designed for privacy, controlled by you” and “built for your privacy.”

Meta has stated that it blurs faces in footage used to train its AI systems. However, workers involved in the data review process report that many videos still contain unblurred faces. Meta already faces a class-action lawsuit in the U.S. and an investigation by the U.K.’s Information Commissioner’s Office into whether the glasses comply with British data protection law.

    “We have already been in contact with Sama, and they confirmed that they are not aware of any workflows where sexual or objectionable content is present, or where faces or sensitive content are continually unblurred. We will continue to investigate this,” said a Meta spokesperson in response. 

    Bystander Privacy and Consent 

The glasses do not only capture the wearer’s data; they also expose everyone around them. Wearable devices make it far less obvious when recording is taking place, raising serious concerns about consent in everyday interactions.

    While Meta has designed safety features into the glasses, such as a small LED light that activates during recording, critics argue that these signals are easy to miss. Some users have even tried to conceal the light, increasing the risk of recording others without their knowledge in both public and private settings.

As a result, moments that were once assumed to be unrecorded, from casual conversations to more intimate situations, can now be captured passively. This shift challenges traditional ideas of consent and raises the risk of widespread surveillance without people’s knowledge.

    With several countries pursuing legal action against Meta, the company will have no choice but to rethink its data collection and privacy practices. The glasses may be designed for convenience, but true privacy remains the ultimate luxury. 

Main Image: Meta Ray-Ban glasses via Wikimedia

    The post Kenya Probes Meta Ray-Ban Glasses Over Privacy Concerns appeared first on UrbanGeekz.
