Not a Good Look, AI: What Happens to Privacy When Glasses Get Smart?

In recent months the public has begun to wake up to a new kind of privacy threat: wearables that record without your knowledge, and do so with increasing subtlety. One especially stark example is the Ray‑Ban smart-glasses line developed in partnership with Meta Platforms, where the built-in LED meant to signal recording can in fact be disabled or covertly obscured.

According to a deep dive by 404 Media [a link to their article can be found in the comments below], hobbyists are even offering mods that disable the white LED while the camera remains fully functional.

Why this matters

The LED on these glasses is more than a gimmick: in many cases it is the sole social and technical indicator that the wearer intends to record. If that indicator is removed, the basics of consent and privacy start to unravel.

Here are key issues:

  • The assumption of consent is undermined. People expect recording devices to show some visible sign, so removing that sign quietly shifts power toward the recorder.
  • It creates a chilling effect: if you suspect you are being recorded but cannot see an indicator, you may withdraw or self-censor.
  • It shifts the burden of detection onto the person being recorded rather than the person recording, which is a structural imbalance.
  • For organisations here in Australia the legal and reputational risk is significant if they fail to manage recording in private or semi-private spaces (staff, visitors, clients).

What the 404 Media story uncovered

The article makes three hard-to-ignore points:

  1. Meta designed the glasses so that covering the LED with tape would stop the recording function; the modification bypasses that protection, allowing the camera to keep recording while the light stays off.
  2. A hobbyist is charging about US $60 to disable the LED on Ray-Ban/Meta smart glasses.
  3. The modified device remains fully operational (camera still works, recording still works) but without the visible indicator. This means the person next to you may be recorded and you will have no obvious sign.

What you can do: a layered defence

No single tool will stop all recording risks. Instead I suggest a layered approach, combining awareness, policy, detection tools, physical protections and advocacy.

1. Awareness & policy

Organisations should proactively declare their position, e.g. “Recording without consent is prohibited” in sensitive areas. Train staff to ask directly whether someone is wearing smart glasses, and to request that the glasses be turned off in sensitive zones. Signage and behavioural norms create friction for surreptitious recorders.

2. Training & social cues

Encourage people to look for multiple cues, not just a small LED on a pair of glasses. Cultivate a culture where it is socially acceptable to ask: “Are you recording?” or “Can we disable cameras for this meeting?” That social expectation does as much work as any technical deterrent.

3. Detection tools & tech

In environments where covert recording is a credible threat (changing rooms, waiting areas, high-value meetings) deploy detectors: lens-reflectivity detectors, infrared-flash detectors or RF/network sniffers can help identify hidden cameras rather than relying on one tiny LED. Some firms specialise in “hidden-camera detection sweeps”.
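As a complement to dedicated detection hardware, the same allow-list discipline can be applied to a controlled room's network. A minimal sketch follows, assuming you can export the access point's client list (MAC addresses); the allow-list entries and the watched vendor prefix are illustrative placeholders, not real device identifiers.

```python
# Sketch: flag unrecognised devices on a controlled meeting-room network.
# Assumes you can obtain the list of connected MAC addresses (e.g. from
# the access point's admin interface). Allow-list and prefix values
# below are hypothetical examples for illustration only.

def flag_unknown_devices(seen_macs, allowlist, watch_prefixes=()):
    """Return (unknown, watched): MACs absent from the allow-list, and
    MACs matching a watched vendor prefix (OUI), normalised to lower case."""
    allow = {m.lower() for m in allowlist}
    unknown, watched = [], []
    for mac in (m.lower() for m in seen_macs):
        if mac not in allow:
            unknown.append(mac)
        if any(mac.startswith(p.lower()) for p in watch_prefixes):
            watched.append(mac)
    return unknown, watched

if __name__ == "__main__":
    seen = ["AA:BB:CC:11:22:33", "de:ad:be:ef:00:01"]   # from the AP's client list
    allow = ["aa:bb:cc:11:22:33"]                        # known laptops, room AV gear
    unknown, watched = flag_unknown_devices(seen, allow, watch_prefixes=["de:ad:be"])
    print(unknown)   # devices not on the allow-list
    print(watched)   # devices matching a watched vendor prefix
```

This kind of check will not catch a device recording to local storage with its radios off, which is why it belongs alongside, not instead of, physical sweeps and procedural controls.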

4. Physical & procedural controls

Control the environment: define no-wearables zones, require removal of smart glasses in sensitive areas, use anti-recording signage, check devices on entry if needed. A procedural layer creates significant friction and risk for someone trying to record covertly.

5. Advocacy for design & regulation

You can also move the needle by advocating for better product design (tamper-resistant indicators, firmware locks, mandatory visible recording lights) and for regulation that treats hidden-recording wearables more like CCTV in private spaces, with obligations around indicators, signage, consent and penalties.

Why opt for “anti-IR” or privacy-optics protection

As part of your personal protection toolkit, consider eyewear or lens coatings that disrupt certain biometric or infrared-based scanning systems. For example, some lenses reflect near-infrared light (commonly used by facial-recognition systems), helping to reduce your biometric footprint. Products such as Reflectacles Ghost and Phantom glasses or other anti-IR coatings (including Zenni’s ID Guard) are inexpensive, passive safeguards — but they don’t prevent visible-light recording. They’re one piece of a broader defence.

Everyday exposure

Most of these measures — detection tools, policies, signage and staff training — apply to workplaces, events, or controlled environments. But what about the rest of us simply walking down the street, sitting in a café, or browsing in a shop? At that level, there’s currently little that can be done to prevent or even detect if someone’s wearing a modified recording device.

And that’s where this issue takes a darker turn. Like many emerging technologies, smart glasses can easily become tools of technology-facilitated abuse, coercive control or stalking. The ability to record secretly — especially when paired with live-streaming or geolocation data — creates obvious risks for victim-survivors of domestic violence or anyone under surveillance by an abusive partner or ex-partner.

Ultimately, this isn’t just a question of privacy or ethics — it’s a Trust and Safety issue. If we can’t tell when or how we’re being recorded, the public trust that underpins everyday digital interactions starts to erode.

It raises a larger question: should there be clearer obligations on wearable manufacturers and stronger oversight around covert recording technologies? Australia’s eSafety Commissioner has made great strides in online protection and digital literacy — perhaps it’s time this conversation extended into the physical world too.

Final thoughts

The 404 Media investigation is a wake-up call: the assumption that a recording device will always show when it is recording no longer holds. For organisations and individuals alike, this confirms the Trust and Safety framing above; when the technology we wear can be altered to watch others without their consent, that erosion of public trust accelerates.

Beyond reputational or compliance risks, the human cost is significant. Tools designed for convenience or creativity can just as easily be weaponised for surveillance, stalking or coercive control. Recognising and addressing that potential must be part of every safety-by-design conversation.

That’s why a layered defence — combining awareness, social training, detection tools, procedural controls, and advocacy for safer design — is so important. No single measure is foolproof, but together they build resilience. By embedding Trust and Safety thinking into every layer, from product design to everyday policy, we can create environments where people feel secure, respected, and free to engage without fear of unseen recording.

[Disclaimer: All product references are used for the purpose of critique and commentary under fair dealing provisions.]

Read the Original on LinkedIn

Please like and share the original LinkedIn post to help it reach more members of our community.


About the Author

Kim Chandler McDonald

Kim Chandler McDonald is the Co-Founder and CEO of 3 Steps Data, driving data/digital governance solutions.

She is the Global VP of CyAN, an award-winning author, storyteller, and advocate for cybersecurity, digital sovereignty, compliance, governance, and end-user empowerment.