A coalition of 75 civil rights and privacy groups led by the American Civil Liberties Union (ACLU) is urging Meta to drop any plan to bring facial recognition to its Ray-Ban and Oakley smart glasses. Their argument is straightforward and hard to ignore. Turning everyday eyewear into a tool that can identify strangers by name is a “red line” with real consequences in public life.
The timing matters, too. Smart glasses are moving from niche gadget to mainstream wearable, and the next set of features could reshape expectations around anonymity in public spaces. Once identification becomes “just another setting,” it stops being a tech debate and starts feeling like a daily-life rule nobody voted on.
What the coalition is demanding
The groups want Meta to publicly disavow facial recognition on its smart glasses, not simply “pause” it or limit it to a narrow set of scenarios. Their point is that this capability changes the product’s purpose. It is no longer just a camera you can see, but a scanner you might never notice.
The coalition argues that the risks cannot be solved with opt-ins, design tweaks, or incremental safeguards. That is a direct challenge to the typical Silicon Valley playbook of “ship first, patch later.” If you have ever felt uneasy realizing you are on camera in a store, imagine the same moment with instant identification layered on top.
Why smart glasses change the privacy math
Phones at least give people a fighting chance. You can usually see one in someone’s hand, and you can often guess when it is pointed at you. Glasses are different, because they sit at eye level and blend into normal behavior, and that is what makes notice and consent so messy.
That is also why regulators increasingly focus on “bystanders,” not just users. The Federal Trade Commission has warned that biometric tools can create new risks when they identify people in sensitive contexts, and those are situations where someone cannot realistically opt out.
In practical terms, that means the person wearing the device gets the power, while everyone around them absorbs the risk.
The Name Tag question and a feature Meta has not confirmed
Civil society groups say the biggest fear is a real-time “who is that” button in the real world. Reports have pointed to an internal feature dubbed “Name Tag,” described as a way to identify strangers and surface personal information through an AI assistant.
Meta has not publicly confirmed such a feature, but concern has grown because the idea is technically plausible and commercially tempting.
One reason the backlash is widening is that identification is not just about a name. Once a face can be matched to a profile, it can be linked to a wider data trail that includes where someone works, who they know, and what they post.
That broader “digital identity” layer is already shifting elsewhere in tech, and recent changes to digital ID systems show how quickly the definition of an “identifier” can evolve.
A market that is moving faster than rules
Meta is not pushing this conversation in a vacuum. Its eyewear partner has signaled that demand is scaling quickly, and that is the kind of momentum that can make companies treat privacy concerns like a speed bump instead of a stop sign.
According to EssilorLuxottica, AI glasses sales have already reached “more than seven million” units, which changes the stakes for any feature that affects people beyond the buyer.
Competition is also closing in. Kering Eyewear has announced a partnership with Google to develop smart glasses, and luxury brands have their own incentives to make wearables feel seamless and always available. That race can turn safety questions into an afterthought unless policymakers force the issue.
Meta’s history with face recognition is not reassuring to critics
Meta is not starting from a blank slate on biometric trust. In 2021, it said it would shut down Facebook’s facial-recognition system and delete the face templates of more than a billion people, acknowledging the depth of public concern around the technology.
That company update is now back in the spotlight because critics see the smart-glasses debate as the same issue in a new form.
There is also the technical reality that facial recognition is not equally accurate across contexts and populations.
That is not just a talking point, but something measured over time through programs like the Face Recognition Vendor Test at the National Institute of Standards and Technology. For most people, the takeaway is simple. Even small error rates look different when a tool is used at scale in the wild.
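To make the scale point concrete, here is a rough base-rate sketch. The numbers are purely illustrative assumptions, not figures from NIST, Meta, or any vendor; the point is only that a small per-lookup error rate still yields large absolute numbers of misidentifications when a system runs millions of lookups a day.

```python
# Illustrative base-rate arithmetic for face recognition at scale.
# Every number below is a hypothetical assumption, not a measured figure.

def expected_false_matches(lookups_per_day: int, false_match_rate: float) -> float:
    """Expected number of incorrect identifications per day."""
    return lookups_per_day * false_match_rate

# Suppose devices across a large city perform 5 million lookups a day,
# and the matcher's false-match rate is 0.1% (1 in 1,000).
daily_errors = expected_false_matches(5_000_000, 0.001)
print(f"{daily_errors:,.0f} misidentifications per day")  # 5,000 misidentifications per day
```

A matcher that sounds “99.9% accurate” in a spec sheet still, under these assumed volumes, misidentifies thousands of people every single day, and each of those people is a bystander who never opted in.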
What to watch next
Pressure is coming from lawmakers as well as advocacy groups. Senators Ed Markey, Ron Wyden, and Jeff Merkley have demanded details about consent, retention, and deletion, including what happens to the biometric data of nonusers who are captured in the glasses’ field of view.
The core question is the one most people would ask over coffee. How do you get meaningful consent from someone who never agreed to be part of the system?
The political and legal climate matters because identity checks are expanding in everyday settings, from airports to apps. Even outside social media, tools like paid identity verification are becoming normalized, and that creates a cultural drift toward “prove who you are” workflows.
It is easy to see how smart glasses could piggyback on that drift, especially as wearable tech becomes more common and AI grows more capable through systems like AI agents that can act on information, not just display it.
For now, the clearest takeaway is simple. Smart glasses are selling fast, and that makes the facial-recognition debate urgent rather than theoretical.
The official letter was published on the ACLU of Massachusetts website.