AI-enabled smart glasses, which combine eyewear with real-time audio, video, and AI capabilities, are entering the workplace. They can deliver productivity and accessibility benefits by helping users capture information, receive prompts, and interact with AI systems hands-free. At the same time, they introduce risks that many workplace policies and compliance frameworks were not designed to address. As adoption increases, employers should reassess whether existing rules adequately govern AI-enabled eyewear.
AI glasses raise recording concerns that are often more significant than employers initially assume. Unlike smartphones, they can operate hands-free and discreetly, record audio or video passively or continuously, connect automatically to cloud-based platforms, and capture or analyze data with little obvious user action. This creates enforcement challenges when the devices are indistinguishable from ordinary eyewear. The risks are heightened by state recording laws, including all-party-consent requirements in certain jurisdictions, and by the possibility that confidential internal meetings or customer-facing interactions may be captured without authorization.
Restrictions on AI glasses can also raise National Labor Relations Act (NLRA) issues, depending on the circumstances. Because these devices can record, photograph, or stream content, employees may argue that using them to document working conditions, such as safety concerns, harassment, discrimination, or union activity, constitutes protected concerted activity. Given current NLRB scrutiny of workplace rules, employers are better positioned when any recording restrictions are narrowly tailored and clearly tied to legitimate and substantial business interests such as trade secrets, confidentiality, customer privacy, or safety.
AI-enabled wearables may also create significant data security and privacy risks, including inadvertent capture of proprietary processes, trade secrets, internal communications, and customer, patient, or employee data. Because many devices connect to third-party AI platforms, information may be transmitted, stored, or processed outside the organization's direct control, raising questions about retention, secondary use, safeguards, and compliance. These concerns are especially acute in regulated industries such as healthcare and financial services, where the mere capture or transmission of protected information can trigger heightened obligations.
Finally, AI glasses may intersect with disability accommodation obligations. While ordinary prescription eyewear is not itself a disability under the ADA, AI-enabled features can function as assistive technology (for example, real-time transcription, object recognition, navigation assistance, and magnification), which may lead to accommodation requests that require a case-by-case interactive process. Employers generally may restrict or prohibit AI glasses when justified by safety, confidentiality, privacy, and compliance concerns, but should enforce restrictions consistently and be prepared to evaluate accommodation requests by considering medical necessity, essential job functions, available alternatives, and whether risks can be mitigated. Practically, employers should review device, recording, and confidentiality policies to address AI wearables, establish an internal framework for handling AI-glasses accommodation requests, train managers on identifying these devices and escalating concerns, and consult experienced counsel when needed.