Meta Unveils Multimodal Capabilities for Ray-Ban Smart Glasses

Apr 26, 2024 | Technology & AI

Meta has introduced multimodal capabilities for its Ray-Ban smart glasses, enabling them to comprehend and describe their users’ surroundings. Previously limited to audio interactions, Meta’s AI assistant can now process visual data from the built-in camera, providing relevant insights hands-free.

Enhanced User Experience

With the latest AI capabilities, users can use the glasses to translate text, identify objects, and pull up contextual information hands-free. Wearers can also share what they are seeing during video calls on WhatsApp and Messenger. Meta says the multimodal AI upgrade is available as a beta feature to users in the US and Canada.

Collaboration with EssilorLuxottica

The second generation of the smart glasses, developed in collaboration with eyewear maker EssilorLuxottica, introduces new styles designed to fit a wider range of face shapes. The expanded Ray-Ban Meta smart glasses collection includes a hundred different custom frame and lens combinations on the Ray-Ban Remix platform, letting users personalize their glasses. The new styles are available in 15 countries, including the US, Canada, Australia, and markets across Europe.

Voice Control and Recognition

With Meta AI, users can control the glasses by voice, saying ‘Hey Meta’ followed by a prompt. The multimodal AI features were first tested in December 2023 and are now rolling out across the US and Canada.

Transforming Wearables

Bringing multimodal capabilities to smart glasses marks a significant advancement, turning wearables into powerful, context-aware assistants. The upgrade promises to enhance the user experience and redefine how people interact with their surroundings.

